Tom Sercu

Finite Element study of a nonlinear medium as a computational unit

Master's dissertation submitted in order to obtain the academic degree of
Master of Science in Engineering: Applied Physics

Supervisors: prof. dr. ir. Benjamin Schrauwen, prof. dr. ir. Joni Dambre
Counsellors: Ken Caluwaerts, dr. ir. Michiel Hermans, Juan Pablo Carbajal

Department of Electronics and Information Systems
Chair: prof. dr. ir. Jan Van Campenhout
Faculty of Engineering and Architecture
Academic year 2012-2013
Permission for usage
The author gives permission to make this master dissertation available for consultation
and to copy parts of this master dissertation for personal use.
In the case of any other use, the limitations of the copyright have to be respected,
in particular with regard to the obligation to state expressly the source when quoting
results from this master dissertation.
Tom Sercu, June 2013
Acknowledgements
In the first place I want to thank my supervisors for the discussions and valuable input.
Professor Dambre for her continuing support and guidance, Ken for the hacking and
computer technical magic, Michiel for his physics perspective, Juan Pablo for his exper-
tise in seemingly everything and his mentorship. I want to thank Professor Verhegghe
for his time and the insights about finite element analysis.
Secondly I want to thank my parents for their support, my friends Valentijn, Hannah,
Thomas and Ernest for the company and the enjoyable moments. Finally, a special
word of thanks goes to my fiancée Emma for her unconditional support, even when the
simulations didn’t work out.
Tom Sercu, June 2013
Finite Element study of a nonlinear medium as a computational unit

by Tom Sercu

Master's dissertation submitted in order to obtain the academic degree of
Master of Science in Engineering: Applied Physics

Academic year 2012-2013

Supervisors: prof. dr. ir. Benjamin Schrauwen, prof. dr. ir. Joni Dambre
Counsellors: Ken Caluwaerts, dr. ir. Michiel Hermans, Juan Pablo Carbajal

Department of Electronics and Information Systems
Chair: prof. dr. ir. Jan Van Campenhout
Faculty of Engineering and Architecture
Ghent University
Keywords: Finite element analysis, elastic waves, physical reservoir, memory capacity
Finite Element study of a nonlinear medium as a computational unit
Tom Sercu
Supervisors: prof. dr. ir. Benjamin Schrauwen, prof. dr. ir. Joni Dambre
Abstract— This thesis is framed within recent developments in embodied computation and physical implementations of reservoir computing. With CalculiX as FEA tool we perform an analysis of a slab of elastic material. The goal is to describe it as a physical reservoir. As input signal we use a force acting on the bottom of the slab; the displacements at the opposite side serve as readout. A first observation is that the elastic system maps frequencies to profiles of vibration amplitude. Certain eigenfrequencies serve as information carriers, as they induce high-amplitude responses and are more robust to noise. This leads us to propose frequency encoding as a method to encode a discrete-time data signal on the continuous-time elastic system, conserving the natural language of the elastic system. With classification experiments we explore the relation between the material properties and the encoding: stronger material damping and using high-response eigenfrequencies drastically improve the detection time. We use the reservoir computing concept of linear memory capacity to quantify the information processing capacity of the system, concluding that there is an optimal damping value where the system has interesting dynamics.
Keywords— Finite element analysis, elastic waves, physical reservoir, memory capacity
I. INTRODUCTION
RESERVOIR computing is a fairly novel technique for training recurrent neural networks (RNNs). The RNN is not
trained but is instead used as a black-box nonlinear dynamical system, a "reservoir". The feedback in the RNN causes it to have memory: an input spreads around in the network and oscillates for a certain time before fading out. These rich dynamics in the high-dimensional system (having many nodes) can be used for computation by attaching a linear readout layer that is trained to perform a certain task with the reservoir input.
Recent developments in reservoir computing [1] suggest that computation is a fundamental property of dynamical systems with fading memory. This theoretical insight was developed in interplay with some recently developed physical systems that act as reservoirs. The first implementation of a physical reservoir used water surface waves to perform a nonlinear XOR task [2]. The recent photonics implementations [3] are another promising example, where the nonlinearity of the photonic components is harnessed to construct a reservoir. Finally, mechanical systems too can be seen to perform computation, coined with the term morphological computation. An example of the implementation of a discrete mechanical reservoir is the tensegrity spring-mass system described in [4].
In this thesis we explore how a continuum elastic material can be described as a physical reservoir. We will first give a short overview of the finite element analysis using CalculiX, introduce our slab simulation setup and discuss the key insights into how the elastic system can be used to process information.
Reservoir Lab, Ghent University (UGent), Gent, Belgium. E-mail: [email protected]
II. FINITE ELEMENT ANALYSIS
The finite element method is the most popular numerical method in engineering and science for solving elastic deformation problems [5]. Elastic problems are defined by the geometry of the volume, the material properties, the boundary conditions and the external forces (the load). The method solves the stress-strain equations by discretizing the geometry into small volumes (elements) and solving for the displacements at the nodes where the elements connect. The displacement over the volume is then obtained by interpolating between the nodes. With the finite element method one can define time-dependent loading and solve the dynamic problem, where inertia is taken into account.
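As an illustration of the discretize-assemble-solve pipeline, the sketch below (a toy static 1D example, not the 2D slab model of this thesis) assembles the global stiffness matrix of a bar from linear two-node elements and solves for the nodal displacements; all names and parameter values are illustrative.

```python
import numpy as np

def solve_bar(n_el, L=1.0, EA=1.0, F=1.0):
    """Static 1D bar, fixed at x = 0, axial point load F at x = L.

    Assembles the global stiffness matrix from linear two-node
    elements and solves K u = f for the nodal displacements;
    between the nodes the displacement is interpolated linearly.
    """
    n_nodes = n_el + 1
    h = L / n_el                                  # element length
    k_el = (EA / h) * np.array([[1.0, -1.0],
                                [-1.0, 1.0]])     # element stiffness
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_el):                         # assemble element contributions
        K[e:e + 2, e:e + 2] += k_el
    f = np.zeros(n_nodes)
    f[-1] = F                                     # point load at the free end
    u = np.zeros(n_nodes)
    # boundary condition u(0) = 0: remove the first DOF and solve
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
    return u

u = solve_bar(10)
# for this load case linear elements reproduce the analytic
# solution u(x) = F x / EA exactly at the nodes
```

For this simple load case the nodal values are exact; the interesting questions for the slab model (mesh convergence, element choice) only appear in 2D and in dynamics.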
For our application, two key aspects the simulation needs to cover in order to behave in a physically realistic way are nonlinearity and damping. Nonlinearity means that scaling the input loading with a factor α will not cause a scaling of the displacements with this factor α. It is important from an RC point of view as it enables signals to influence each other and lets the medium act as a nonlinear reservoir. Damping in dynamic simulations is the second key aspect. In any physical system vibrations will damp out due to complex dissipative effects. From an RC point of view, damping is an essential condition to have fading memory.
The tool used for this elastic study was CalculiX [6]; the model definition was done using pyFormex. CalculiX is an open source finite element solver, offering the same input syntax and some of the functionality of the popular commercial package Abaqus. The two key aspects however proved to be problematic. Firstly, material damping is not yet implemented for direct integration dynamics in CalculiX 2.5. We therefore implemented material damping with an ad-hoc solution. We describe the damping with the damping time τd, the timescale of the exponential decay of vibrations: a large τd means low damping, a small τd means a highly damped system. Secondly, nonlinearity was abandoned due to various technical difficulties. More fundamentally, we assessed that any form of direct nonlinear dynamic calculation is not feasible over a long (reservoir) timescale. Either an approximate nonlinear series expansion or massive parallelization of the simulation might be a solution for this.
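The damping time τd can be estimated from a simulated response by fitting the exponential envelope of the vibration. A minimal sketch, assuming the displacement of one node is available as a sampled array (the function name and test signal are illustrative):

```python
import numpy as np

def damping_time(t, y):
    """Estimate the damping time tau_d of a decaying vibration.

    Assumes y(t) ~ exp(-t / tau_d) * sin(...), so the local maxima
    of |y| trace the exponential envelope; a linear fit of
    log|envelope| against t then gives slope -1 / tau_d.
    """
    a = np.abs(y)
    # local maxima of |y| approximate the envelope samples
    peaks = (a[1:-1] > a[:-2]) & (a[1:-1] > a[2:])
    idx = np.where(peaks)[0] + 1
    slope, _ = np.polyfit(t[idx], np.log(a[idx]), 1)
    return -1.0 / slope

# synthetic damped 100 Hz vibration with tau_d = 0.25 s
t = np.linspace(0.0, 1.0, 20000)
y = np.exp(-t / 0.25) * np.sin(2 * np.pi * 100 * t)
td = damping_time(t, y)   # recovered damping time, close to 0.25 s
```

On clean synthetic data the log-linear fit recovers τd to well within a percent; on noisy FEM output a more robust envelope estimate (e.g. the analytic-signal magnitude) may be preferable.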
In this thesis we decided to describe our elastic system as an LTI system. This means only one impulse response needs to be determined by FEM simulation. Further simulations can be substituted by a convolution of the input signal with the impulse responses, which can be done extremely efficiently by the FFT method. We introduce the transfer function H(f, x) to visualize the frequency response amplitude as a function of the frequency and the position (node) on the top side.
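The substitution of further FEM runs by FFT-based convolution can be sketched as follows, with a hypothetical damped-sinusoid impulse response standing in for the simulated one:

```python
import numpy as np

dt = 1e-4                                # simulation increment (s)
t = np.arange(0.0, 1.0, dt)

# hypothetical impulse response of one readout node: damped 300 Hz mode
h = np.exp(-t / 0.1) * np.sin(2 * np.pi * 300 * t)

u = np.sin(2 * np.pi * 300 * t)          # input signal, driving at 300 Hz

# linear convolution via FFT: zero-pad to 2N to avoid wrap-around,
# multiply spectra, transform back; dt approximates the time integral
n = 2 * len(t)
y = np.fft.irfft(np.fft.rfft(u, n) * np.fft.rfft(h, n), n)[:len(t)] * dt

# transfer function of this node: FFT of the impulse response
H = np.fft.rfft(h) * dt
freqs = np.fft.rfftfreq(len(t), dt)
```

For the damped mode above, |H| peaks at the 300 Hz resonance; repeating this per node gives the H(f, x) used below.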
III. EXPERIMENTS
A. Slab setup
Fig. 1. Sketch of the slab setup: the input force F(t) = F_max u(t) acts on the bottom side; the top-side y-displacements y(x, t) = Δy_top(x, t) serve as readout.
The setup we studied in this thesis is a very simple one: a 2D rectangular slab is loaded at the bottom side with a time-varying force that acts in the plane, see figure 1. The y-displacements at the upper side (noted as y(x, t)) are used as readout. The force is a scaled version of the input signal u(t), which is a continuous-time signal.
One should picture the effect of u(t) as generating elastic longitudinal waves, forming complex stress and displacement patterns. The state of the system is only known through the limited observation of the displacements on the top side. We introduce the root mean square (RMS) profile y_RMS(x) as an important measure for readout.
$y_{\mathrm{RMS}}(x) = \left( \frac{1}{\Delta t} \int_{t_0}^{t_0+\Delta t} \left( y(x,t) \right)^2 \, dt \right)^{1/2}$    (1)
The integration typically runs over 10 ms or 20 ms, to capture at least one fluctuation of a slow 100 Hz wave. y_RMS(x) is always positive and can be seen as the instantaneous amplitude of vibration of each node.
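Equation (1) translates directly into code; a sketch, assuming the top-side displacements are sampled on a uniform time grid (names illustrative):

```python
import numpy as np

def rms_profile(y, dt, t0, window):
    """RMS profile y_RMS(x) of eq. (1), from sampled displacements.

    y: array of shape (n_steps, n_nodes) with the top-side
    y-displacements; the time integral over [t0, t0 + window] is
    approximated by the mean over the samples in that window.
    """
    i0 = int(round(t0 / dt))
    i1 = int(round((t0 + window) / dt))
    return np.sqrt(np.mean(y[i0:i1] ** 2, axis=0))

# sanity check: a pure sine of amplitude A has RMS amplitude A / sqrt(2)
dt = 1e-5
t = np.arange(0.0, 0.1, dt)
y = 0.1 * np.sin(2 * np.pi * 100 * t)[:, None]   # one node, 100 Hz
prof = rms_profile(y, dt, t0=0.06, window=0.02)
```

The 20 ms window covers exactly two periods of the 100 Hz test signal, so the sampled mean matches the continuous integral.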
B. Steady state frequencies
A first experiment was the calculation of the eigenmodes and eigenfrequencies, and an analysis of the transfer function. These are the vibrations that can occur "for free" in the undamped system, without external force. With each eigenfrequency, a specific shape of vibration is connected. Higher frequencies have a shorter wavelength, thus corresponding to an RMS profile with more nodes and anti-nodes.
Secondly, we performed full dynamic damped simulations where u(t) = sin(2πft) and f takes values in a range of different frequencies. Figure 2 shows examples of RMS profiles for these different frequencies. These RMS plots could equivalently be obtained from the transfer function modulus |H(f_c, x)| for a fixed frequency. This is a first important observation: the elastic system maps frequencies to profiles of vibrational amplitude.
A second observation for steady state frequencies is based on H_av(f), containing for each frequency the geometric mean of |H(f, x)| over the nodes. This average transfer function reveals that some eigenfrequencies (not all of them) have a much higher average response amplitude. We will use these later as information-carrying frequencies.

Fig. 2. RMS profiles y_RMS(x) (mm) over position x (mm), for fixed frequencies of 155, 252, 296 and 355 Hz.
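Computing H_av(f) as the geometric mean of |H(f, x)| over the nodes can be sketched as follows (the array shapes are assumptions):

```python
import numpy as np

def average_transfer_function(H):
    """Geometric mean of |H(f, x)| over the readout nodes.

    H: complex array of shape (n_freqs, n_nodes); returns H_av(f)
    of shape (n_freqs,). The geometric mean is the natural average
    for amplitudes spanning orders of magnitude, and it is less
    dominated by a few very large node responses than an
    arithmetic mean would be.
    """
    return np.exp(np.mean(np.log(np.abs(H)), axis=1))

# tiny check with two frequencies and two nodes
H = np.array([[1.0 + 0j, 4.0 + 0j],
              [10.0 + 0j, 1000.0 + 0j]])
Hav = average_transfer_function(H)   # [2.0, 100.0]
```

Note the geometric mean goes to zero if any node sits exactly on a vibration node with |H| = 0, so in practice a small floor on |H| may be needed.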
C. Direct vs frequency encoding
In order to describe the system as a reservoir, we want to encode discrete-time signals on the system. A first idea might be to encode a discrete data signal directly to the force input u(t). This idea has serious flaws however. Firstly, an artificial sampling time has to be chosen. Choosing the sampling time large would mean that all transients die out and a static, uninteresting profile would be detected. Choosing the sampling time small means a highly varying u(t), which indeed causes vibrations and an interesting y(x, t) profile. However, this profile is induced by the transients caused by the discrete transitions, rather than by the actual values of the data signal. The original data signal in u(t) can therefore be reconstructed only poorly.
In this thesis we propose an alternative way of encoding a discrete signal on the elastic slab system: frequency encoding. This means we encode the discrete data signal f(n) on the frequency: f(t) = f(n) for nT_h < t < (n+1)T_h, where T_h is the hold time of the frequency. This is encoded on the slab input as

$u(t) = \sin\left( 2\pi \int_0^t f(\tau)\, d\tau \right).$

As readout we take the RMS profile over an integration window Δt_I = 0.02 s right before the frequency jump.
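A sketch of the frequency encoding, where integrating the piecewise-constant instantaneous frequency keeps the phase continuous across jumps (function name and parameters are illustrative):

```python
import numpy as np

def frequency_encode(f_n, T_h, dt):
    """Build u(t) = sin(2*pi * integral_0^t f(tau) dtau).

    f_n: discrete data signal, one frequency value per symbol;
    each value is held for T_h seconds. Integrating the
    instantaneous frequency (cumulative sum) keeps the phase
    continuous at the frequency jumps, so the jumps themselves
    inject no spurious transients.
    """
    steps = int(round(T_h / dt))
    f_t = np.repeat(np.asarray(f_n, float), steps)   # f(t), piecewise constant
    phase = 2 * np.pi * np.cumsum(f_t) * dt          # running integral of f
    return np.sin(phase)

u = frequency_encode([100.0, 300.0, 200.0], T_h=0.05, dt=1e-5)
```

Because the phase is continuous, the largest sample-to-sample step of u stays bounded by roughly 2π·f_max·dt, unlike a direct amplitude encoding with discontinuous jumps.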
The next subsection discusses the experiments we performed to estimate the properties of the slab system as a reservoir.
D. Detection time
The first experiment quantifies the dynamics of the slab system. We sampled the data signal f(n) from a discrete set of frequencies and trained a set of binary linear classifiers with "winner takes all" to detect the frequency based on the RMS profile. We then investigated the detection time T_d, the time it takes to classify a new frequency after the switch, as a function of the signal-to-noise ratio η, the damping and the choice of the discrete set of frequencies.
The first observation is that the detection times are affected negatively (longer T_d) by noise, but saturate at a point dependent on the frequency (typically η = 1/5 for high-throughput eigenfrequencies). The second observation is the influence of the set of detection frequencies: by choosing the frequencies as high-throughput frequencies (peak frequencies) the amount of noise that can be endured is much higher. The saturation T_d for the best eigenfrequencies is reached for signal-to-noise ratio η ≈ 1/10, while for other frequencies saturation is only reached at η = 1.
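The winner-takes-all readout can be sketched with one-hot ridge regression, a common choice for linear readouts (the thesis does not specify the exact training algorithm; the toy data below are illustrative):

```python
import numpy as np

def train_readout(profiles, labels, n_classes, reg=1e-6):
    """One linear regressor per class on the RMS profiles, trained
    against one-hot targets; prediction is winner-takes-all."""
    X = np.hstack([profiles, np.ones((len(profiles), 1))])   # bias term
    Y = np.eye(n_classes)[labels]                            # one-hot targets
    # ridge-regularized least squares: W = (X'X + reg I)^-1 X'Y
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
    return W

def classify(W, profiles):
    X = np.hstack([profiles, np.ones((len(profiles), 1))])
    return np.argmax(X @ W, axis=1)                          # winner takes all

# toy data: two well-separated clusters standing in for RMS profiles
rng = np.random.default_rng(0)
profiles = np.vstack([rng.normal(0.0, 0.1, (50, 8)),
                      rng.normal(1.0, 0.1, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
W = train_readout(profiles, labels, 2)
pred = classify(W, profiles)
```

With separable clusters this readout is exact on the training data; in the actual experiment the interesting quantity is how soon after a frequency switch the argmax settles on the new class.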
Fig. 3. Detection time T_d (s) as a function of damping time τ_d (s): average T_d and curves for f = 98.0 Hz, 298.8 Hz and 489.5 Hz. High damping is small τ_d.
Finally, the damping plays a crucial role. Figure 3 shows the detection times as a function of material damping, where the frequencies are sampled from the set of peak frequencies. Here we see that for the highest damping, the average detection time is reduced to 20.6 ms. This is only about two periods of the lowest frequency (98 Hz) and about two times the roundtrip time for a wave to travel back and forth in the slab.
E. Slab as a reservoir
The second experiment uses the memory function and memory capacity concepts [7]. The memory function m(k) is the correlation between the time-delayed signal f(n−k) and the optimal linear regression estimator based on the RMS profile. The memory capacity MC = Σ_{k=0}^∞ m(k) is a measure for the total short-term linear memory in the system, and thus for the richness of the system. For this experiment we encode a sequence of frequencies f(n), i.i.d. sampled from a uniform distribution on the interval [100, 600] Hz.
Figure 4 shows the memory function for different damping values. We see that for high damping, the memory function has a higher initial peak, to almost 1.0. This means that with linear regression on the RMS profile y_RMS(n), we can almost perfectly estimate the input frequency f(n). On the other hand, higher damping means the memory function has a shorter extent, meaning that the system has shorter memory.
The total memory capacity sums over all delays k and shows a peak for intermediate damping τ_d = 0.241 s. At this point the system has the richest dynamics. The total memory capacity is still very low, which might be caused by the RMS integration.
Fig. 4. Memory function m(k) over delay k, for τ_d = 0.029, 0.058, 0.118, 0.241, 0.491 and 1.000 s. High damping is small τ_d.
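The memory function computation can be sketched as follows; the toy state below, holding the current and previous input, has exactly two steps of linear memory (all names illustrative):

```python
import numpy as np

def memory_function(states, signal, k_max, reg=1e-8):
    """m(k): squared correlation between the delayed input f(n-k)
    and its optimal linear estimate from the instantaneous state
    (here standing in for the RMS profile).

    states: (n_samples, n_features), signal: (n_samples,).
    """
    X = np.hstack([states, np.ones((len(states), 1))])   # add a bias column
    m = np.zeros(k_max + 1)
    for k in range(k_max + 1):
        Xk, yk = X[k:], signal[:len(signal) - k]         # pair state n with f(n-k)
        w = np.linalg.solve(Xk.T @ Xk + reg * np.eye(Xk.shape[1]),
                            Xk.T @ yk)                   # ridge regression
        m[k] = np.corrcoef(Xk @ w, yk)[0, 1] ** 2
    return m

# toy check: a state holding [f(n), f(n-1)] remembers exactly two steps
rng = np.random.default_rng(1)
f = rng.uniform(100.0, 600.0, 2000)
states = np.stack([f[1:], f[:-1]], axis=1)
m = memory_function(states, f[1:], k_max=4)
MC = m.sum()   # close to 2
```

Here m(0) and m(1) are essentially 1 and m(k ≥ 2) is essentially 0, so MC ≈ 2, matching the two independent pieces of input history stored in the state.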
IV. DISCUSSION
We propose a method to use a linear elastic system, with a signal applied as a time-varying force, as a discrete reservoir. This is done by encoding a data signal in the frequency of the signal, and reading out the amplitude of the response vibrations on the opposite side. The reason to opt for this encoding is to use the natural language of the system, elastic waves, for information processing.
We briefly investigated the properties of the system with this encoding as a reservoir, and obtained a plausible relation between the damping and the memory function. The memory capacity is low, but for zero delay linear regression can reconstruct the input frequency almost perfectly.
In future research, the first interesting path to explore is the relation between the different parameters and the memory capacity as a measure for the richness of the system. Notably, decreasing the hold time T_h and increasing the driving frequencies and the FEM mesh density are expected to improve the memory capacity. Also varying the readout to a number of discrete time points per sample might lead to strong improvements.
This work was exploratory, and many extensions are possible. One possibility is to expand to nonlinear materials, when it is computationally feasible. Also the extension to different geometries and different boundary conditions is an interesting path to explore.
REFERENCES
[1] Joni Dambre, David Verstraeten, Benjamin Schrauwen, and Serge Massar, "Information processing capacity of dynamical systems," Sci. Rep., vol. 2, p. 514, Jan. 2012.
[2] Chrisantha Fernando and Sampsa Sojakka, "Pattern recognition in a bucket," Advances in Artificial Life, 2003.
[3] Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, and S. Massar, "Optoelectronic reservoir computing," Sci. Rep., vol. 2, p. 287, Jan. 2012.
[4] Ken Caluwaerts, Michiel D'Haene, David Verstraeten, and Benjamin Schrauwen, "Locomotion without a brain: physical reservoir computing in tensegrity structures," Artificial Life, vol. 19, no. 1, 2013.
[5] K. J. Bathe, Finite Element Procedures, Prentice Hall, 1996.
[6] Guido Dhondt, CalculiX CrunchiX User's Manual version 2.5, 2012.
[7] H. Jaeger, "Short term memory in echo state networks," Tech. Rep., 2002.
Contents
Acknowledgements iv
Overview vi
Extended Abstract vii
Table of Contents x
Symbols xii
1 Introduction 1
2 Finite element analysis of elastic wave problems 5
2.1 Finite Element Method for solid mechanics . . . . . . . . . . . . . . . . . 5
2.1.1 Introduction to finite element formulation of linear elastic problems 6
2.1.2 Element types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.3 Nonlinearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 Time-dependent FEA of elastic wave problems . . . . . . . . . . . . . . . 15
2.2.1 Dynamic equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2.2 Direct integration of the dynamic equations . . . . . . . . . . . . . 17
2.2.3 Eigenmodes and modal analysis . . . . . . . . . . . . . . . . . . . . 20
2.3 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.3.1 pyFormex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.3.2 CalculiX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3 Simulation details 26
3.1 Simulation setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.1.1 Slab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.1.2 Input and output . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.1.3 Loading profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.4 Infeasibility of contact simulations . . . . . . . . . . . . . . . . . . 32
3.1.5 Units and materials . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.1.6 Input signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2 Workflow of python analysis framework . . . . . . . . . . . . . . . . . . . 36
3.3 Implementation of material damping . . . . . . . . . . . . . . . . . . . . . 37
3.3.1 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.3.2 Damping time constant of impulse response . . . . . . . . . . . . . 38
3.3.3 Damping experiments . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.4 Convergence experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.4.1 Convergence experiments and measures . . . . . . . . . . . . . . . 43
3.4.2 Mean error in function of mesh size . . . . . . . . . . . . . . . . . . 46
3.4.3 Time evolution of error . . . . . . . . . . . . . . . . . . . . . . . . 48
4 Slab experiments 50
4.1 Eigenmode analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.2 Steady state profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.2.1 Analyzing steady state . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.2.2 Frequency dependence . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.2.3 Influence of damping . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.2.4 Influence of source position . . . . . . . . . . . . . . . . . . . . . . 57
4.3 Attempt at geometrical nonlinearity . . . . . . . . . . . . . . . . . . . . . 58
4.4 Validation of linearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.4.1 Additivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.4.2 Transfer function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.4.3 Convolution approximation . . . . . . . . . . . . . . . . . . . . . . 64
5 Computational properties of the linear system 66
5.1 Memory capacity of elastic LTI system . . . . . . . . . . . . . . . . . . . . 67
5.1.1 Memory capacity introduction . . . . . . . . . . . . . . . . . . . . 67
5.1.2 Memory function plots . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.1.3 Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.2 Frequency encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.3 Spectral sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.3.1 Average transfer function . . . . . . . . . . . . . . . . . . . . . . . 73
5.3.2 Nodal sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.3.3 Spectral sensitivity colorplot . . . . . . . . . . . . . . . . . . . . . 75
5.4 Dynamical classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.4.1 Classification and detection time . . . . . . . . . . . . . . . . . . . 77
5.4.2 Influence of noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.4.3 Eigenfrequencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.4.4 Influence of damping . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.5 Memory capacity for frequency encoded signals . . . . . . . . . . . . . . . 83
6 Conclusion 86
A Memory function of LTI systems 88
Bibliography 93
List of Figures 96
List of Tables 99
Symbols
dt Simulation increment size s
L Length of the side of the slab mm
w Half width of the loading area mm
β Velocity damping parameter [-]
τd Material damping time constant s
X Node displacements vector mm
∆xi, ∆yi, ∆zi Node displacements of node i mm
u(t) Input signal in function of time [-] or µN
U(ω) Input signal spectrum µNs
h(t) Impulse response mm (µN s)−1
H(ω,xR,xS,Ω) Transfer function mm µN−1
Th Hold time s
Td Detection time s
m(k) Memory function [-]
MC Memory capacity [-]
Chapter 1
Introduction
This thesis is a finite element study of how a nonlinear elastic material can be used
to perform computation. The nonlinear elastic solid material will define the physical
language: information propagates through elastic waves. The notion of computation is
defined in terms of two key properties: memory and nonlinearity.
The aim is to first investigate the possibilities of finite element analysis as a tool for
modeling a nonlinear elastic system. Secondly, an elastic system is defined and its
physical properties determined. Finally, we investigate in what way the system can
do computation, through the interpretation of the dynamical system within the
Reservoir Computing paradigm.
Reservoir computing
Reservoir computing is a fairly novel technique in training recurrent neural networks
(RNN), introduced independently in different works [1, 2, 3] and experimentally com-
bined [4]. Recurrent neural networks are a powerful tool in machine learning, offering
the ability to model highly nonlinear dynamical systems and to solve complex tasks in
science and engineering. However, training of recurrent neural networks is problematic
and this largely hinders the use of RNNs in practical applications. The reservoir ap-
proach has been introduced as an alternative, in which the recurrent neural network is
not trained. Rather it is used as a black-box nonlinear dynamical system, the “reser-
voir”, combined with a linear readout layer. This gives the advantage of using the rich
nonlinear high-dimensional dynamics combined with the ease of training a linear read-
out on the instantaneous system state. Applications have proven successful in noise
modeling of chaotic systems [5], speech recognition [6], timeseries prediction, . . .
In the reservoir computing context, the short term memory capacity, or linear memory
capacity, was proposed as a measure to quantify a dynamical system’s ability to recon-
struct past inputs from the instantaneous output [7]. The total linear memory capacity
is upper bounded by the number of internal states in the network. This upper bound can
be reached for linear systems with internal states that are linearly independent, thus for
which the covariance matrix of the state evolutions is full rank. The memory capacity
notion has been formulated for linear networks in discrete time [8] and in continuous
time [9]. Recently, the linear memory capacity concept has been extended to a more
general notion of information processing capacity of dynamical systems [10]. The results
suggest that information processing is a general property of any dynamical system with
fading memory. This result uncovers a fundamental trade-off between linear memory
and nonlinear computation. This context allows uncommon dynamical systems to be
used for computation. The following section describes some of these systems that serve
as framework for this thesis.
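The upper bound can be illustrated numerically with a toy linear network (a hypothetical example, not the elastic system studied here): driving N = 20 states with i.i.d. input and summing the estimated memory function stays below, and for a well-conditioned network close to, the bound N.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 20000

# linear "reservoir": orthogonal recurrent weights, spectral radius 0.9
W = 0.9 * np.linalg.qr(rng.normal(size=(N, N)))[0]
w_in = rng.normal(size=N)

u = rng.uniform(-1, 1, T)                 # i.i.d. input signal
x = np.zeros((T, N))
for n in range(1, T):
    x[n] = W @ x[n - 1] + w_in * u[n]     # linear state update

washout = 100                             # discard the initial transient
X = x[washout:]
MC = 0.0
for k in range(2 * N):                    # memory capacity = sum of m(k)
    Xk, yk = X[k:], u[washout:T - k]      # pair state n with input u(n-k)
    w = np.linalg.lstsq(Xk, yk, rcond=None)[0]
    MC += np.corrcoef(Xk @ w, yk)[0, 1] ** 2
# MC cannot exceed the number of internal states, here N = 20
```

The orthogonal recurrent matrix keeps the state covariance well conditioned, so the empirical MC comes out close to the theoretical bound; badly conditioned networks fall well short of it.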
Computing with physical systems
Embodied computation
Computation in physical systems has many different aspects and implementations. One
possible point of view is the context of compliant robotics, where the traditional stiff
body parts and joints are replaced by elastic, compliant materials. These (often bio-
inspired) compliant structures give rise to complex nonlinear dynamics. The embodied
computation point of view tries to employ these complex dynamics to simplify the sensor
readout and control of the robot [11, 12]. By using the morphology of a robot’s body
part, the transition from analog to digital (sensors) and back to analog (actuators) can be
avoided or simplified.
This framework inspired the original starting point of this thesis. A recent paper [13]
described the use of a finger-like tactile sensor, the BioTac, to classify a large dataset
of surface textures. This classification was based on the high-frequency vibrations
(100-400 Hz) generated by stroking movements, supposedly the
same mechanism as in human fingertips [14, 15, 16, 17]. It has also been suggested [18]
that fingerprints play an important role in generating and amplifying these vibrations.
This is an interesting example of embodied computation in a continuum elastic system:
the morphology of the fingertip is optimized for the task it performs. Some specific
research questions are: first, how does the morphology of the fingertip enhance the
vibrations? Secondly, which measures can be proposed to quantify the role of the
material properties in the classification process? Thirdly, can material properties like
density and shape be optimized for better classification? This example system also
introduces vibrations, or elastic waves, as the natural language for conveying
information in a continuous elastic system. An assessment of the feasibility of a finite
element simulation and study of this kind of system is given in section 3.1.4.
Physical reservoirs
Reservoirs were first described as RNNs, but have been implemented in various physical
systems. The first implementation of a physical reservoir used water surface waves to
perform preprocessing [19]. Promising examples are the photonic implementations
described in [20, 21, 22], where the nonlinearities of the photonic components are harnessed
to obtain the nonlinear dynamics of the reservoir. Also, initial steps have been made
to use reaction diffusion systems to perform computation [10, 23]. A recent example of
using a mechanical system as a reservoir is the tensegrity spring-mass system described
in [24, 25].
The advantage of these physical implementations, foremost the photonic implementation,
is that they are inherently parallel and can therefore perform the kind of parallel
processing done in a reservoir very fast. Physical implementations of reservoirs
are mainly applicable in situations where a certain error can be tolerated and this
inexactness is outweighed by gains in robustness or speed by avoiding conversions from
analog to digital and back.
Elastic reservoir
This thesis operates on the intersection of physical reservoir computing and embodied
computation. The goal we defined in this research is to investigate whether an elastic
continuous system can operate as a reservoir. Secondly, the question can be posed whether
the morphology can be optimized to change the reservoir properties. The natural way
this system contains and processes information is through elastic waves, or vibrations.
A possible application of elastic reservoirs is in robot sensing: by embedding sensors in
an optimized compliant elastic medium, the readout may be simplified. By using the
nonlinear dynamics, the role of the medium can even be extended to preprocessing and
performing computation with the input history. The advantage of this approach is that a
part of the processing of mechanical signals can be done in the mechanical system itself.
This can give rise to the use of simpler sensors when the compliant medium already
performs a part of the computation.
An important difference between elastic wave systems and other reservoir implementa-
tions is the absence of discrete states: elastic wave problems are posed in continuous
time and the state is described with continuous fields of stress and strain. Therefore the
system has infinitely many degrees of freedom, which are artificially reduced to a finite
number of degrees of freedom by discretization with the finite element mesh.
Thesis structure
A large part of the work done during this thesis was the exploration of the finite element
method as a tool for modeling waves in a nonlinear elastic material. Chapter 2 describes
the finite element analysis of elastic wave problems. In section 2.1 we introduce the
mathematical formulation for static (time-independent) problems. Section 2.2 expands
this to dynamic (time-dependent) problems. In 2.3 we introduce pyFormex and CalculiX,
the tools used in this study.
Chapter 3 describes the technicalities of the finite element simulations. In section 3.1
our experimental setup, the slab, is introduced. Section 3.2 describes the scripts and
the workflow of the framework. In 3.3 we explain our damping implementation and link
it to conventional damping models in finite element analysis. Finally, 3.4 discusses the
convergence of the calculations as a function of the mesh size.
In chapter 4 we explore the properties of the slab as a dynamical system. We discuss its
eigenmodes and eigenfrequencies in 4.1. We extend this idea to steady state simulations
in 4.2, where we analyze what patterns are generated by a pure sinusoidal driving force.
In 4.3 we explore the possibility of driving the elastic system into the geometrically nonlinear regime. Finally, we describe the elastic system in the low-force regime as a linear dynamical system in 4.4.
With the linear description, chapter 5 explores how the slab system can be used for
computation. In 5.1, the memory function is discussed as a measure for quantifying
the fading memory of the damped system. Instead of direct encoding, we propose an
alternative approach of encoding a signal in elastic systems in 5.2: frequency encoding.
In section 5.3 we examine how the slab system processes constant frequency inputs with
the transfer function description. In 5.4 we train a frequency classification task on the
slab system for different signal-to-noise ratios, different damping values and for different
sets of frequencies. In 5.5 we use the slab system with frequency encoding as a reservoir
and compute the memory function and memory capacity. This provides a rudimentary
insight into the role of the different parameters for information processing in the slab
system.
Chapter 2
Finite element analysis of elastic wave problems
In this chapter we will discuss the simulation method that was used during this thesis.
Section 2.1 gives an introduction to the use of the finite element method for mechanical
problems, and section 2.2 discusses the more specific situation for elastic wave problems.
In section 2.3 we discuss the open source tools that were used to solve the complex
problems at hand: pyFormex to generate the geometric model and CalculiX for the
numerical integration.
2.1 Finite Element Method for solid mechanics
The Finite Element Method is the most widespread numerical method to solve a large
class of partial differential equations and integral equations. These mathematical models
come from a multitude of application domains including solid mechanics, electromag-
netism, heat transfer and fluid dynamics. In this work we will restrict ourselves to solid
mechanics problems. The equations and the numerical solution method are introduced
for the linear elastic problem in subsection 2.1.1. In 2.1.2 we discuss the types of elements
with their most important aspects. Subsection 2.1.3 gives an introduction to nonlinear-
ity in solid mechanics. In the next section 2.2 we will introduce the time-dependent
equations and solution techniques.
Most of the discussion presented here will be based on the books by Bathe [26] and
Zienkiewicz [27], which are the standard references in the field.
2.1.1 Introduction to finite element formulation of linear elastic problems
Figure 2.1: FEM mesh in 2D. Colors indicate different material properties.
Source: http://en.wikipedia.org/wiki/Finite_element_method
In this subsection we will introduce the finite element method as intuitively as possible.
We introduce the method for analysis of static linear elastic problems, a very common
type of analysis and often the basis for more complex models. Although the details of
the finite element method can be extremely delicate, the basic idea is very simple and
intuitive. The geometry of the system is divided into a mesh of discrete elements, with
the elements connected at the nodes. A two-dimensional example is shown in figure 2.1.
Integration of the equations over the elements gives (in the static case) a set of algebraic
equations for the displacements at the nodes.
The analysis of complex physical systems typically happens in four stages:
1. Idealize the system to a set of partial differential equations with boundary conditions.
2. Solve the equations with a numerical technique.
3. Analyze the accuracy of the numerical solution.
4. Interpret the results.
We will briefly introduce the first three stages in the following paragraphs.
Mathematical model
The first stage is the mathematical formulation of linear elastic systems. We consider
an elastic body with volume V and boundary S. The body is subject to two types of
forces: all volumetric forces (like gravity) are bundled in the vector fV and have units
of force per volume. The surface forces (from external loading, for example hydrostatic
pressure) are given by the vector fS. Solving the problem means finding three quantities
at each point x of the body: the displacements u¹, the strains ε, and the stresses σ.
The strain vector ε has six components, which are defined as different derivatives of the displacement vector u = (ux, uy, uz). σ is the stress vector, containing the six unique
components of the Cauchy stress tensor. The stress σ is assumed to be related to the
strain by the constitutive law
σ = Cε (2.1)
This constitutive law contains the material properties like the elastic modulus and Pois-
son’s ratio. Figure 2.2 shows two small 2D volumes subjected to either pure normal
stresses or shear stresses.
Figure 2.2: Cauchy stress and strain of a 2D square volume. The left side shows the normal stresses and normal strains, perpendicular to the volume's face. At the right side the volume is deformed by pure shear strain and stresses, parallel to the volume's faces.
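As a concrete illustration of the constitutive law (2.1), the sketch below builds the isotropic plane-stress version of the C matrix from an elastic modulus E and Poisson's ratio ν; the material values are hypothetical example numbers, not parameters from this thesis.

```python
# Plane-stress constitutive matrix C in sigma = C * epsilon for an
# isotropic material, in Voigt notation: strains [eps_x, eps_y, gamma_xy]
# map to stresses [sig_x, sig_y, tau_xy]. A 2D sketch; the full 3D C used
# for solid elements is 6x6.

def plane_stress_C(E, nu):
    f = E / (1.0 - nu * nu)
    return [[f,      f * nu, 0.0],
            [f * nu, f,      0.0],
            [0.0,    0.0,    f * (1.0 - nu) / 2.0]]

def apply_C(C, eps):
    """sigma = C * eps for a 3-component strain vector."""
    return [sum(C[i][j] * eps[j] for j in range(3)) for i in range(3)]

C = plane_stress_C(E=210e9, nu=0.3)     # steel-like example values
sigma = apply_C(C, [1e-4, 0.0, 0.0])    # pure normal strain in x
```

Because the off-diagonal ν-terms couple the normal directions, stretching in x while the strain in y is held at zero produces a transverse stress σ_y = ν·σ_x.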
We immediately introduce the potential energy form which is most convenient for further
manipulations. A continuum elastic problem is solved by finding the absolute minimum
¹ A note on notation: only in this chapter we will use u as the notation for the displacement vector, to be consistent with the standard textbooks. In later chapters, u will be used exclusively to indicate the input signal that is applied as a load to the system; see section 3.1.1.
of the potential energy functional
Π = (1/2) ∫V εT σ dV − ∫V uT fV dV − ∫S uT fS dS    (2.2)

The first term is the elastic potential energy; the last two terms form the potential of the external loads.
Here the first term of the external loads contains the volume forces fV like gravity, the electromagnetic force, and the centrifugal and Coriolis apparent forces. The second term of the external loads contains the surface forces fS, which can be caused by contact interactions, hydrostatic pressure, etc.
The equation we will actually solve is obtained by calculus of variations applied to this
formula:
δΠ = ∫V δεT σ dV − ∫V δuT fV dV − ∫S δuT fS dS = 0    (2.3)
Here the δu and δε are the variations in the displacement and the corresponding strains.
Finite element approximation
To solve equation 2.3 we need a numerical technique, which in our discussion will be the displacement-based finite element method. This method approximates the volume as an assemblage of discrete elements which are connected at the nodes. The displacements within an element are then expressed as a function of the displacements at the element nodes. For one element (indicated by superscript m) we write:
um(x) = Hm(x)X (2.4)
Here um is the element's displacement vector, with 3 components for the x-, y- and z-displacement, as a function of the position. The nodal displacement vector X is a vector
of length 3N with N the number of nodes in the full mesh. It has the structure:
X = [∆x1 ∆y1 ∆z1 ∆x2 . . . ∆xN ∆yN ∆zN ]T (2.5)
where the subscripts run over all the N nodes in the mesh. As the notation suggests, it
contains the displacement in the 3 directions for each node in the mesh. Hm(x) is the displacement interpolation matrix, of shape 3 × 3N. The Hm matrix is typically a
linear or quadratic function of x and will be used for further analytical manipulation.
Note that although the X vector contains all the nodes of the mesh, the displacements in a single element are a function only of the nodes adjacent to that element. With the
assumption of the displacements from (2.4), we can also evaluate the corresponding
strains in an element as
εm(x) = Bm(x)X (2.6)
Since the strains are defined as (sums of) derivatives of the displacements, the Bm(x) matrix is a combination of derivatives of the elements of the Hm(x) matrix. Finally, the stresses too can be written as a function of the strains, and thus of the displacements, by inserting the constitutive law (2.1)
σm = Cmεm (2.7)
Here the superscript m of Cm indicates that the elasticity matrix is defined per element, so that different materials can be defined on a mesh.
We now have the appropriate notation to develop the final form of the equations. A crucial first step is to split the variation of the elastic potential energy and of the potential of external loads over the elements:
δΠ = Σm δΠm    (2.8)
   = Σm ∫Vm δεmT σm dV − Σm ∫Vm δumT fV dV − Σm ∫Sm,body δumT fS dS    (2.9)
   = 0    (2.10)
The designation body in the last term of (2.9) means that the integration only occurs
over the element sides that are at the volume border.
We elaborate the expression for the internal elastic potential energy:
Σm ∫Vm δεmT σm dV = δXT [ Σm ∫Vm BmT Cm Bm dV ] X    (2.11)
                  = δXT K X    (2.12)

The integral ∫Vm BmT Cm Bm dV is the element stiffness matrix km.
In (2.11) the interpolation matrices from the elements are combined with the constitutive
matrices and integrated over the element volume to give the element stiffness matrices.
Each km matrix has size 3N × 3N and contains nonzero elements only where the nodes of element m are involved. The km matrix determines the elastic response of a single element.
By assembling the element stiffness matrices, the global stiffness matrix K is obtained.
Similarly, the loads from the last 2 terms of (2.9) can be integrated over the elements
and assembled in a global load vector F:
Variation of external loads potential    (2.13)
= δXT [ Σm ( ∫Vm HmT fV dV + ∫Sm,body HmT fS dS ) + FC ]    (2.14)
= δXT [ Σm ( FVm + FSm ) + FC ]    (2.15)

The expression between the square brackets in (2.15) is the global load vector F.
In this expression the integration per element m again involves only the portion of the external loads acting on that specific element. In (2.15) the FC term is the vector specifying the concentrated loads, which are forces prescribed at a single node.
Combining (2.12) and (2.15) and realizing that the δX variation spans the 3N-dimensional
vector space, we obtain the very simple and intuitive form for the static equilibrium of
the elastic material
KX = F (2.16)
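To make (2.16) concrete, the following sketch assembles K for a hypothetical 1D chain of linear bar elements (element stiffness EA/L · [[1, −1], [−1, 1]]), applies a concentrated tip load, fixes one end and solves K X = F; all numbers are illustrative, not parameters from this thesis.

```python
# A toy instance of (2.16): assemble the global stiffness matrix K for a
# 1D chain of linear bar elements, apply a concentrated tip load, fix one
# end and solve K X = F for the nodal displacements.

def gauss_solve(A, b):
    """Solve A x = b by naive Gaussian elimination (fine for tiny systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))   # partial pivoting
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def solve_bar_chain(n_el, E, A, L_total, tip_force):
    n = n_el + 1                       # number of nodes
    k = E * A / (L_total / n_el)       # scalar element stiffness E*A/L
    K = [[0.0] * n for _ in range(n)]
    for m in range(n_el):              # element m connects nodes m and m+1
        K[m][m] += k;     K[m][m + 1] -= k
        K[m + 1][m] -= k; K[m + 1][m + 1] += k
    F = [0.0] * n
    F[-1] = tip_force                  # concentrated load F_C at the free end
    # boundary condition: node 0 is fixed, so drop its row and column
    return [0.0] + gauss_solve([row[1:] for row in K[1:]], F[1:])

X = solve_bar_chain(n_el=4, E=210e9, A=1e-4, L_total=1.0, tip_force=1000.0)
```

For this load case the linear elements reproduce the analytical tip displacement F·L/(E·A) exactly, since the exact solution is itself linear per element.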
Estimation of convergence
The finite element method is a numerical technique for solving complex problems. Therefore it is an essential part of any finite element analysis to estimate the convergence of
the numerical solution. Intuitively it is clear that by dividing the volume into smaller elements, a better numerical approximation can be achieved. Mathematically, convergence
for increasing mesh density is guaranteed by the monotonic convergence theorem under
certain conditions.
In achieving convergent results by decreasing the mesh size, there is an obvious trade-off between accuracy and computation time. In explicit analysis (see section 2.2.1) the computational cost is proportional to the number of elements and roughly inversely proportional to the smallest element dimension. In a 2D setup with square elements, a mesh refinement by a factor 2 in all directions will increase the number of elements by a factor 4. The decrease in smallest element dimension will account for an extra factor 2. The total cost increase will be a factor 2³ = 8. In section 3.4 we analyze the
required accuracy and computation time for the setup used in this thesis.
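The refinement arithmetic above generalizes directly: refining every element edge by a factor r in d dimensions multiplies the element count by r^d and, through the smaller stable time step, the number of increments by r. A one-line sketch (hypothetical helper, for illustration only):

```python
def explicit_cost_factor(refine, dim):
    """Cost multiplier of an explicit analysis when every element edge is
    refined by `refine` in `dim` dimensions: refine**dim more elements
    times `refine` more (smaller) time increments."""
    return refine ** (dim + 1)

factor_2d = explicit_cost_factor(2, 2)   # the factor 8 from the text
factor_3d = explicit_cost_factor(2, 3)   # 3D refinement is steeper
```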
Figure 2.3: Basic FEM elements, organized by dimension, shape and degree of interpolation function.
Based on: http://illustrations.marin.ntnu.no/structures/analysis/FEM/theory/index.html
2.1.2 Element types
The practical implementation of the finite element method described in the previous
subsection requires the definition of elements with their interpolation functions. We will
not go into the details of the construction of these interpolation functions. In practice,
this construction is seldom done manually anymore: the interpolation functions are implemented in finite element analysis tools, and the end user does not have to care about their exact analytical expression or the construction of the K matrices. The elements can be categorized according to a number of criteria:
• Element dimension (1D, 2D or 3D)
• Shape
• The degree of the interpolation functions (typically linear or quadratic)
• Full or reduced integration
The choice between 1-, 2- or 3-dimensional elements is mostly determined by the geometry of the studied system: if the system can be represented by lower-dimensional elements, this is the better choice, as it reduces the computational cost. 1-dimensional
elements are used to model beams or trusses. 2-dimensional elements are used to model
plane structures like shells or plates. Both 1- and 2-dimensional elements have (in addi-
tion to the node displacements) additional degrees of freedom for bending moments and
shear strain.
In practice, elements take a limited number of geometric shapes, which are pictured in
figure 2.3. In 2 dimensions, the elements can be either quadrilaterals or triangles. In 3
dimensions, brick elements, wedge elements or tetrahedral elements are used. Typically,
the advantage of triangular and tetrahedral elements is automatic meshing [28]. For 2D
geometries, fast automatic triangular meshing algorithms have been developed based on
Delaunay triangulation. This is used for example in most CAD tools and other situations
where the designer is not concerned with manual meshing for optimal performance or
where the geometry is too complex and variable to allow manual meshing. On the other hand, if manual meshing is feasible, it is often possible to reach the same accuracy with a smaller number of rectangular or brick elements.
Thirdly, the degree of the interpolation functions is an important consideration in the
choice of the elements. Although the interpolation functions could be any kind of func-
tions, they are typically polynomials. In practice for mechanical engineering applica-
tions, only first-order and second-order polynomials are used, which give rise to linear
and quadratic elements. Figure 2.4 shows examples of 1-dimensional linear and quadratic interpolation functions. From this figure it can be
Figure 2.4: Graphical representation of the 1D linear (top row) and quadratic (bottom row) interpolation functions. The quadratic 1D element has 3 nodes.
seen that the linear elements have nodes only at the end points of the 1D element,
while the quadratic element has an extra node in the center to define all coefficients of
the second-order polynomial. In 2 and 3 dimensions, the so-called serendipity elements
are predominant and often the only elements implemented in commercial codes. These
quadratic serendipity elements have 3 nodes on each edge like in 1D, but are missing
nodes in the center of the face or in the volume. The 2D 8-node quadratic rectangular
element is illustrated in figure 2.3.
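The 1D interpolation functions of figure 2.4 can be written down explicitly on the reference interval ξ ∈ [−1, 1] (nodes at ±1 for the linear element, at −1, 0, +1 for the quadratic one); a sketch of the standard Lagrange polynomials:

```python
# 1D Lagrange interpolation (shape) functions on the reference interval
# xi in [-1, 1], matching figure 2.4. For illustration only.

def shape_linear(xi):
    """Linear shape functions: each is 1 at its own node, 0 at the other."""
    return [(1.0 - xi) / 2.0, (1.0 + xi) / 2.0]

def shape_quadratic(xi):
    """Quadratic shape functions for nodes at -1, 0, +1."""
    return [xi * (xi - 1.0) / 2.0, 1.0 - xi * xi, xi * (xi + 1.0) / 2.0]

def interpolate(shapes, nodal_values, xi):
    """u(xi) = sum_i N_i(xi) * u_i -- the 1D analogue of u = H X."""
    return sum(N * u for N, u in zip(shapes(xi), nodal_values))
```

Both sets form a partition of unity (they sum to 1 everywhere), and the quadratic set reproduces any parabola exactly from its nodal values.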
The choice between quadratic and linear elements is a delicate subject. In most situa-
tions quadratic elements are the first choice, since they will give more accurate results
within the same computation time than linear elements. Furthermore the quadratic
elements have good standard interpolation functions, in contrast to linear elements.
Linear elements display physically incorrect behavior like shear locking and volumetric
locking, meaning that the system's stiffness is strongly overestimated and the displacement results are orders of magnitude too small. The next paragraph describes reduced integration, which can avoid these locking problems but at the cost of introducing zero-energy modes, another type of unphysical behavior: deformations can occur that do not contribute to the energy, so they go unnoticed by the system. Zero-energy modes (also known as spurious modes or hourglassing) occur in both linear and quadratic reduced-integration elements, but do not propagate in quadratic elements, so they are mainly an issue with linear elements [27, p. 226]. Therefore, most finite element programs modify the standard interpolation functions to avoid these problems, but often the modifications are vendor-specific and unpublished [29, p. 19]. This is the reason that results of quadratic element calculations are more consistent across different FEA programs. However, in some specific situations linear elements are more appropriate, provided the software package supports corrections on the linear elements. Example cases are contact, explicit dynamic time-integration (see 2.2.1) and plastic deformation simulations (which are often very advanced). The mesh of linear elements can be finer than
with quadratic elements while maintaining the same simulation time.
Finally, the numerical integration is a more technical aspect. To calculate the integral over an element, the integral is approximated with a limited number of integration points in the element (typically between 1 and 4), and thus a limited number of function evaluations.
Matrices that have to be evaluated by numerical integration are the stiffness matrix
K, the mass matrix M (see 2.2.1) and the force vector R. The choice of these points
and the weight in the approximating sum defines the quadrature: most important for
FEM is the Gauss quadrature, where the integration points and weights are chosen such
that a polynomial of the highest possible order is integrated exactly. For example, in
the case of the two-dimensional 8-node serendipity element, the integrand for the K matrix contains polynomials of at most order 4. With a Gaussian
quadrature, this means that 3×3 integration points (thus 9 function evaluations) suffice
to have an exact evaluation of all matrices. This is called full integration: enough
integration points are chosen to evaluate the element integrals exactly. Although this
might seem the only obvious option, surprisingly enough it is more common in practice
to use reduced integration. This means that fewer integration points are chosen and the
integrals to calculate the matrices are not evaluated exactly. There are two reasons to
take this approach. The first reason is to reduce computation time by reducing the
number of function evaluations. The second reason is that reduced (inexact) integration is empirically found to give more accurate solutions.
Although this is surprising at first, it is explained by the fact that the discrete element
approximation introduces a systematic overestimation of the stiffness of the system. The
reduced integration introduces an error that systematically opposes this effect, and thus
renders a solution closer to the converged solution. This is especially the case with linear
elements, which display phenomena such as volumetric locking and shear locking, as described in the previous paragraph.
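The effect of full versus reduced integration can already be seen in 1D with hard-coded Gauss-Legendre rules (a generic sketch, unrelated to any specific FEA code): n points integrate polynomials up to degree 2n − 1 exactly, so 3 points handle a quartic exactly while 2 points do not.

```python
import math

# Gauss-Legendre quadrature on [-1, 1] with hard-coded 2- and 3-point
# rules. Using fewer points than needed for exactness is the "reduced
# integration" discussed in the text.

GAUSS_RULES = {
    2: ([-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)], [1.0, 1.0]),
    3: ([-math.sqrt(0.6), 0.0, math.sqrt(0.6)], [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]),
}

def gauss_integrate(f, n_points):
    xs, ws = GAUSS_RULES[n_points]
    return sum(w * f(x) for x, w in zip(xs, ws))

quartic = lambda x: x ** 4                # exact integral over [-1, 1]: 2/5
full = gauss_integrate(quartic, 3)        # "full": exact up to degree 5
reduced = gauss_integrate(quartic, 2)     # "reduced": exact only up to degree 3
```

The 3-point result equals 2/5 to machine precision, while the 2-point result deliberately underestimates the integral, analogous to the softened stiffness of reduced-integration elements.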
2.1.3 Nonlinearity
In our introductory subsection 2.1.1 we assumed linearity of the solution, meaning that
the solution X of the equation KX = F is a linear function of the applied load vector F,
i.e. if a load of αF was applied, the response would be αX. This linearity rests on the
assumption that deformations are small, because in that case the integrations can be
performed over the original volumes and thus the K matrix does not change. Another
assumption for linearity to hold, is that the material is linear elastic. This assumption
is essential for the strain-displacement matrix Bm and constitutive matrix Cm in (2.11)
to be independent of the displacements X. A third assumption is that the boundary
conditions are independent of the load. The typical relevant illustration of boundary
conditions changing as a function of the load is contact: a situation where two volumes make contact when they are deformed gives rise to a strongly nonlinear boundary condition.
When one of these assumptions is not met, nonlinearity has to be taken into account.
We can immediately categorize the nonlinearity into three corresponding classes:
Material nonlinearity
The stress-strain relation is nonlinear. Examples are the (irreversible) plastic de-
formation of an elastic-plastic material and the reversible elastic response of a
rubber-type material.
Geometric nonlinearity
Large displacements occur and the K matrix has to be iteratively constructed by
integrating over increasingly deformed elements.
Contact nonlinearity
Abrupt transition between no boundary condition (before contact) and strong surface forces (with contact).
In nonlinear analysis it is generally not possible to find the solution by solving one set of equations, as with the linear problem (2.16). Instead, one takes an incremental
approach by applying the load gradually. In a static problem, an artificial time-like
increment parameter t is introduced which typically varies from 0 to 1. The time pa-
rameter controls the intensity with which the external load is applied: the intensity
increases linearly from 0 to the maximal load magnitude. On each of the time points
the equations are solved until the system is in static equilibrium with the instantaneous
partial load. Although in static nonlinear analysis the time parameter should not be
attributed any physical meaning, the solution technique is exactly the same as for the
time-dependent direct integration method of dynamic analysis. The solution per incre-
ment (on each point in time) is found either iteratively in case of implicit analysis or
directly in the case of explicit analysis. The dynamic formulation is introduced in the
next section, and the implicit and explicit methods are discussed in subsection 2.2.2.
2.2 Time-dependent FEA of elastic wave problems
Two popular numerical techniques are used to solve elastic wave problems: the Boundary Element and the Finite Element method. The main advantage of the FE method is that
there are numerous general-purpose commercial FE codes available, which eliminates
the need to develop application-specific custom code. These FE programs or related
programs provide the possibility to perform advanced pre- and post-processing. Extensive validation of the use of the finite element method for this kind of problem has been done [30]. Therefore we settled on the finite element method as the numerical method in this thesis.
In section 2.1 a general introduction to the finite element analysis of static problems was given. We gave an overview of the mathematical derivation (2.1.1), an introduction to the choice of elements (2.1.2) and to nonlinearity (2.1.3). Now we extend our formulation
to include the description of time-dependent problems, with the goal of modeling elastic
waves in solids. In essence the formulation is the same as in the previous section 2.1,
but with an inertia and damping term included with the volume forces. For a system
to behave physically and to possess the property of fading memory as described in the
introduction, the addition of damping to the system is crucial.
2.2.1 Dynamic equations
Equation (2.9) is a statement of static equilibrium of the system as it is approximated by
the element assemblage. If the forces fV and fS vary with time, the node displacements will also be a function of time, and equation (2.9) will describe the equilibrium at any specific point in time. If the variations are rapid compared to the eigenfrequencies of the system, inertia needs to be considered. According to d'Alembert's principle, the inertia forces of the element can simply be included in the volume forces, which are in the second term of (2.9). Consistent with the notation of (2.15) we call this part of the load vector the volume load vector FV:
FV = Σm ∫Vm HmT [ fV − ρm Hm Ẍ ] dV    (2.17)
We can separate the newly introduced term and define the mass matrix of the structure as
M = Σm ∫Vm ρm HmT Hm dV    (2.18)

where the contribution of element m is the element mass matrix Mm.
Finally, the simple form of the time-dependent equation is
M Ẍ(t) + K X(t) = F(t)    (2.19)
In this form the resemblance to the harmonic oscillator equations is obvious and provides
a basic intuition into the meaning of the different terms.
Any real physical dynamical system contains damping, which causes energy to dissipate
during vibration. By far the most frequent way to add damping to any physical model
is by introducing a viscous damping term, which is linearly proportional to the velocity.
Formally this term can be introduced in the same way as the inertia term in (2.17):
FV = Σm ∫Vm HmT [ fV − ρm Hm Ẍ − κm Hm Ẋ ] dV    (2.20)
from which we separate the damping term
Σm [ ∫Vm κm HmT Hm dV ] Ẋ = C Ẋ    (2.21)

where the integral over element m is the element damping matrix Cm.
This yields the full dynamical equation

M Ẍ(t) + C Ẋ(t) + K X(t) = F(t)    (2.22)
In practice, however, it is infeasible to use (2.21) to construct the damping matrix C, since it is almost never possible to obtain the element damping parameters κm, which are often frequency dependent. The most common approach to construct C is to approximate it
with a linear combination of the mass and stiffness matrices
C = c1M + c2K (2.23)
This damping model is called Rayleigh damping. It is important to realize that Rayleigh damping is not an exact damping model, but rather an approximation that gives reasonable results. The Rayleigh damping parameters c1 and c2 are to be determined experimentally, for which a number of methods have been developed [31, 32]. It is also common
to base the damping parameters on the known parameters of a similar structure when
experimental estimation of the parameters is not an option. The two components of the
Rayleigh damping matrix dominate in different regimes: for low-frequency vibrations the mass matrix M is dominant, while high-frequency vibrations are damped proportionally to the stiffness matrix K. The Rayleigh damping method is not implemented
in CalculiX 2.5, the FEA program we used (see subsection 2.3.2). Our ad-hoc solution
for the lack of material damping is described in section 3.3.
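One common recipe for choosing the Rayleigh parameters (not the procedure of this thesis, since CalculiX 2.5 lacks Rayleigh damping) fixes c1 and c2 by prescribing a modal damping ratio ζ at two angular frequencies, using ζ(ω) = c1/(2ω) + c2·ω/2; the frequencies and ratios below are made-up example values.

```python
# Rayleigh damping C = c1*M + c2*K: a mode with angular frequency w gets
# the modal damping ratio zeta(w) = c1/(2*w) + c2*w/2. Prescribing
# (w1, zeta1) and (w2, zeta2) fixes c1 and c2 via a 2x2 solve.

def rayleigh_coefficients(w1, zeta1, w2, zeta2):
    c2 = 2.0 * (zeta2 * w2 - zeta1 * w1) / (w2 ** 2 - w1 ** 2)
    c1 = 2.0 * zeta1 * w1 - c2 * w1 ** 2
    return c1, c2

def modal_damping(c1, c2, w):
    return c1 / (2.0 * w) + c2 * w / 2.0

c1, c2 = rayleigh_coefficients(w1=10.0, zeta1=0.02, w2=100.0, zeta2=0.02)
```

Between the two anchor frequencies the resulting ζ(ω) dips slightly below the prescribed value; below and above them it grows, reflecting the mass-dominated low end and stiffness-dominated high end mentioned above.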
2.2.2 Direct integration of the dynamic equations
Equation (2.22) can be solved in two different ways: direct integration or modal su-
perposition. We followed the direct integration approach, but in subsection 2.2.3 we
also provide a short overview of the calculation of eigenmodes and modal superposition.
Modal superposition is only applicable to linear dynamics, where it gives the same result as direct integration but is computationally more efficient. Direct integration, on the other hand, is much more generally applicable: it handles short transient dynamics and nonlinear dynamical systems, and it can deal with plastic (permanent) deformation. The downside is that the computational cost of this kind of simulation is orders of magnitude larger than that of modal superposition calculations.
As mentioned in subsection 2.1.3, to solve the static equation (2.16) in the nonlinear
case or the dynamic equation (2.22) in both the linear and nonlinear case, the same
techniques are used. The difference is that in the static case no inertia or damping
effects are included and the time parameter has no physical meaning. In both cases, the
solution is constructed incrementally with an increment size dt. Assume the solution
X(t) is known at the discrete time t. Then the new load F(t + dt) is applied and the
solution X(t + dt) is required. There are two main integration methods to obtain the solution at time t + dt from the solution at t: the implicit and the explicit method.
The implicit method uses the Newton-Raphson method and is thus an iterative method.
One iteration is an attempt at finding an equilibrium solution for one specific point
in time, thus for one particular increment. With each iteration, a solution closer to
equilibrium is found. The iteration is stopped when a convergence criterion is reached.
For each iteration in a nonlinear or dynamic analysis the model’s stiffness matrix is
integrated and assembled and the equilibrium equation (2.22) is solved, which has a computational cost equivalent to a full linear analysis. It should be clear then that the cost of a nonlinear analysis is many times greater than the cost of a linear analysis.
Implicit methods are unconditionally stable, independent of the step size (or increment size) dt. In practice, however, choosing the step size too large will result in a very large number of iterations per increment. In that case it is better to reduce the step size.
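For a single degree of freedom, the incremental-iterative idea reduces to the sketch below: a hypothetical nonlinear spring with internal force I(x) = k0·x + k1·x³ is loaded in increments of the artificial time parameter, and each increment is equilibrated by Newton-Raphson iterations on the tangent stiffness. All parameter values are illustrative.

```python
# Incremental-iterative (implicit) solution for one nonlinear DOF with
# internal force I(x) = k0*x + k1*x**3. Each load increment is solved by
# Newton-Raphson on the tangent stiffness Kt = dI/dx.

def solve_incremental(k0, k1, F_max, n_increments=10, tol=1e-10):
    x = 0.0
    for step in range(1, n_increments + 1):
        F = F_max * step / n_increments       # load at "time" t = step/n
        for _ in range(50):                   # Newton-Raphson iterations
            residual = F - (k0 * x + k1 * x ** 3)
            if abs(residual) < tol:           # convergence criterion
                break
            Kt = k0 + 3.0 * k1 * x ** 2       # tangent stiffness
            x += residual / Kt
    return x

x = solve_incremental(k0=100.0, k1=5000.0, F_max=50.0)
```

The converged x satisfies the full equilibrium I(x) = F_max; note that every iteration costs one tangent evaluation and one solve, mirroring the per-iteration cost of a full linear analysis described above.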
The explicit method is derived from the central difference rule. The term explicit refers
to the fact that the solution at the end of the increment (at time t+dt) is based only on
the displacements, velocities, and accelerations at the beginning of the increment (time
t). Therefore no iterations are needed and the cost per increment is much lower in the
explicit method. In the explicit method, the mass matrix is lumped, or diagonalized. This makes inversion trivial and thus very efficient. On the other hand, the explicit method is only conditionally stable: the step size dt has to be smaller than the smallest element dimension divided by the speed of a dilatational wave. This requires the explicit integration step size to be many times smaller than for implicit integration.
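The stability bound can be estimated directly: with the standard isotropic dilatational wave speed c_d = sqrt(E(1 − ν)/(ρ(1 + ν)(1 − 2ν))), the critical step is roughly the smallest element size divided by c_d. A sketch with steel-like example values, not the slab parameters of this thesis:

```python
import math

# Rough estimate of the explicit stability limit: dt must stay below the
# time a dilatational wave needs to cross the smallest element.

def dilatational_wave_speed(E, nu, rho):
    return math.sqrt(E * (1.0 - nu) / (rho * (1.0 + nu) * (1.0 - 2.0 * nu)))

def stable_time_step(min_element_size, E, nu, rho):
    return min_element_size / dilatational_wave_speed(E, nu, rho)

dt_max = stable_time_step(min_element_size=1e-3, E=210e9, nu=0.3, rho=7800.0)
```

For a 1 mm element in steel this gives a step on the order of 10⁻⁷ s, which illustrates why explicit steps are so much smaller than typical implicit ones.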
We will summarize the procedure of explicit integration, which is discussed a little more
in-depth in [33, sec 9.2]. The explicit scheme is based on the central difference rule,
where the velocities are discretized at time points halfway between the time points where the displacements and accelerations are calculated. Starting from time t and the
(known) velocities at time t − dt/2, we want to calculate the quantities at the next increment. From the dynamic equilibrium (2.22) we know the acceleration Ẍ at time t:
Ẍt = M⁻¹ (F − I)t    (2.24)
where I is the calculated internal force vector at time t, taking the elastic, inertia and
damping terms into account. This acceleration is used to integrate explicitly through
time, first the velocities at the half-increment, then the displacements:
Ẋt+dt/2 = Ẋt−dt/2 + dt Ẍt    (2.25)
Xt+dt = Xt + dt Ẋt+dt/2    (2.26)
Now the displacements and velocities are known, and we are ready to evaluate the next
increment. What is left to do is to compute the mass and stiffness matrix, calculate the
strains and stresses and assemble the internal forces vector I.
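For a single undamped degree of freedom (m·x″ + k·x = 0), the scheme (2.24)-(2.26) reduces to the sketch below; the half-step start-up for the velocity is one common choice, not necessarily the one CalculiX uses.

```python
import math

# Central-difference scheme (2.24)-(2.26) for one undamped DOF,
# m*x'' + k*x = 0, started from x(0) = x0 at rest. Velocities live on
# the half steps, displacements and accelerations on the full steps.

def explicit_integrate(m, k, x0, dt, n_steps):
    x = x0
    a = -k * x / m                    # (2.24): a = M^-1 (F - I), with F = 0
    v_half = -0.5 * dt * a            # start-up: shift v(0) = 0 back to t = -dt/2
    for _ in range(n_steps):
        a = -k * x / m                # (2.24)
        v_half += dt * a              # (2.25): velocity at t + dt/2
        x += dt * v_half              # (2.26): displacement at t + dt
    return x

# integrate over exactly one period T = 1 of a unit-mass oscillator
x_end = explicit_integrate(m=1.0, k=(2.0 * math.pi) ** 2, x0=1.0,
                           dt=1e-3, n_steps=1000)
```

After one full period the mass returns very close to its starting displacement, confirming that the scheme is stable and accurate for ω·dt well below the stability limit.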
Finally, the choice of step size (or increment size) dt is an important topic. The step size is (softly) bounded from below by computational considerations: choosing dt too small will result in unnecessarily many integration steps and thus overly long computation times. This is the case for both implicit and explicit methods. On the other hand, the step size has a hard upper bound in the explicit scheme, since the calculation will become unstable when dt is chosen too large. This is not the case with the implicit method, since it is unconditionally stable, but with dt too large the number of iterations will become extremely large and the computation time will rise.
Therefore most direct integration implementations come with an automatic incrementation adaptation algorithm: dt is changed on every increment to stay as close to the stability boundary as possible. This attempts to be optimally efficient: as few integration steps as possible are taken while the algorithm does not exceed the stability bound. In the description of CalculiX (2.3.2) we discuss why we did not work with automatic incrementation and thus why dt was fixed in our calculations.
A more detailed discussion of the technical implementation of direct integration can
be found for example in [26, p. 485]. For a less technical and more practically oriented introduction, the Abaqus manual [33, sec 8 and 9] is excellent.
2.2.3 Eigenmodes and modal analysis
The eigenmode analysis starts from
MẌ + KX = 0   (2.27)
This is the standard FEM dynamic equation (2.22) without external force and without damping. To find the eigenmodes and eigenfrequencies, we make the ansatz that the solution behaves periodically:
X(x, y, t) = φ(x, y) e^(iωt)   (2.28)
Here the dependences on position in the mesh and time are indicated explicitly. The
φ(x, y) is the eigenmode, which should be thought of as an instantaneous picture of the
displacements. Notice that it has no time dependence.
Equation (2.28) gives rise to the generalized eigenvalue equation
Kφ = ω²Mφ   (2.29)
The eigenfrequency ω takes an infinite number of values in a typical problem. We will
call the eigenfrequencies ω1, ω2, ω3 etc. Each eigenfrequency ω has a corresponding
displacement vector φ, the eigenmode. The eigenfrequency spectrum has a lower bound:
the fundamental eigenfrequency ω1 with its corresponding fundamental mode φ1. The eigenspectrum of a system is the reference for deciding whether a time-dependent phenomenon is fast or slow for that system. A signal varying at a frequency much lower than the fundamental frequency is slow for the system: the vibrations will not propagate through the system, and the deformation will be quasi-static.
On the other hand, vibrations with a frequency higher than the fundamental frequency
will be able to propagate as waves through the system. When the system is linear
(small displacements, linear material, no contact) the deformations in the structure can
be calculated from a combination of the mode shapes of the structure. In that case the
vector of displacements can be written as
X(x, y, t) = Σ_{i=1}^∞ γi(t) φi(x, y)   (2.30)
For linear dynamical problems this expansion can be much more efficient than a full
direct integration. The reason is that the response of a structure is typically dominated
by a relatively small number of modes. For example a realistic case would be a model
containing 10,000 degrees of freedom but where the dynamic linear response is governed
by the first 100 eigenmodes. Then direct integration requires solving 10,000 coupled
equations on each increment, while modal analysis requires solving 100 uncoupled equations. Modal analysis is however limited to linear problems and was therefore not used
in our work.
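As a small illustration of (2.29), consider a toy two-degree-of-freedom system: two equal masses between fixed walls, coupled by three identical springs (all names and values below are illustrative, not taken from our setup). The generalized eigenproblem is then small enough to solve by hand:

```python
# Toy generalized eigenvalue problem K phi = omega^2 M phi: two equal
# masses between fixed walls coupled by three identical springs, so
# K = [[2k, -k], [-k, 2k]] and M = m * identity (illustrative values).

import math

def eigenmodes_2dof(k=1.0, m=1.0):
    # det(K - w2 M) = (2k - w2 m)^2 - k^2 = 0  =>  w2 = k/m or 3k/m
    w1 = math.sqrt(k / m)       # fundamental mode: masses move in phase
    w2 = math.sqrt(3 * k / m)   # second mode: masses move in antiphase
    return (w1, [1.0, 1.0]), (w2, [1.0, -1.0])
```

For the full FEM system the same problem is solved numerically for the lowest modes, and the linear response (2.30) is a sum of such mode shapes weighted by time-dependent coefficients γi(t).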
2.3 Tools
In section 2.1 we provided a general introduction to the finite element method applied
to solid mechanics problems; section 2.2 extended the formulation to describe time-
dependent phenomena. This section will discuss the tools that we used in this work:
CalculiX as the finite element solver, and pyFormex for the preprocessing. Both tools
are open source and published under a GNU General Public License.
2.3.1 pyFormex
Figure 2.5: Example of an advanced wireframe helix structure generated with pyFormex with a simple script. Source: http://www.nongnu.org/pyformex/doc/tutorial.html
We used pyFormex for the preprocessing part. Preprocessing for FEA means defining the
system geometry, subdividing the geometry in elements, applying restraining boundary
conditions and loads and assigning material properties.
pyFormex is a preprocessing tool, written in python, to generate large meshes by means
of script-based mathematical transformations of sets of coordinates. This approach is
opposed to a graphical mesh generation through a Graphical User Interface, which seems
to be the standard with commercial CAD and FEM packages. Two main advantages of
a script-based approach are the natural full control over large meshes and the flexibility
to automatically rebuild different versions of a model. pyFormex was developed for, and is especially suited to, the automated design of spatial frame structures, which are modeled with 1D elements. Nevertheless it is capable of generating many other types of geometries, including 2D and 3D (solid) geometries. pyFormex is written and maintained by
Benedict Verhegghe, professor in structural engineering at Ghent University.
pyFormex is written in python and uses a set of python modules as low-level components
to generate geometries. The Formex class is the most important geometrical object in
pyFormex, intended to actually develop the mesh geometry. Each Formex contains a
set of elements, defined by their nodes and the connections. The Formex class supports
transformations like copying, translation, rotation, skewing, bending, etc. When the
geometry is developed using the Formex class, it can be converted to a Mesh. This conversion will merge the coincident points and save the elements as connections between those points. This format is readily exported to the finite element mesh by passing it through the fe_abq module, while applying boundary conditions and material properties. The resulting .inp file is then passed on to Abaqus or CalculiX (see the next
section 2.3.2).
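The idea behind the Formex-to-Mesh conversion, merging coincident points into a shared node table plus connectivity, can be sketched in plain python (an illustrative re-implementation; this is not the actual pyFormex code):

```python
# Sketch of merging coincident points into a node table plus element
# connectivity, as the Formex -> Mesh conversion does (illustrative
# re-implementation, not the actual pyFormex code).

def merge_coincident(elements, tol=1e-8):
    """elements: list of elements, each a list of (x, y, z) tuples.
    Returns (nodes, connectivity) with shared points stored once."""
    nodes, index, conn = [], {}, []
    for elem in elements:
        elem_conn = []
        for point in elem:
            # snap coordinates to a tolerance grid so nearly-equal
            # points get the same key
            key = tuple(round(c / tol) for c in point)
            if key not in index:
                index[key] = len(nodes)
                nodes.append(point)
            elem_conn.append(index[key])
        conn.append(elem_conn)
    return nodes, conn
```

Two line elements sharing an endpoint, for example, yield three nodes rather than four, with the shared node referenced by both elements.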
Although pyFormex is designed to be run standalone (thus as the master program) in
either graphical mode or script mode, we took a different approach by importing the
necessary pyFormex modules in a standalone python main script. The advantage of this approach is that it allows the full workflow of preprocessing, CalculiX calculations and post-processing to be managed by one python script, as described in section 3.2.
2.3.2 CalculiX
We used the open source package CalculiX for the calculations. CalculiX is a finite
element program for structural analysis written in a combination of Fortran and C. It
comes with the solver ccx as core product developed by Guido Dhondt, and a pre-
and post-processing tool cgx for visualization, developed by Klaus Wittig. CalculiX
uses the same input format as Abaqus, one of the most popular commercial packages for structural finite element analysis. It aims to provide important parts of the same functionality at a speed comparable to Abaqus. Therefore we will write this section
as a comparison between CalculiX and Abaqus. However, it should be clear from the
beginning that CalculiX is on a different level than Abaqus in terms of size, maturity
and popularity; CalculiX is largely a one-man project while Abaqus was already worth
$400 million in 2005 [34].
CalculiX is given as input an .inp file, which contains all node positions, elements, ma-
terial properties, boundary conditions, loads and the definition of what should actually
be calculated (static analysis, direct integration, eigenmode analysis, etc.). The file is
structured by keyword cards starting with an asterisk, for example this is the definition
of direct explicit integration with the iterative scaling solver:
*DYNAMIC, ALPHA=-0.333, EXPLICIT, DIRECT, SOLVER=iterative scaling
A CalculiX calculation gives three files as output: the .frd, .dat and .sta files. The
.frd file is intended for the CalculiX post-processor cgx. The .dat file is in human-
readable ASCII format and is meant for external post-processing. The .sta file contains a list of increments and is mainly useful for tracking the status of a calculation while it is running.
The features that work as expected in CalculiX 2.5 are
• Implicit and explicit direct integration solver
• Geometric nonlinearity
• Boundary conditions, material definition, surface and volume loading
The main limitations we encountered with CalculiX for the work in this thesis are listed
below.
No damping in direct integration analysis
The lack of a damping implementation in direct integration mode in CalculiX version 2.5 is the biggest limitation we encountered. Rayleigh damping is implemented only for modal analysis, not for direct integration. This forced us to write our own implementation, as described in section 3.3.
Documentation
Although CalculiX provides a decent reference manual, it is very limited compared
to Abaqus’ manual. To give an idea, the CalculiX manual has 522 pages, while
the multi-pdf version of the Abaqus documentation contains roughly 19,000 pages.
An elaborate manual is indispensable for a computational technique as complex
as this, where experience, examples and practical advice are extremely valuable.
On the positive side, CalculiX has an active mailing list maintained by CalculiX' main developer Guido Dhondt, who actively replies to users' questions.
Linear element support
There is no decent support for linear elements with reduced integration in explicit calculations, while Abaqus/Explicit only allows this kind of element. As mentioned in subsection 2.1.2, linear elements with reduced integration exhibit problematic behavior such as spurious zero-energy modes, for which they should be corrected. CalculiX does not provide such corrections and instead discourages the use of linear elements.
Contact
In subsection 3.1.4 we discuss our attempts on contact simulations in CalculiX,
which were eventually abandoned. The lack of linear element support is a major
drawback for node-to-surface contact calculations, since the equivalent nodal forces for quadratic elements under pressure loading vary over the nodes [33, sec 12.4.5].
Furthermore, contact is not well-tested for 2D shell elements, although these could
be more computationally efficient.
Automatic increment control also known as “The NaN bug”
In 2.2.2 we discussed how the choice of dt affects computational cost, and the solution most codes (including CalculiX) provide: automatic increment control. This means we allow CalculiX to automatically change the step size dt on every step. At the same time, we also requested the output to be written at a fixed number of equally
spaced timepoints. The NaN bug was caused by the situation where the variable
step size caused the simulation to end up at a time a very small distance dt∗ from
the output timepoint. The next step size is then adjusted to this tiny distance dt∗, causing numerical instability and resulting in an output consisting of only NaN's. The situation is sketched in figure 2.6. The results of
two important experiments have been affected by the NaN bug: the experiments
to use a Mooney-Rivlin nonlinear material and the experiments to use 2D S8 shell
elements. In later calculations, we avoided this error by using a fixed increment
size, which gives the possibility to easily write out the system’s state on a number
of equally spaced timepoints.
Output parsing
Abaqus comes with a python interface for automatic output parsing; CalculiX does not, so we implemented our own python parser module for the .dat files.
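A minimal sketch of such a parser is given below. The block layout assumed here, a header line mentioning "displacements" and ending with the time value, followed by rows of a node number and three displacement components, is an assumption based on our own files, not a documented specification; check it against your .dat output:

```python
# Sketch of a parser for CalculiX .dat displacement output. The block
# layout assumed here (header line containing "displacements" and
# ending with the time value, then rows of "node ux uy uz") is an
# assumption based on our files, not a documented format.

def parse_dat(text):
    """Return {time: {node: (ux, uy, uz)}} from .dat-style text."""
    results, current = {}, None
    for line in text.splitlines():
        if "displacements" in line and "time" in line:
            current = float(line.split()[-1])
            results[current] = {}
        elif current is not None and line.split():
            parts = line.split()
            try:
                node = int(parts[0])
            except ValueError:
                current = None          # we left the numeric block
                continue
            results[current][node] = tuple(float(p) for p in parts[1:4])
    return results
```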
The ideal working case for our setup (introduced in 3.1.1) in Abaqus would be a fine
mesh with reduced integration linear 2D elements with hourglass control, using the
explicit solver, taking geometric nonlinearity into account and eventually adding material
Figure 2.6: A sketch of the situation in which the NaN bug occurs. Because of the output request, the increment dt∗ is forced to be too small and causes some numerical error.
nonlinearity. With CalculiX we were forced to use quadratic 3D elements instead of linear
2D elements, increasing the number of nodes per element from 4 to 20. We included
geometric nonlinearity but the results became unstable when the loading force increased
too much (see 4.3).
In conclusion, CalculiX is a very decent tool for a number of applications in structural
mechanics, including nonlinear static analysis and modal dynamics. However, for the kind of high-frequency explicit calculations and contact calculations we aimed to perform, its shortcomings are severely limiting.
Chapter 3
Simulation details
3.1 Simulation setup
In this thesis we developed two different setups, of which only one was used for further
experiments. The first setup was an attempt to simulate a situation similar to the
tactile sensor described in subsection 1, with two separate pieces of material in dynamic
contact. The second setup is a much simpler physical system: it consists of only a rectangular slab of material, with two opposite sides fixed.
In 3.1.1 we describe the slab and in 3.1.3 we discuss the importance of the loading profile.
We discuss the use of units and material properties in 3.1.5. Finally, in 3.1.4 we give an
overview of problems occurring in the contact simulations and the computational cost
involved with contact simulations.
3.1.1 Slab
Figure 3.1 shows a sketch of the slab. The setup is simply a 2D rectangular piece of
elastic “rubber”, with the left and right sides clamped. The clamped sides are implemented as
rigid boundary conditions. At the place of the arrow, the load is applied with a quadratic
loading profile (see 3.1.3). The load is applied in the plane, all motion is restricted to the
2D plane. We locked the z-direction to avoid unwanted movement caused by numerical
errors. Since the load can be positive and negative, the force on the loading area can
act alternately as pushing and pulling. This will cause dilatation waves to propagate
through the material.
Figure 3.1: A sketch of the slab setup. The arrow with F(t) = Fmax u(t) indicates the center of the loading area, here not in the center. The top side of the slab, without boundary condition, is used as readout side. We will further refer to this as the top side or the readout side.
In our future experiments, the load at the bottom F is a function of t and is determined by the input signal u(t). The top side will serve as readout side. The input-output behavior is described in 3.1.2. We indicate the y-displacement vector of the selected nodes at the top of the slab as Xy,top. These displacements form a discrete set of response functions, which can be written as ∆ytop(x, t) since they are functions of the x-position on the readout side and of time. With a slight flexibility in
notation we will further indicate the output response functions with y(x, t), dropping
the indication ∆ and top. We only consider the displacements in the y-direction since
the z-direction is locked and the displacements in the x-direction will be relatively small.
We implemented the material with C3D20R elements. This element is a popular general-
purpose quadratic 3D element with 20 nodes and reduced integration (an introduc-
tion can be found in 2.1.2). The rectangular slab of material will thus be meshed
with brick elements with the same size in the x and y direction, which we will call
sx and sy. The element size in the z-direction will be called sz. Our final body
of experiments was performed with a mesh containing 50 × 50 × 1 elements of size
(sx, sy, sz) = (30mm, 30mm, 10mm). This thus modeled a piece of rubber of size
1.5m × 1.5m × 10mm. The material parameters and units are discussed in more de-
tail in 3.1.5. We discuss the relation between mesh size and convergence in more depth in 3.4.
A possibly more efficient alternative would be to use 2D elements, for example the
quadratic S8 shell element, since it has fewer nodes and thus fewer superfluous degrees of freedom. However, restricted to the capabilities of CalculiX, the experiments with 2D
elements proved to be problematic. Although the simulation did not exit with an error
message, after a short period of time all output numbers turned to NaN’s. In retrospect
we concluded this was caused by the NaN bug (page 24), but we assumed a problem with
the S8 element and decided to continue using the C3D20R elements.
In table 3.1 we give an overview of the standard simulation parameters, which were used for all simulations unless explicitly stated otherwise.
Table 3.1: Overview of the standard simulation parameters
Parameter Value Note
sx, sy 30 mm
sz 20 mm
Slab side L 1500 mm
Loading area width 2w 300 mm
Fmax 100 N Spread over loading area
Simulation step size dt 1× 10−5 s
Output step size dt 5× 10−5 s
Damping time τd 0.2 s See 3.3
# elements per side 50
# elements in mesh 2500
# nodes in mesh 18 003
3.1.2 Input and output
When speaking of input and output signals of the slab system, we use the term input u(t) for the time-dependent force that is applied to the loading area. In practice, we typically take u(t) normalized to 1: max |u(t)| = 1. The actual magnitude of the force is dictated by the scaling parameter Fmax, which typically takes the value 1×10⁸ µN = 100 N. We write
F(t) = F(x)Fmax u(t) (3.1)
Here F(t) is the load vector containing the nodal forces as described in 2.2.1. F(x) is
the vector that determines which nodes receive the loading in the y-direction, and can be thought of as a normalized vector that distributes the force appropriately over the nodes.
It is determined by the loading profile, which is a technical issue discussed in 3.1.3.
Figure 3.2: Example of input and output of the slab. (a) Input signal u(t); (b) input spectrum (amplitude in dB versus frequency in Hz); (c) timetrace y(x0 = 0, t) of the top central node (mm); (d) spectrum of the top central node; (e) profile y(x, t = 1 s) (mm); (f) RMS profile yRMS(x) (mm) for t integrated over [1, 1.01] s.
Figure 3.2a shows a low-passed white noise input function u(t) with a cut-off frequency of 400 Hz. The amplitude of the Fourier spectrum (figure 3.2b) shows the spectral content
of the signal.
We introduced the response function y(x, t) as the time- and position dependent y-
displacements on the top side of the slab. We will commonly use two visualizations of
y(x, t): the timetrace and the profile. The first is the timetrace or node response where
x is held constant and y(x0, t) is plotted in function of time. Also the Fourier spectrum
of this timetrace will be an interesting quantity. The nodal response for x0 = 0 mm is
visualized for a short period of time in figure 3.2c and the spectral amplitude in 3.2d
(here the signal was rescaled).
A word of caution about the interpretation of these timetraces is in order. In a real situation, a displacement sensor would have a certain finite extent and would not map directly to one node. In the slab system it could therefore be modeled by taking a linear combination of the displacements of a number of adjacent nodes, with e.g. Gaussian weights. This would however reduce the resolution with which we can distinguish the responses at different positions, since it would smooth the different nodal response functions. Therefore we do not apply this smoothing and think of our sensors as extremely localized, at exactly the locations of the mesh nodes. This should be kept in mind when appropriate.
The second visualization of y(x, t) is the profile of the top or bottom side. Here the time
is held constant and y(x, t0) is plotted as a function of the x-position on the side of the
slab. An example is plotted in figure 3.2e for t0 = 1 s.
Finally, we introduce the root mean square (RMS) profile yRMS(x) as an important
measure for readout.
yRMS(x) = ( (1/∆t) ∫_{t0}^{t0+∆t} y(x, t)² dt )^{1/2}   (3.2)
The integration typically runs over 10 ms or 20 ms, to capture at least one fluctuation
of a slow 100 Hz wave. yRMS(x) is always positive and can be seen as the instantaneous
amplitude of vibration of each node. An example is plotted in figure 3.2f, where the
integral bounds are t0=1 s and t1=1.01 s.
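In the simulations y(x, t) is only available at discrete sample times, so (3.2) becomes a sum over a window of samples. A minimal sketch (illustrative, not our actual analysis code):

```python
# Discrete version of the RMS profile (3.2): root mean square of each
# node's timetrace over a window of samples (illustrative sketch).

import math

def rms_profile(traces, t0_idx, n_samples):
    """traces: list of per-node sample lists.
    Returns the RMS value per node over the chosen window."""
    out = []
    for trace in traces:
        window = trace[t0_idx:t0_idx + n_samples]
        out.append(math.sqrt(sum(s * s for s in window) / len(window)))
    return out
```

For a sinusoidal timetrace of amplitude A averaged over full periods this gives A/√2, the familiar RMS amplitude.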
3.1.3 Loading profile
In the slab setup we want the load F (t) to work on a finite part of the bottom open
side of the slab. This means that the force will only be applied to a limited number
of elements, centered on the nodes with position (xc, 0) when seen in the xy-plane, as
in Figure 3.3. In order to have a finite extent of the loading, the simplest option to
implement is to select a number of elements to apply full loading to, and apply no
loading to the other elements (this is called block loading in figure 3.3). However, this
approach is not robust to changes in the mesh size. Due to the quadratic interpolation
functions of the elements, the desired block loading is not exact but gets smeared out
to a continuous loading profile. The length of the undesired tails is the size of one
element: the loading is weaker on the outer half element of the outer elements of the
block, and still nonzero in the adjacent half of the element just outside of the block
Figure 3.3: Block and quadratic loading profile on the bottom side of the slab, centered on the indicated node.
loading. Therefore the actual loading profile in the model will change when the mesh is
refined: when the size of the element is reduced, the tails of the continuous loading will
be reduced as well. This will undermine convergence for increasing mesh density.
To avoid this effect, we use a quadratic loading profile. This profile has the advantage of
being both continuous and having a finite extent, in contrast to e.g. a Gaussian loading
profile which needs an artificial cut-off. The quadratic loading profile with width w is
given by (3.3) and from normalization of the total applied force (3.4) we find the value
of α (3.5). The integral runs over the loading area in the xz-plane. Note that α and
P (x) have dimensions of pressure.
P(x) = α (1 − (x − xc)²/w²)   (3.3)
Fmax = ∫_A P(x, z) dA = ∫_0^{sz} dz ∫_{xc−w}^{xc+w} P(x) dx = (4/3) α w sz   (3.4)
α = 3Fmax/(4wsz)   (3.5)
On an element where the outer nodes have x-coordinates a and b, we define the element
pressure as in (3.6).
Pel = (1/(b − a)) ∫_a^b P(x) dx = (3Fmax/(4wsz)) [1 − ((b − xc)³ − (a − xc)³)/(3w²(b − a))]   (3.6)
With this definition we see that the total force can be written as a sum of the forces
acting on each element (3.7), assuming all elements have the same width ∆x. Fmax is
the total force acting on the load surface as expected.
Σ_elements Pel ∆x sz = sz ∫_{xc−w}^{xc+w} P(x) dx = Fmax   (3.7)
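The consistency check (3.7) can be carried out numerically. The sketch below (helper names are ours; the values w = 150 mm, sz = 10 mm, Fmax = 100 N follow the slab description) computes the element pressures from (3.6) and sums the resulting element forces:

```python
# Sketch of the quadratic loading profile: element pressures from (3.6)
# and the consistency check (3.7) that the element forces sum to Fmax.
# Helper names are ours; parameter values follow the slab setup.

def element_pressure(a, b, xc, w, Fmax, sz):
    """Average pressure (3.6) on an element spanning [a, b] in x."""
    alpha = 3.0 * Fmax / (4.0 * w * sz)                      # (3.5)
    return alpha * (1.0 - ((b - xc) ** 3 - (a - xc) ** 3)
                    / (3.0 * w ** 2 * (b - a)))

def total_force(xc, w, Fmax, sz, dx):
    """Sum the element forces over the loading area [xc-w, xc+w], (3.7)."""
    n = int(round(2 * w / dx))
    total = 0.0
    for i in range(n):
        a = xc - w + i * dx
        total += element_pressure(a, a + dx, xc, w, Fmax, sz) * dx * sz
    return total
```

Because Pel is defined as the exact average of P(x) over the element, the sum reproduces Fmax exactly for any element width that tiles the loading area.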
3.1.4 Infeasibility of contact simulations
The original experiments we conducted in this thesis work were contact simulations.
Inspired by the biological tactile sensing described in the introduction (see 1), we attempted to produce vibrations from friction in contact. These vibrations would have been excited either by stick-slip effects when the surface is smooth, or by bending and releasing of the ridges when the surface has large texture irregularities. These contact simulations did not lead to results in this thesis for a number of technical reasons and, more fundamentally, because of computational infeasibility.
First we will discuss the technical difficulties we came across that are specific to CalculiX and our implementation. They are also mentioned briefly in 2.3.2. The automatic
step size control contained two bugs: the NaN bug (page 24) and the unexpected lower
limit. The NaN bug did not affect the contact calculations, but does introduce the need
to work with a fixed increment size. Since contact calculations are very nonlinear, they
require a very small (fixed) increment to avoid instability.
The lower limit on increment size dt is a threshold given to CalculiX, which causes the calculation to stop when the automatic step size control decreases dt below this threshold. In CalculiX it should be possible to specify this lower bound in
the input file, but an apparent bug caused calculations to exit when dt was decreased
below 4× 10−7 s, even when the lower bound was specified at 1× 10−15 s.
Furthermore, contact calculations in CalculiX are only tested and recommended for
quadratic elements. This is in contrast with general advice from e.g. the Abaqus man-
ual [33], which recommends using only linear elements for contact calculations. The
lack of linear element support may further contribute to the lack of convergence of the
simulations and the need for an extremely small increment size.
Finally, we discuss the infeasibility of contact simulations. If we assume the technical errors mentioned above were solved, we can estimate the required computation time. Table 3.2 shows a comparison of system size, calculation time per step, step size and number of increments. The left column is based on the FEM calculations from our later work. The right column contains estimations for these
numbers when working with a setup including contact. We assume that this setup needs
to be more geometrically complex, for example to model a skin structure with ridges,
and therefore needs to contain more elements than the simple slab setup.
Table 3.2: Comparison of computation time for the slab simulations (observed) and the contact simulations (estimated).

                                  Slab simulations   Contact simulations
Number of elements                18,000             100,000
Computation time per increment    0.16 s             0.89 s
Simulation length                 2.0 s              0.5 s
Increment/step size dt            1×10⁻⁵ s           1×10⁻⁷ s
Number of increments              200,000            5,000,000
Total computation time            32,000 s           4.45×10⁶ s
                                  (8.9 hours)        (1236 hours = 51.5 days)
The numbers in the right column are an estimation. The computation time per incre-
ment is estimated from the linear relationship of computation time of a single explicit
increment with number of degrees of freedom. The step size is estimated to average
1 × 10−7 seconds when strong contact is incorporated, based on the automatic incre-
ment adaptation in our diverging simulations. The simulation time is a low estimate,
based on the physical measurements with the BioTac (1) and the minimum simulation
time it takes in our simulations to observe waves being generated and damped out. We see that the estimated total computation time is prohibitively large. This was the main reason to abandon the contact simulation path.
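The totals in table 3.2 follow from simple arithmetic, which can be checked directly (numbers copied from the table):

```python
# The totals in table 3.2 as arithmetic: the number of increments is
# the simulation length over the step size; total time is increments
# times cost per increment (all numbers copied from the table).

def total_hours(sim_length_s, dt_s, cost_per_increment_s):
    increments = sim_length_s / dt_s
    return increments * cost_per_increment_s / 3600.0

slab_hours = total_hours(2.0, 1e-5, 0.16)       # observed slab runs
contact_hours = total_hours(0.5, 1e-7, 0.89)    # estimated contact runs
```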
3.1.5 Units and materials
In the slab description 3.1.1 we mentioned that the material is rubber and mentioned its
physical dimensions. In fact this is an arbitrary choice, based on physical intuition and
the fact that in the BioTac (see introduction) the skin is made of silicon. However, this
arbitrary choice of material should not be seen as something of importance, since the
very same simulations could model a much larger piece of material with higher elastic
modulus. What is relevant is the ratio of the length of the slab to the speed of dilatational waves, v = √(Y/ρ) ≈ 233×10³ mm/s.
This is because the choice of units in finite element analysis is arbitrary. In pure me-
chanical calculations (without thermal component) there are three units that can be
chosen arbitrarily, from which the others will be derived. Table 3.3 shows the SI and
FEA units we used in our setup.
Table 3.3: SI units and the corresponding FEA units.

Quantity                    SI Unit            FEA Unit
Time                        s                  s
Distance                    m                  10⁻³ m = mm
Mass                        kg                 10⁻³ kg = g
Force                       N = kg·m/s²        g·mm/s² = µN
Density                     kg/m³              g/mm³ = 10⁶ kg/m³
Elastic modulus, pressure   Pa = kg/(m·s²)     g/(mm·s²) = Pa
The material properties of rubbers vary widely and should preferably be fitted to tensile strength experiments. Since our calculations do not have the goal of mimicking
an existing physical system, we have not done such experiments. Instead we based the
material properties on the range of values found for silicone rubber in the Material
Property Database 1. The material properties are listed below in table 3.4 in SI units
and in the FEA units. Note that a different choice of material properties has exactly the
same effect as redefining the units thus choosing different spatial and time dimensions,
which are chosen arbitrarily in the first place.
Table 3.4: The material properties in SI units and FEA units.

Quantity                          SI Units       FEA Units
Density ρ                         920 kg/m³      0.920×10⁻³ g/mm³
Elastic modulus Y                 50 MPa         50 MPa
Poisson ratio ν (dimensionless)   0.45           0.45
Speed of waves v                  233 m/s        233×10³ mm/s
Typical total loading force       100 N          1×10⁸ µN
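A quick consistency check of the unit system: computing the dilatational wave speed v = √(Y/ρ) with the SI values and with the FEA values of table 3.4 must give the same physical speed.

```python
# Consistency check of the unit system: the wave speed v = sqrt(Y/rho)
# computed in SI units and in FEA units (mm, g, s) must agree,
# differing only by the mm/s versus m/s conversion factor of 1000.

import math

Y_si, rho_si = 50e6, 920.0           # Pa, kg/m^3
v_si = math.sqrt(Y_si / rho_si)      # m/s, about 233 m/s

Y_fea, rho_fea = 50e6, 0.920e-3      # Pa = g/(mm*s^2), g/mm^3
v_fea = math.sqrt(Y_fea / rho_fea)   # mm/s
```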
1 http://www.matweb.com/search/datasheet.aspx?MatGUID=cbe7a469897a47eda563816c86a73520
Finally, we also implemented the rubber with Mooney-Rivlin material nonlinearity. CalculiX provides a number of isotropic nonlinear material models, like Arruda-Boyce and Mooney-Rivlin, which are activated with the *HYPERELASTIC keyword in the material
definition. In the first experiments, this material seemed to cause numerical instability
in CalculiX, although later this unexpected behavior was identified as the NaN bug, see
2.3.2. Therefore we decided to continue by modeling the material as a linear elastic
material with geometrical nonlinearity (these attempts are described in 4.3). In future
work, implementing material nonlinearity is a promising direction to extend the work
presented in this thesis.
3.1.6 Input signals
Figure 3.4: Wiener noise with τW = 0.001 s, thus cut-off frequency f ≈ 160 Hz. The break frequency is indicated with a vertical line in the frequency plot.
We introduce a type of input signal we will use often in this text: Wiener noise. This is the kind of noise generated by a Wiener process. We generated it by leaky integration of white noise:
u(n + 1) = (1 − α) u(n) + α r(n + 1)   (3.8)
where r(n + 1) is a Gaussian random variable.
This corresponds to the Euler method solution of the differential equation
dx(t)/dt = −x(t)/τW + r(t)   (3.9)
which is a first order low-pass filter with cut-off frequency ω = 1/τW, or f = 1/(2πτW). The relation is α = dt/τW. A plot of Wiener noise and its spectrum is shown in figure 3.4.
This noise is a useful approximation of pure white noise in continuous time. We will
often use it as an input noise for testing and as additive noise in the classification tasks.
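The generator (3.8) can be sketched directly. In the sketch below (illustrative; the seed is arbitrary) dt is our simulation step size and τW matches figure 3.4:

```python
# Sketch of the Wiener noise generator (3.8): leaky integration of
# Gaussian white noise with alpha = dt / tau_W. Parameter values are
# illustrative: dt matches our simulation step, tau_W matches fig. 3.4.

import random

def wiener_noise(n, dt=1e-5, tau_w=1e-3, seed=0):
    rng = random.Random(seed)
    alpha = dt / tau_w
    u, out = 0.0, []
    for _ in range(n):
        # (3.8): leaky integration of a Gaussian random variable
        u = (1.0 - alpha) * u + alpha * rng.gauss(0.0, 1.0)
        out.append(u)
    return out
```

The variance of the output is far below that of the unit-variance white noise being integrated, reflecting the low-pass filtering of (3.9).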
3.2 Workflow of python analysis framework
Figure 3.5: Workflow of the python analysis framework: a fembatch object holds the fixed and variable parameters p0, ..., pN−1 defining setups 0 through N−1; pyFormex generates the *.inp files; CalculiX runs in a managed pool of subprocesses; the resulting *.dat files are parsed and passed to data analysis and plotting.
Figure 3.5 sketches the workflow of the python analysis framework we developed in this
thesis. The right hand column shows the order of the actions in the python analysis
framework. The fembatch object is used for defining one batch of (typically 8 to 20)
different calculations. It contains the N setup objects, which contain all information
about an individual setup. The setups of one batch typically have all parameters in
common but one parameter is varying (e.g. the damping time constant or mesh density).
By calling the setup’s write_abq() function, the pyFormex model is generated and used
to create the *.inp input files for Abaqus or CalculiX. The simulations are then either
started locally, on the ResLab cluster or on the UGent HPC. They are managed by
the python script that launches C subprocesses in parallel with a multiprocessing pool,
where C is the number of threads on the node. The *.dat files are parsed and the
parsed data is both pickled on disk and saved as a list of numpy arrays in the fembatch’s
variable raw_data. This data is then available for further data analysis and plotting. The
fembatch object has some convenient functions for plotting comparisons of timetraces of the displacements of single nodes or displacement profiles over all nodes.
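The parallel launching step can be sketched as follows. The real framework wraps the ccx binary in a multiprocessing pool; here a thread pool and a stub job function stand in (threads suffice when the python side mostly waits on external subprocesses), and all names are illustrative:

```python
# Sketch of launching a batch of solver runs in parallel. The real
# framework wraps the ccx binary in a pool of C workers; here a stub
# job function stands in, and all names are illustrative.

from multiprocessing.pool import ThreadPool

def run_job(jobname):
    # The real version would launch the solver, e.g.:
    #   subprocess.run(["ccx", jobname], check=True)
    return jobname + ".dat"            # pretend the solver wrote output

def run_batch(jobnames, n_threads=4):
    with ThreadPool(n_threads) as pool:
        return pool.map(run_job, jobnames)
```

`pool.map` preserves the order of the setups, so result files can be matched back to the parameter values of each setup.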
3.3 Implementation of material damping
In the introduction (chapter 1) we discussed the properties a physically realistic material needs in order to do information processing. Damping is an important aspect for obtaining a fading memory. In finite element analysis packages, damping is typically implemented with the Rayleigh damping model (2.23). This damping model is an approximation of the complex viscoelastic effects that physically cause the damping but that are both impossible and uninteresting to model exactly.
CalculiX, the tool we used for solving the finite element equations, does not yet provide a possibility to include material damping in direct integration simulations, although it does provide Rayleigh damping for modal dynamics [29, p. 306]. Since CalculiX is open source, we implemented material damping ourselves. We did not implement Rayleigh damping but a simpler, ad-hoc damping model described in 3.3.1, equation (3.14): in each increment we reduce the velocities by multiplying them by a factor (1 − β). To give the damping a physical interpretation, we link the damping constant β to the energy decay time constant τd in 3.3.2. Finally, in 3.3.3 we discuss the experiments that confirm the validity of our approach.
3.3.1 Implementation
In 2.2.1 we introduced the Rayleigh damping matrix as a linear combination of the mass and stiffness matrices (2.23). This approach is not available in CalculiX version 2.5. The solution we implemented is to multiply the velocities on each increment by a factor 1 − β, where β is a dimensionless parameter that is typically small, e.g. β = 1 × 10⁻⁴. This avoids the complexity of composing the Rayleigh damping matrix on every increment. The adaptations of CalculiX are in the file nonlingeo.c.
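A Python sketch of the velocity update our modified nonlingeo.c performs on each increment (`damped_increment` is an illustrative name; the real code operates on CalculiX's internal C arrays):

```python
def damped_increment(x, v, a, dt, beta):
    """One explicit increment with the ad-hoc beta damping of eq. (3.14):
    the undamped new velocity v + dt*a is scaled by (1 - beta)."""
    v_new = (1.0 - beta) * (v + dt * a)
    x_new = x + dt * v_new
    return x_new, v_new

# Free rigid-body motion (no forces): the velocity decays as (1 - beta)^i,
# even though physically it should stay constant -- the unrealistic case
# discussed in 3.3.2, harmless here because the slab boundaries are fixed.
beta, dt = 1e-4, 1.56e-5
x, v = 0.0, 1.0
for _ in range(1000):
    x, v = damped_increment(x, v, 0.0, dt, beta)
```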
Rayleigh damping
To compare the effect of this β damping with the effect of a damping matrix, we first analyze the role of Rayleigh damping in the discretized equations. The explicit time integration process was given in equations (2.24)–(2.25); more specifically, the velocity integration step (2.25) is

$\dot{X}_{t+dt/2} = \dot{X}_{t-dt/2} + dt\,\ddot{X}_t$  (3.10)
Combining this with the expression for the acceleration (2.24) and the expression for the internal stresses $I_t$ including Rayleigh damping, we obtain

$\dot{X}_{t+dt/2} = \underbrace{\dot{X}_{t-dt/2} + dt\,M^{-1}(F_t - K X_t)}_{=\ \text{undamped}\ \dot{X}^*_{t+dt/2}} - dt\,M^{-1} C \dot{X}_{t-dt/2}$  (3.11)
$= \dot{X}^*_{t+dt/2} - dt\,M^{-1}(c_1 M + c_2 K)\,\dot{X}_{t-dt/2}$  (3.12)

where we identify the first term on the right-hand side as the undamped new velocity, which we call $\dot{X}^*_{t+dt/2}$. The second term is the velocity damping term in the case of Rayleigh damping:

$\gamma_C = dt\,(c_1 + c_2 M^{-1} K)\,\dot{X}_{t-dt/2}$  (3.13)

We see that this velocity damping term is proportional to the velocity evaluated at time $t - dt/2$.
β damping
We compare this with our β damping implementation. Here we calculate the internal stresses $I_t$ without damping, but add the damping externally by multiplication with $(1 - \beta)$:

$\dot{X}_{t+dt/2} = (1-\beta)\,(\dot{X}_{t-dt/2} + dt\,\ddot{X}_t)$  (3.14)
$= \dot{X}^*_{t+dt/2} - \beta\,\dot{X}^*_{t+dt/2}$  (3.15)
$= \dot{X}^*_{t+dt/2} - \dfrac{\beta}{1-\beta}\,\dot{X}_{t+dt/2}$  (3.16)

Thus we can conclude that the velocity damping term in the case of β damping is

$\gamma_\beta = \dfrac{\beta}{1-\beta}\,\dot{X}_{t+dt/2}$  (3.17)

which is similar to the Rayleigh damping term (3.13) when the damping matrix is purely proportional to the mass matrix $M$, thus $c_2 = 0$. The only remaining difference is that in (3.17) the damping term is proportional to the velocity at time $t + dt/2$ instead of at $t - dt/2$.
3.3.2 Damping time constant of impulse response
To give the damping factor β a physical interpretation, we link the damping constant β with the energy decay time constant $\tau_d$ in this section and in 3.3.3. We will use the slab experiment with a very short loading impulse

$u(t) = \begin{cases} 1 & \text{if } t < 5\times 10^{-3}\,\text{s} \\ 0 & \text{if } 5\times 10^{-3}\,\text{s} < t < 5\,\text{s} \end{cases}$  (3.18)

We read out the impulse response at the central top node, $y(x=0, t)$, which we refer to in this section as $y(t)$. We expect this response to fluctuate unpredictably due to the many high-frequency waves, but to have a decaying envelope due to the damping. We call this envelope $\bar{y}(t)$ and make the exponential decay ansatz

$\bar{y}(t) = \bar{y}(0) \exp\!\left(-\dfrac{t}{\tau_d}\right)$  (3.19)

with $\tau_d$ the experimental envelope decay parameter. This assumption is confirmed in the experiments, see figure 3.6b.
Velocity damping τd,v
First we examine how the velocities in the system are damped by the β damping. To estimate the velocity damping, we consider a volume moving as a rigid body with an initial velocity, without influence of external forces. In a physically realistic simulation the velocity would not decrease in this situation; due to our damping implementation, however, the velocity decreases exponentially. Although this unrealistic behavior would be a problem if there were a macroscopic movement of the material, in our experiments the boundaries of the material are fixed. Therefore no macroscopic movement of the slab can occur, and the time-averaged velocity of each node will be zero: $\langle \dot{X} \rangle_T \approx 0$.

In the following equations (3.20)–(3.24) we label the increments as $i = 0 \ldots N_T$. From the definition of the velocity damping factor we see that the velocity of the free movement evolves as $v_{i+1} = (1-\beta)\,v_i$, thus decaying exponentially. We express this exponential relation as

$v(t_i) = v_0 \exp\!\left(-\dfrac{t_i}{\tau_{d,v}}\right)$  (3.20)

We want to relate the factor β to the exponential damping time constant $\tau_{d,v}$. Let $N_d$ be the number of increments corresponding to $t = \tau_{d,v}$, so $N_d = \tau_{d,v}/dt$. We see from
(3.20) that
$v(\tau_{d,v}) = v_0 \exp(-1) = v_0 (1-\beta)^{N_d}$  (3.21)
$N_d \ln(1-\beta) = -1$  (3.22)
$\tau_{d,v} = -\dfrac{dt}{\ln(1-\beta)}$  (3.23)

From a Taylor expansion of the denominator in (3.23), we obtain a very simple relation between the damping constant β, the simulation increment $dt$ and the velocity decay time constant $\tau_{d,v}$:

$\tau_{d,v} = \dfrac{dt}{\beta}$  (3.24)
Although the assumption of uniform velocity made for this derivation is quite strong, the result (3.24) will prove to be in very good agreement with the experiments, see subsection 3.3.3. Therefore we use relation (3.24) as an approximation in situations with non-uniform velocity, as is the case with propagating waves in the slab.
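The accuracy of the first-order approximation (3.24) relative to the exact relation (3.23) is easy to check numerically for the parameter values used later (β = 10⁻⁴, dt = 1.56 × 10⁻⁵ s):

```python
import math

beta, dt = 1e-4, 1.56e-5
tau_exact = -dt / math.log(1.0 - beta)   # eq. (3.23)
tau_approx = dt / beta                   # eq. (3.24), first-order Taylor expansion

print(tau_approx)                                # 0.156 s
print(abs(tau_exact - tau_approx) / tau_exact)   # relative difference ~ beta/2
```

For β this small the two relations agree to within roughly β/2 ≈ 5 × 10⁻⁵ relative error, so (3.24) is safe to use throughout.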
Energy damping
In the slab system under impulse loading, we can relate the velocity damping to the energy damping and to the time constant of the exponential envelope $\bar{y}(t)$ of $y(t)$. First we assume that the time-averaged potential energy in the system equals the time-averaged kinetic energy: $\langle E_{kin} \rangle = \langle E_{pot} \rangle$. This is a general property of mechanical systems where the potential energy between two particles (here nodes) grows quadratically with their distance, as in the harmonic oscillator, and it is proven by the virial theorem. The total energy is then $\langle E_{tot} \rangle = \langle E_{kin} \rangle + \langle E_{pot} \rangle = 2\langle E_{kin} \rangle$, and can thus be related to the velocity over time:

$\langle E_{tot} \rangle = 2\langle E_{kin} \rangle$  (3.25)
$= m\langle v^2 \rangle$  (3.26)
$= m v_0^2 \exp\!\left(-\dfrac{2t}{\tau_{d,v}}\right)$  (3.27)
Note that we assume an appropriate time window for averaging that is long enough with
respect to the fast vibrations and short with respect to the damping time constant τd,v.
We now make the plausible assumption that the total energy in the material at a certain time is related to the envelope $\bar{y}(t)$ of the vibrations through an exponent λ. This is for example the case for the harmonic oscillator ($\langle E_{tot} \rangle \sim x_{max}^2$) and the vibrating cantilever beam ($\langle E_{tot} \rangle \sim x_{max}^4$ [35]):

$\langle E_{tot} \rangle \sim (\bar{y}(t))^\lambda$  (3.28)
$\sim (\bar{y}(0))^\lambda \exp\!\left(-\lambda\dfrac{t}{\tau_d}\right)$  (3.29)

We also know from (3.27) that

$\langle E_{tot} \rangle \sim \exp\!\left(-\dfrac{2t}{\tau_{d,v}}\right)$  (3.31)

Combining these, we can link the experimental envelope time constant $\tau_d$ with the theoretical velocity damping time constant $\tau_{d,v}$ through the exponent λ, which has to be determined experimentally:

$\tau_d = \dfrac{\lambda\, dt}{2\beta}$  (3.32)
3.3.3 Damping experiments
The impulse response experiment applies a short load in the middle of the bottom free side of the slab, as described in subsection 3.3.2. This impulse loading can be seen as hitting the slab in the middle and watching the resulting vibrations at the top readout side. Figure 3.6 shows a linear and a logarithmic plot of the y-directed displacement at the central node. For these experiments, the mesh size was 40 mm, the slab side had length L = 1600 mm and the load was applied to an area of width 2w = 240 mm.
Figure 3.6: Displacement in the y-direction at the central top node for a slab loaded with a short load in the center (a short hit), on (a) a logarithmic and (b) a linear y-scale. The material has damping constant β = 1 × 10⁻⁴ and dt = 1.56 × 10⁻⁵ s. The envelope is a fit of $\bar{y}(t) = \bar{y}(0)\exp(-t/\tau_d)$ with fitted values $\bar{y}(0)$ = 0.61 mm and $\tau_d$ = 0.31 s.
Figure 3.7: Displacement damping time constant $\tau_d$ versus β on a log-log scale, showing the fitted $\tau_d$ values and the line $\tau_d = 2dt/\beta$. The fitted relation is $\tau_d = 2.069\,dt\,\beta^{-0.996}$, which corresponds almost perfectly to $\tau_d = \frac{\lambda\,dt}{2\beta}$ with λ = 4.
Figure 3.6 shows the response of the central top node after the hit. In this experiment the damping parameter is β = 1 × 10⁻⁴ and the increment dt = 1.56 × 10⁻⁵ s. The y-displacement as a function of time is an irregular signal, but the maximal displacement nevertheless follows a clear exponential decay, as expected. To fit the envelope, y(t) is binned and the maximum of the absolute values in each bin is used to fit $\bar{y}(t) = \bar{y}(0)\exp(-t/\tau_d)$. For this experiment the fitted values are $\bar{y}(0)$ = 0.61 mm and $\tau_d$ = 0.31 s.
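The binning-and-fit procedure can be sketched as follows, tested on a synthetic signal with a known envelope; `fit_envelope`, the bin count and the synthetic signal are illustrative assumptions, not the thesis code:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_envelope(t, y, n_bins=50):
    """Bin |y(t)|, take the maximum per bin, and fit y0 * exp(-t / tau)
    through the bin maxima, as done for figure 3.6."""
    edges = np.linspace(t[0], t[-1], n_bins + 1)
    idx = np.digitize(t, edges[1:-1])                       # bin index 0..n_bins-1
    t_bin = np.array([t[idx == i].mean() for i in range(n_bins)])
    y_bin = np.array([np.abs(y[idx == i]).max() for i in range(n_bins)])
    popt, _ = curve_fit(lambda tt, y0, tau: y0 * np.exp(-tt / tau),
                        t_bin, y_bin, p0=(y_bin[0], 0.3))
    return popt                                             # (y0, tau_d)

# Synthetic impulse response: irregular oscillation under a known exponential
# envelope with tau_d = 0.31 s, mimicking figure 3.6.
t = np.linspace(0.0, 1.8, 20000)
rng = np.random.default_rng(0)
y = np.exp(-t / 0.31) * np.sin(2 * np.pi * 200 * t + rng.uniform(0, 1, t.size))
y0, tau = fit_envelope(t, y)
```

Taking the bin maximum rather than the bin mean matters: the mean of an oscillation is near zero, while the maximum tracks the envelope.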
By performing the same simulation for different damping parameters β, we find experimentally the relation between the envelope decay parameter $\tau_d$ and β (see figure 3.7):

$\tau_d = 2.069\, dt\, \beta^{-0.996}$  (3.33)
$\approx \dfrac{2\, dt}{\beta}$  (3.34)

This corresponds to (3.32) with an exponent λ = 4. The damping experiments thus validate relationship (3.32) and give an estimate λ = 4 for the energy–displacement relation $\langle E_{tot} \rangle \sim (\bar{y}(t))^\lambda$.
3.4 Convergence experiments
The finite element method is a numerical method used to solve complex problems; it is therefore crucial to assess the reliability of the results. As we mentioned in 2.1.1, increasing the mesh density increases the solution quality, theoretically converging to the true solution when the mesh elements are infinitesimally small and the integrals are approximated exactly. On the other hand, we want the discretization to be as coarse as possible to reduce the computational cost. To assess the convergence, the approach is to calculate the behavior of the same physical system, discretized in meshes with decreasing mesh size. The solution from the calculation with the smallest mesh size is used as a reference for comparing the other calculations.
In 3.4.1 we discuss the error measures introduced to estimate the convergence: the time-dependent moving error ε(t) and its time average ε. We also explain what calculation we use to obtain these error measures in the next subsections. The mean error ε as a function of mesh size for the slab setup is discussed in 3.4.2. The moving error ε(t) for our setup is discussed in 3.4.3.
3.4.1 Convergence experiments and measures
To measure the convergence of the calculations, we use our slab setup as we described in
section 3.1.1. The type of calculation is, like in all further experiments, an explicit direct
calculation with geometrical nonlinearity. The setup includes the material damping as
described in 3.3 with a damping time constant τd = 0.2 s. We will use the top free side
of the slab as the readout side and indicate the y-displacements as y(x, t) as usual.
The input force signal u(t) determines the behavior and displacements in the slab. Since we want to estimate how the error evolves over time, it is important to have a constant amount of energy in the input, which rules out an impulse as input. On the other hand, we do not want to measure the response to one specific frequency, so noise is preferred. Finally, the system is required to behave smoothly in a reasonable range of frequencies. We therefore use low-passed white noise with a maximum frequency of 400 Hz. Of course the noise signal is the same for all setups in the convergence experiment. An example of this input signal and the displacement response of the central node of the readout side is shown in figure 3.8.
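Such an input can be generated along these lines; the Butterworth filter order, the sampling rate and the function name are illustrative assumptions, since the text only fixes the 400 Hz cutoff:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 64000.0   # sampling rate for illustration, of the order of 1/dt
f_max = 400.0  # cutoff frequency of the low-pass filter

def lowpassed_white_noise(T, fs, f_max, seed=0):
    """Gaussian white noise of duration T, low-pass filtered at f_max (Hz)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(int(T * fs))
    b, a = butter(4, f_max / (fs / 2))   # 4th-order low-pass, normalized cutoff
    return filtfilt(b, a, white)         # zero-phase (forward-backward) filtering

u = lowpassed_white_noise(2.0, fs, f_max)
```

Fixing the seed makes the noise reproducible, which is what guarantees that all setups in the convergence experiment see the same input.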
In the literature two types of error measures are described: simple displacement-based error measures, and advanced error estimators based on the energy in the system, which avoid problems with singularities in the case of point loading [27, p. 365]. These energy-based estimators are however based on the static equations, while we use dynamic equations. Since we do not need to worry about singularities, we introduce a simple error estimate based on the root mean square (RMS) of the error in the displacements y(x, t) at the readout side. Writing the reference solution as $\bar{y}(x,t)$, we define the moving error, similar to [27] for the static case, as

$\varepsilon^*(t) = \left\| y(x,t) - \bar{y}(x,t) \right\|_{L^2}$  (3.35)
$= \left( \dfrac{1}{L} \displaystyle\int_{-L/2}^{L/2} \left( y(x,t) - \bar{y}(x,t) \right)^2 dx \right)^{1/2}$  (3.36)
Figure 3.8: The input signal u(t) (µN) and a timetrace of the response y(x = 0, t) (mm) of the central readout node (at x = 0, y = L). The two top plots show the full simulation, with the warm-up time t0 = 0.5 s indicated in red. The bottom two plots are a close-up of the portion indicated with the green markers, with the reference solution (sx = 18.75) in red and a less accurate solution (sx = 30) in blue.
where the subscript on the norm indicates taking the L² norm by integration over x as in (3.36). This moving error shows the evolution of the error as a function of time.

To define a global error for one simulation, we simply average ε*(t) over the total simulation time (with $\bar{y}$ the reference solution):

$\varepsilon^* = \left\langle \left\| y(x,t) - \bar{y}(x,t) \right\|_{L^2} \right\rangle$  (3.37)
$= \dfrac{1}{T - t_0} \displaystyle\int_{t_0}^{T} \left\| y(x,t) - \bar{y}(x,t) \right\|_{L^2} dt$  (3.38)

Here t0 is the warm-up time: only the later samples are taken into account. In figure 3.8 this warm-up time is indicated with the red marker. Our simulations have a total time T = 2.0 s, and we discarded the solution before the warm-up time t0 = 0.5 s to calculate the mean error ε.
To interpret the results more easily, we normalize these error measures by the time-averaged RMS of the y-displacements of the reference solution at the readout side,

$\eta = \left\langle \left\| \bar{y}(x,t) \right\|_{L^2} \right\rangle$  (3.39)

This allows us to define the relative error measures ε(t) and ε, which can be interpreted as fractional errors. We introduce the relative moving error

$\varepsilon(t) = \dfrac{\varepsilon^*(t)}{\eta}$  (3.40)
$= \dfrac{\left\| y(x,t) - \bar{y}(x,t) \right\|_{L^2}}{\left\langle \left\| \bar{y}(x,t) \right\|_{L^2} \right\rangle}$  (3.41)

and the relative mean error

$\varepsilon = \dfrac{\varepsilon^*}{\eta}$  (3.43)
$= \dfrac{\left\langle \left\| y(x,t) - \bar{y}(x,t) \right\|_{L^2} \right\rangle}{\left\langle \left\| \bar{y}(x,t) \right\|_{L^2} \right\rangle}$  (3.44)
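Collecting the definitions, the discrete computation amounts to a few lines; the function name and the (n_times, n_nodes) array layout are illustrative assumptions:

```python
import numpy as np

def relative_errors(y, y_ref, dt, t0):
    """Relative moving error eps(t) and relative mean error eps, cf. (3.41)/(3.44).
    y and y_ref are the solution and the reference solution at the readout side,
    with shape (n_times, n_nodes), sampled on the same mesh and times."""
    eps_star = np.sqrt(np.mean((y - y_ref) ** 2, axis=1))     # spatial RMS, (3.36)
    i0 = int(round(t0 / dt))                                  # discard warm-up
    eta = np.mean(np.sqrt(np.mean(y_ref ** 2, axis=1))[i0:])  # normalization (3.39)
    return eps_star / eta, np.mean(eps_star[i0:]) / eta

# Sanity check: a solution that is a scaled copy of the reference gives a
# relative mean error equal to the scale difference (10 % here).
t = np.arange(0.0, 2.0, 1e-3)
y_ref = np.sin(2 * np.pi * 5 * t)[:, None] * np.ones((1, 11))
eps_t, eps = relative_errors(1.1 * y_ref, y_ref, dt=1e-3, t0=0.5)
```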
A first technical aspect that needs attention is the approximation of the integral. To calculate the integral of the difference y − ȳ, we approximate the integral by a sum over the nodes on the top side. This requires a uniform sampling distance of y(x, t), and thus a uniform mesh size. Clearly this will not be the case when comparing calculations with different mesh sizes, as pictured in figure 3.9. Therefore we upsampled
Figure 3.9: A part of the displacement profile at one timepoint (t = 1.25 s) for two simulations with a different mesh size (sx = 18.75 and sx = 30.00). The approximate solution needs upsampling to match the mesh density of the reference solution in order to approximate the integral with a sum.
the solutions in the x-direction to match the number of sample points of the reference solution. The function scipy.signal.resample uses the Fourier method to obtain the best upsampling or downsampling quality [36]. We confirmed that resampling to different mesh sizes affected the accuracy of the results by less than 1 %.
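The resampling step can be sketched as follows. Note that scipy.signal.resample assumes a periodic signal, which the slab profile is not exactly; this sketch therefore uses a periodic test profile, for which the Fourier method is exact:

```python
import numpy as np
from scipy.signal import resample

# A smooth profile sampled on a coarse periodic grid (51 points)...
x_coarse = np.linspace(0, 2 * np.pi, 51, endpoint=False)
profile_coarse = np.sin(3 * x_coarse)

# ...upsampled with the Fourier method to a denser grid (81 points),
# analogous to matching a coarse-mesh readout to the reference mesh.
profile_fine = resample(profile_coarse, 81)
x_fine = np.linspace(0, 2 * np.pi, 81, endpoint=False)
```

For a band-limited periodic signal the Fourier method reconstructs the dense samples essentially to machine precision; for the non-periodic slab profiles, small edge effects remain, consistent with the sub-1 % accuracy impact reported above.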
A second, more fundamental issue with our error measure is that the correct reference solution is actually unknown. This is in contrast to the situation where a model of a very simple setup can be validated against an exact analytical solution, for example a static force deflecting a beam. In our situation, we want to validate the damped dynamical setup with geometrical nonlinearity, so there is no hope of comparing with an exact analytical solution, even for a very simple signal. We resolve this by taking the calculation with the finest mesh as the reference solution. It is important to realize that in this case the absolute value of ε has little meaning, since the choice of the mesh size of the reference solution determines the offset of ε. Therefore it is important to look at the derivative of the error, as we describe in the next subsection 3.4.2.
3.4.2 Mean error as a function of mesh size
To determine a mesh size for our slab setup that strikes a good balance between physical correctness and computational cost, we ran a batch of experiments with a 400 Hz low-passed white noise signal as input. We used a slab of size L = 1500 mm and discretized it into elements with varying edge size sx. The
corresponding number of elements in the mesh and the computation time are found in
table 3.5.
# Elems   # Nodes   sx (mm)   Mean err. ε   Comp. time
80 × 80   45603     18.75     0.000         45:35:07
70 × 70   35003     21.43     0.041         35:33:51
60 × 60   25803     25.00     0.095         24:33:02
50 × 50   18003     30.00     0.176         17:14:50
40 × 40   11603     37.50     0.299         10:00:42
30 × 30    6603     50.00     0.429          4:57:44
Table 3.5: Convergence experiments according to number of elements.
Figure 3.10: Mean relative error ε for an increasing number of nodes N, with the working point sx = 30 marked. The fit shows the fitted exponential ε = −0.0288 + 0.72 exp(−N / 14.5 × 10³), with baseline −0.0288.
Figure 3.10 gives ε of these calculations as a function of the number of nodes (similar to [27, p. 410]), with an exponential fit. As mentioned in 3.4.1, this information has to be handled with care: obviously the densest mesh solution has a zero mean error ε, since it served as the reference solution. As expected, the deviation from this reference solution decreases monotonically with increasing mesh density. We fitted the exponential $\varepsilon = a + b\exp(-N/c)$ to this decreasing error (an ad-hoc choice, not theoretically founded) and found the values a = −0.02880, b = 0.72062 and c = 14576. This exponential quantifies the decrease of the error, and the value a gives the offset of the exponential. This offset can serve as an estimate of the error offset caused by the lack of an exact solution, i.e. as an estimate of the relative mean error of the finest solution (80 × 80 elements). This offset is 2.88 %, which is reasonably small.
As indicated in figure 3.10, the working point for the further simulations in this thesis is a slab size of 1500 mm × 1500 mm (thus L = 1500 mm) with brick elements of size sx × sy × sz = 30 mm × 30 mm × 20 mm. The simulations at this working point have a rather high relative error. This is a practical consideration: the computational cost becomes unbearably high when executing longer simulations. To justify the choice of this relatively coarse mesh and high error, an important aspect is not captured in this picture: the evolution of the error over time. This evolution is given by ε(t) and is important for estimating whether the error stays bounded for longer simulation times. This is discussed in the next subsection.
3.4.3 Time evolution of error
Figure 3.11: Relative moving error ε(t) for three experiments with different mesh sizes (sx = sy = 37.5, 30.0 and 21.4). For the plot, the solution has been smoothed with a Hamming window of size 0.01 s. It is clear that after the warm-up time t0 = 0.5 s there is no drift of ε(t).
In this work we start from a different point of view than most mechanical FEA problems. We are not interested in exact displacement magnitudes or accurate estimates of maximum stresses to find the maximal loading of a structure. More important is to capture the qualitative behavior of the dynamical system over the relevant timescale, a few times the damping time constant τd = 1.0 s. Therefore we accept this relatively large error, about 20 % in the relative mean error ε.
A very important aspect in justifying this choice, however, is how the error evolves in time. If ε(t) grows exponentially with time, the simulation is unstable and the result of a calculation with too coarse a mesh cannot be used. In figure 3.11, however, we see that the error reaches a plateau as a function of time, so even for long simulations the error does not grow out of bounds. This means that the solution with sx = 30.0 mm stays as close to the correct solution after a long simulation time as it was in the beginning. Without this property, a longer simulation would need to be carried out with a finer mesh to obtain the same accuracy at the end of the simulation. An essential element in obtaining this stability is the material damping, which dissipates energy from the system and exponentially reduces the influence of past input as time progresses.
Chapter 4
Slab experiments
In chapter 2 we introduced the finite element method for solid mechanics (2.1), extended the formulation to time-dependent problems (2.2) and introduced pyFormex and CalculiX as the tools we used (2.3). Chapter 3 introduced the setup we study in this work: a 2D rectangular slab with in-plane loading (3.1). The workflow of our script was discussed in 3.2, and the CalculiX damping implementation was presented in 3.3. In 3.4 we assessed the convergence of the setup and concluded to use a slab size of 1500 mm × 1500 mm (thus L = 1500 mm) with brick elements of size sx × sy × sz = 30 mm × 30 mm × 20 mm.
In this chapter we describe the experiments we performed to understand the behavior of the slab system. In section 4.1 we discuss the eigenmodes and eigenfrequencies. Section 4.2 is concerned with the dynamic simulations including damping and geometrical nonlinearity, where the input signal is a pure sine wave of varying frequency; we examine the steady-state profiles that build up on the top readout side. In section 4.3 we discuss the effect of increasing the force amplitude, in an attempt to see the effect of geometric nonlinearity; we conclude that it renders the simulations unstable. Finally, in section 4.4 we stay in the low-force regime and confirm the linearity properties of the simulations.
In the next chapter 5 we further develop the analysis of the slab as a linear system.
4.1 Eigenmode analysis
We introduced the concept of eigenmodes and eigenfrequencies in 2.2.3. Eigenmodes or
natural modes are the shapes of vibrations that can occur freely in the undamped system,
defined by its geometry, material properties and boundary conditions. Each eigenmode
has an associated eigenfrequency of vibration. The corresponding displacement vector
Figure 4.1: The y-displacement color profiles of the first four eigenmodes, with eigenfrequencies (a) 42.67 Hz, (b) 82.95 Hz, (c) 84.94 Hz and (d) 98.02 Hz.
Figure 4.2: The first 50 eigenfrequencies of the slab system (eigenfrequency index versus frequency in Hz).
is
$X(x, y, t) = \phi(x, y)\, e^{i\omega t}$  (2.28, rev)
It is important to realize that in (2.28) the total solution is written as a product of the eigenmode and a time-dependent part oscillating at the eigenfrequency. The eigenmode is a function of position only, and corresponds to a snapshot of the vibration at maximal displacement.
The eigenmodes and eigenfrequencies can be found from the finite element description
of the structure by solving the generalized eigenvalue equation
Kφ = ω2Mφ (2.29, rev)
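Equation (2.29) can be solved with any generalized symmetric eigensolver. As a minimal sketch, here for a 1D fixed-fixed chain of point masses standing in for the slab's assembled K and M (an illustrative toy system, not the thesis model):

```python
import numpy as np
from scipy.linalg import eigh

# Toy 1D analogue of eq. (2.29): n unit masses coupled by unit springs with
# both ends fixed; K is tridiagonal, M is the identity.
n = 20
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)

# Generalized eigenvalue problem K phi = omega^2 M phi.
omega2, phi = eigh(K, M)
omega = np.sqrt(omega2)   # angular eigenfrequencies, in ascending order
```

Each column of `phi` is one eigenmode; for this chain the exact eigenvalues are ω² = 2(1 − cos(jπ/(n+1))), which the solver reproduces.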
Figure 4.1 shows the first four eigenmodes in color plots. The first eigenmode uses a scaling where red means φ(x) = 0 and purple means maximal displacement. The other eigenmodes use a scaling where green means y = 0 and blue and red are opposite displacements.
Figure 4.2 and table 4.1 show the first eigenfrequencies. Notice that the gap between the first and the second eigenfrequency is large (40.28 Hz); after that the frequencies are rather closely spaced (typically less than 10 Hz apart).
Table 4.1: The first 60 eigenfrequencies.

Mode  f (Hz)   Mode  f (Hz)   Mode  f (Hz)   Mode  f (Hz)
1     42.67    16    192.93   31    274.40   46    331.61
2     82.95    17    209.54   32    279.00   47    336.61
3     84.94    18    214.72   33    288.65   48    338.80
4     98.02    19    217.41   34    289.79   49    340.61
5     114.29   20    218.42   35    291.37   50    346.15
6     117.73   21    220.20   36    292.47   51    349.80
7     129.35   22    228.46   37    299.46   52    353.36
8     146.09   23    232.48   38    304.06   53    358.25
9     149.91   24    238.05   39    309.53   54    361.99
10    159.64   25    248.22   40    311.00   55    367.34
11    163.39   26    254.36   41    319.94   56    368.93
12    169.48   27    256.30   42    323.17   57    372.09
13    172.73   28    262.80   43    327.71   58    374.48
14    186.97   29    266.10   44    328.22   59    376.53
15    188.06   30    272.28   45    330.62   60    379.81
From equation (2.29) we can show that the eigenmodes of a system have the same symmetry as the system. Our slab system has mirror symmetry with respect to the x-axis and the y-axis. This implies that all eigenmodes are either symmetric or antisymmetric with respect to reflection in the xz-plane and the yz-plane.
We prove this property by considering a symmetry operator χ that performs one of the operations conserving the geometry and boundary conditions of the system (here the two reflections). Applying this operator to the generalized eigenvalue equation (2.29), we obtain

$\chi(K\phi) = \chi(\omega^2 M \phi)$  (4.1)
$K\phi^* = \omega^2 M \phi^*$  (4.2)

where we used the invariance of K and M under the transformation and introduced φ* as the transformation of the eigenmode vector φ. From (2.29) it now follows that φ* lies in the same eigenvector space as φ, which implies (assuming the eigenfrequencies are non-degenerate)

$\phi^* = \alpha\phi$  (4.3)

If we take into account that scale has to be conserved by any symmetry transformation, we see that either α = 1 or α = −1. This implies that any symmetry transformation that conserves the system either leaves an eigenmode unchanged (a symmetric eigenmode) or inverts it (an antisymmetric eigenmode).
This means that an asymmetric driving force will never excite only one eigenmode, even when driving at its eigenfrequency: multiple eigenmodes are always needed to construct a solution that does not have the symmetry of the system. As we will see, systems driven asymmetrically at an eigenfrequency tend to have an almost symmetrical profile.
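The symmetric-or-antisymmetric property is easy to verify numerically on a mirror-symmetric toy system — here a uniform fixed-fixed chain with nondegenerate eigenfrequencies, used purely as an illustration:

```python
import numpy as np
from scipy.linalg import eigh

# Mirror-symmetric toy stiffness matrix: uniform fixed-fixed chain.
n = 21
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
_, phi = eigh(K)

# Classify every eigenmode under the reflection (index flip): with
# nondegenerate eigenvalues, alpha in eq. (4.3) must be +1 or -1.
parities = []
for j in range(n):
    mode = phi[:, j]
    flipped = mode[::-1]
    if np.allclose(flipped, mode, atol=1e-8):
        parities.append(+1)     # symmetric eigenmode
    elif np.allclose(flipped, -mode, atol=1e-8):
        parities.append(-1)     # antisymmetric eigenmode
    else:
        parities.append(0)      # would indicate degeneracy or broken symmetry
```

Every mode classifies cleanly as +1 or −1, in line with the proof above; a degenerate eigenfrequency would allow mixed-parity linear combinations.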
4.2 Steady state profiles
In the previous section 4.1 we discussed the eigenmodes, the shapes of vibration that can occur freely in the undamped system. An eigenmode can be excited by a sinusoidal force with exactly its eigenfrequency and the same symmetry as the eigenmode. In this section we examine the properties of the system under loading with sinusoidal signals of different frequencies. The system is damped and contains geometric nonlinearity, and the loading is applied asymmetrically, xc ≠ 0. In 4.2.1 we explore how the equilibrium settles and how we plot the steady-state profiles. In 4.2.2 we discuss the different displacement profiles as a function of frequency. Subsection 4.2.3 is concerned with the influence of damping on these profiles.
4.2.1 Analyzing steady state
Figure 4.3: The displacement response (mm) of the top central node to a sinusoidal load with frequency f = 155 Hz. The damping time constant is τd = 0.2 s. The response is first irregular but settles into the steady-state regime within a couple of damping times.
When applying a sinusoidally varying force to a system in equilibrium, we expect all nodes to vibrate at this same frequency after a certain transient period. This transient is the time between the system at rest and the system in its final steady state. Figure 4.3 shows the timetrace of a single node, displaying a transient but settling into the regime in about 4 times τd.
Figure 4.4: RMS profiles of the response y(x, t) where u(t) = sin(2πft) with f = 252 Hz, for time integration windows from the indicated time t0 to t0 + 0.01 s (t0 = 0.02, 0.10, 0.18, 0.35 and 0.40 s, plus the steady state). The profile builds up from completely flat to a pattern that characterizes the steady state of this frequency, comparable to a standing wave on a 1D string.
The RMS profiles, introduced in 3.1.2, are a useful visualization for examining the vibrations occurring at the top side of the slab. The RMS profile is calculated by (3.2) and can be interpreted as the root of the energy of the signal at position x:

$y_{RMS}(x) = \left( \dfrac{1}{t_1 - t_0} \displaystyle\int_{t_0}^{t_1} \left( y(x,t) \right)^2 dt \right)^{1/2}$  (3.2, rev)
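Discretely, with the displacements stored per node, (3.2) is a mean over the time samples in the window. A sketch with a synthetic standing wave, whose RMS profile is known analytically (the function name and array layout are illustrative):

```python
import numpy as np

def rms_profile(y, t, t0, t1):
    """RMS profile of y(x, t) over the window [t0, t1], cf. eq. (3.2).
    y has shape (n_times, n_nodes); returns one RMS value per node."""
    mask = (t >= t0) & (t <= t1)
    return np.sqrt(np.mean(y[mask] ** 2, axis=0))

# A standing wave sin(kx) sin(wt) has RMS profile |sin(kx)| / sqrt(2),
# with nodes where sin(kx) = 0 and crests at the antinodes.
t = np.linspace(0.0, 1.0, 10001)
x = np.linspace(-0.75, 0.75, 51)
y = np.sin(2 * np.pi * 4 * x)[None, :] * np.sin(2 * np.pi * 252 * t)[:, None]
prof = rms_profile(y, t, 0.0, 1.0)
```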
Figure 4.4 shows how the steady-state RMS profile builds up when a signal u(t) = sin(2πft) with f = 252 Hz is applied. The plot shows what the RMS profile looks like at different times. It is clear that the RMS profile gets closer to the steady-state profile as time progresses. After about 2.5τd, thus t = 0.5 s, the dynamic steady state has settled and the RMS profile barely changes anymore. This corresponds to the observation in figure 4.3.
Figure 4.5: Top RMS profiles for different frequencies (155, 252, 296 and 355 Hz).
4.2.2 Frequency dependence
Figure 4.5 shows the top RMS steady-state profiles for different frequencies. The time integration goes from 1.9 s to 2.0 s.
It is clear that higher frequencies cause a yRMS pattern with more nodes and crests. This agrees with the physical observation that higher frequencies cause waves with shorter wavelengths; the "standing waves" that form therefore have shorter wavelengths as well. This can easily be understood from the analogy with standing waves on a guitar string, where the frequency rises linearly with the number of nodes.

It is an important observation that every frequency maps to a specific pattern on the top side. This will be a key insight into what the elastic system does to input signals: it transforms them into a distinguishable spatial pattern. This theme is elaborated much further in 5.3, where we calculate the transfer function and use it to visualize the patterns for different frequencies. We will also establish the eigenfrequencies as frequencies the system is very responsive to, i.e. where the RMS profile shows large displacements.
4.2.3 Influence of damping
Figure 4.6 shows the results of a set of calculations with different damping factors, at f = 296 Hz, which lies between two eigenfrequencies (292.5 Hz and 299.5 Hz). It is interesting to note the influence of decreasing damping (larger τd): the yRMS pattern gets more pronounced and the maxima become higher. The damping appears to
Figure 4.6: Top RMS profiles for different damping times τd (0.0400, 0.0894, 0.2000, 0.4470 and 1.0000 s). Driving frequency f = 296 Hz (not an eigenfrequency).
Figure 4.7: Top RMS profiles for different damping times τd (0.0400, 0.0894, 0.2000, 0.4470 and 1.0000 s). Driving frequency f = 254.3644 Hz (an eigenfrequency of the undamped system).
change the standing wave formation to a certain extent, since at the edges (350 mm to 750 mm) the small maxima are not found in the large-damping limit (τd = 0.04 s).
Figure 4.7 shows calculations with the same set of damping factors, but with a driving frequency f=254.3644 Hz, which is an eigenfrequency, see table 4.1. A first observation is that the maxima in the RMS profile are about twice as high as for f=296 Hz. This is to be expected, since the system is simply more responsive to an input at an eigenfrequency. A second observation is that the pattern formation is now less disturbed by the damping. The damping does decrease the height of the maxima, but the qualitative shape of each bump in the profile is conserved.
Both of these effects will be relevant when using the top RMS patterns for pattern
classification, which is presented in chapter 5.
4.2.4 Influence of source position
Figure 4.8: Top RMS profiles for different source positions xc (0 mm, 180 mm, 300 mm, 450 mm). Driving frequency f=296 Hz (not an eigenfrequency).
Figure 4.8 shows the RMS profile of the driving signal (f=296 Hz) applied at different source positions xc (introduced in 3.1.3). The RMS profiles differ substantially: although the nodes and anti-nodes (maxima) are at more or less the same positions, their relative heights depend heavily on the source position. When xc = 0 mm, a symmetric profile is observed, with higher peaks than the other profiles.
4.3 Attempt at geometrical nonlinearity
Figure 4.9: The renormalized profile at t0 = 1.525 s of different calculations where the same noisy signal was applied with different intensities Fmax (1.00×10² N, 4.60×10³ N, 2.20×10⁴ N, 4.60×10⁴ N, 1.00×10⁵ N).
In this section we examine the possibility of increasing the input force Fmax to reach geometrical nonlinearity. As described in 3.1.5, the attempts at material nonlinearity were abandoned after calculations displayed the NaN bug. The alternative approach is to increase the driving force and compare the outputs after rescaling y(x, t) to ȳ(x, t):

ȳ(x, t) = y(x, t) / Fmax    (4.4)

In the linear case the output scales linearly with Fmax, which renders ȳ independent of Fmax for equal u(t). Geometrical nonlinearity will cause deviations that deform ȳ for increasing Fmax. In this regime the output of the sum of two signals differs from the sum of the individual outputs.
Figure 4.9 shows the normalized profile ȳ(x, t0) for a specific time point t0 = 1.525 s, for different calculations with increasing values of Fmax. In the range between the first two displayed calculations, 100 N < Fmax < 4600 N, the deviations are small. For increasing force, ȳ(x) differs appreciably from the low-force solution, but the calculations appear to become unstable. This is clearest for the (extremely) high value Fmax = 1 × 10⁵ N, where the displacement is negative for all the nodes. We observe that for the full simulation length, the displacements at the top side are predominantly
Figure 4.10: The Gaussian fitted distribution f(∆y) of the normalized top and bottom displacements, plotted against Fmax (log scale). The cross indicates the mean displacement, which should be zero for a stable simulation.
negative, while at the bottom side the displacements are predominantly positive. This
corresponds to a collapsed state, which is very unphysical. In a physical oscillation it is
intuitively clear that both on the top and on the bottom side the displacement should
be on average equally positive and negative.
These observations are quantified by introducing an instability measure, which is plotted in figure 4.10. For each force for which an experiment is conducted, a Gaussian distribution is fitted to all the normalized displacements ∆ȳtop(x, t) = ∆ytop(x, t)/Fmax (which is just ȳ(x, t) evaluated at the top nodes) and ∆ȳbottom(x, t) = ∆ybottom(x, t)/Fmax:

µtop = 1/(Nt Nnodes) Σ_{i=1}^{Nt} Σ_{j=1}^{Nnodes} ∆ȳtop(xj, ti)    (4.5)

σ²top = 1/(Nt Nnodes) Σ_{i=1}^{Nt} Σ_{j=1}^{Nnodes} [∆ȳtop(xj, ti) − µtop]²    (4.6)

f(∆y) = 1/(σtop √(2π)) exp( −(∆y − µtop)² / (2σ²top) )    (4.7)
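A minimal sketch of this instability measure (our own helper names, assuming the displacements are available as an array of shape (Nt, Nnodes)):

```python
import numpy as np

def instability_measure(dy, fmax):
    """Gaussian fit to the normalized displacements, eqs. (4.5)-(4.6).

    dy: displacements Delta y(x_j, t_i), shape (Nt, Nnodes); fmax: the
    driving force. Returns (mu, sigma); a mean mu far from zero signals
    the collapsed, unphysical state described in the text.
    """
    dy_bar = dy / fmax                            # normalization
    mu = dy_bar.mean()                            # eq. (4.5)
    sigma = np.sqrt(((dy_bar - mu) ** 2).mean())  # eq. (4.6)
    return mu, sigma

def gaussian_pdf(y, mu, sigma):
    """The fitted distribution f(Delta y), eq. (4.7)."""
    return np.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
```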
This Gaussian fit gives an accurate picture of the actual distribution of the normalized
displacements. In the low-force limit, it is confirmed that the average displacement is
zero for both the top and the bottom side. For the simulations with Fmax ≥ 4.6× 104 N
the simulation becomes unstable as we see the mean displacements (indicated with a
cross on top of the Gaussian) move away from 0.
We conclude by abandoning the attempts at geometric nonlinearity, to avoid working with an unphysical simulation. The nonlinearity that can be reached with a stable simulation is small. Physically, applying a total force of Fmax > 10,000 N on the side of the slab, causing more than 100 mm maximal displacement (not deformation), is questionable. In retrospect, pursuing material nonlinearity in our setup with a Mooney-Rivlin nonlinear elastic material is a much better option, since it does not require these unrealistically large forces. This is an interesting path for further research.
4.4 Validation of linearity
In the previous section 4.3 we concluded that we should continue working in the low-force regime to avoid instability. This leads us to an important question: is the slab system in the low-force regime a linear time-invariant (LTI) system? A dynamical system is an LTI system if the following three conditions are fulfilled:
1. Additivity: y(x, t ; ua + ub) = y(x, t ; ua) + y(x, t ; ub)
2. Homogeneity: y(x, t ; αua) = αy(x, t ; ua)
3. If u(t) = 0 then y(t) = 0
We added the input signals ua(t) and ub(t) to this notation to distinguish the response functions. This will also be written as ya(x, t) = y(x, t ; ua).
We demonstrate the additivity property in 4.4.1. We discussed homogeneity in the previous section 4.3; figure 4.9 shows that in the range of low forces the solution scales with the input force. The third condition is obviously true as well. Therefore all three conditions are fulfilled, which leads us to conclude that the dynamical slab system with damping is, in the low-force regime, an LTI system.
The description of the slab system as a linear system allows for a formulation of the
dynamics in terms of the impulse response and the transfer function, which is presented
in 4.4.2. We compare the transfer functions from different simulations and present the
possibility of replacing the simulation by a convolution, calculated efficiently through
multiplication in the frequency domain.
4.4.1 Additivity
For four different classes of signals the following experiment was conducted:
Figure 4.11: The additivity of the linear system demonstrated: ya and yb (top), and ya+b compared with ya + yb (bottom). The input signals ua and ub are low-passed white noise signals with a cut-off frequency of 400 Hz.
1. Generate two input signals, ua(t) and ub(t). Execute the FE calculation to find the response functions y(x, t ; ua) and y(x, t ; ub).
2. Generate the control input signals, [ua + ub](t) and [ua − ub](t). Execute the FE calculation to find the response functions y(x, t ; ua + ub) and y(x, t ; ua − ub).
3. Compare y(x, t ; ua) ± y(x, t ; ub) with y(x, t ; ua ± ub).
This was done with ua and ub being pure sines of 80 and 400 Hz, block white noise, Wiener noise, and low-passed white noise with cut-off frequency 400 Hz (see 3.1.6). Figure 4.11 shows the results for low-passed white noise. It is clear from the figure that the difference between ya+b and ya + yb is very small. We can quantify the difference as

εa+b = √( ⟨‖ya+b − (ya + yb)‖²⟩ / ⟨‖ya+b‖²⟩ )    (4.8)

where ‖·‖² means averaging the square of the argument over x, and ⟨·⟩ means the time average, see (3.35) page 43.
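The relative error (4.8) reduces to a few lines; a sketch with hypothetical array shapes (nx nodes by nt time samples):

```python
import numpy as np

def additivity_error(y_sum, y_a, y_b):
    """Relative additivity error eps_{a+b} of eq. (4.8).

    y_sum: response to u_a + u_b; y_a, y_b: responses to u_a and u_b.
    Taking the mean over both axes performs the x-average ||.||^2
    followed by the time average <.> in one step.
    """
    num = np.mean((y_sum - (y_a + y_b)) ** 2)
    den = np.mean(y_sum ** 2)
    return float(np.sqrt(num / den))
```

The error εa−b follows by passing the response to ua − ub together with y_a and −y_b.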
The relative errors εa+b and εa−b are plotted in figure 4.12. For all but the pure sines, the relative error is 0.4 % or below. The most regular signals, the Wiener noise and the low-passed noise, have relative errors of 0.28 % and below. We can therefore conclude that the linear approximation yields results that are very close to the real simulation behavior.
Figure 4.12: Relative errors εa+b and εa−b for the four signal classes: pure sine, blocks noise, Wiener noise, and low-passed noise.
4.4.2 Transfer function
We showed in 4.4.1 that the system fulfills the requirements of linearity to a very accurate degree. Therefore it is possible to describe the elastic wave system as a continuous field of causal linear time-invariant (LTI) systems, discretized by the FEM mesh. That is, each node represents its own LTI system. Every LTI system is then described completely by its impulse response h(x, t) or, equivalently, by its transfer function H(f, x).
Transfer function introduction
The input and output spectra are obtained through the Discrete Fourier Transform
(DFT, or equivalently the Fast Fourier Transform FFT) of the signal in the time-domain:
X(f) = F(x(t))(f).
U(f) = FFT (u(t)) (4.9)
Yu(f, x) = FFT (yu(x, t)) (4.10)
where we indicated the input as a subscript. From the input and output spectra we can estimate the transfer function

H(f, x) = Yu(f, x) / U(f)    (4.11)
        = FFT(h(x, t))    (4.12)

Therefore we can estimate the impulse response for convolution from any signal through (4.11) and an inverse FFT:

h(x, t) = IFFT(H(f, x))    (4.13)
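As a sketch (our own function names, assuming single-node signals as 1-D NumPy arrays), the estimate (4.11) and the inversion (4.13) amount to:

```python
import numpy as np

def estimate_transfer(u, y, dt):
    """Estimate H(f) = Y(f)/U(f) at one readout node, eqs. (4.9)-(4.11).

    u, y: input and readout time series of equal length. The division is
    noisy wherever |U(f)| is small, which is why the smoother
    impulse-response estimate is preferred in the text.
    """
    U = np.fft.rfft(u)
    Y = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(u), d=dt)
    return freqs, Y / U

def impulse_response(u, y):
    """Impulse response via eq. (4.13): h = IFFT(H)."""
    n = len(u)
    return np.fft.irfft(np.fft.rfft(y) / np.fft.rfft(u), n)
```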
For the elastic slab system, we can be more complete with the extended transfer function

H(f, x ; Ω) = H(f, x ; xS, τd, ρ, Y, BC)    (4.14)

where Ω represents all other parameters: the source position xS, the damping τd, the material parameters (density ρ and elastic modulus Y), and the boundary conditions (BC). The transfer function of one experiment is a function of the frequency and the readout node positions xR. In the experiments in this thesis the source position was almost always xS = 300 mm.
Transfer function estimation
We now estimate the transfer function from a few different input signals: an impulse, Wiener noise, and low-passed noise.
The impulse response, already used in 3.3.2 to determine the damping time constant, will now be used to its full power: to determine the full dynamic response of the system. As input we cannot apply an exact delta function, since the simulation works with discrete time steps. The input is

uI(t) = { 1  if t ≤ 1×10⁻⁴ s = 2dt
          0  if t ≥ 1.5×10⁻⁴ s = 3dt }    (4.15)

The response to input (4.15) could already be called the impulse response; however, we should correct for the discretization error caused by the fact that the applied impulse has a finite extent. We do so by using (4.11) and (4.13).
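A sketch of this correction (a hypothetical helper; it deconvolves the raw response by the actually applied boxcar input rather than treating it as a true delta):

```python
import numpy as np

def corrected_impulse_response(y_raw, u_applied):
    """Correct the finite-width impulse (4.15) via (4.11) and (4.13).

    y_raw: simulated response to the short boxcar input, length n.
    u_applied: the applied input samples (a few ones, then zeros).
    Returns h = IFFT( FFT(y_raw) / FFT(u_applied) ).
    """
    n = len(y_raw)
    H = np.fft.rfft(y_raw) / np.fft.rfft(u_applied, n)
    return np.fft.irfft(H, n)
```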
We also estimate the transfer function HW(f, x) and the impulse response from Wiener noise, using the input and output spectra UW(f) and YW(f, x). Figure 4.13 shows the two transfer function estimations. The impulse response simulation gives the smoothest
transfer function. The Wiener estimation of the transfer function is close to the impulse response estimation but much noisier. The noise decreases with longer simulation length, which makes the Wiener estimation match the impulse response transfer function more closely. There is, however, a slight offset of a factor 1.33 between the two transfer function estimations. We assume this offset arises because the CalculiX simulation linearly interpolates the impulse input between the beginning (t = 0 . . . dt) and the end (t = 2dt . . . 3dt). This possibly causes the actually applied impulse to differ from (4.15).
We will continue working with the impulse response estimation HI(f, x ; Ω), since it is smoother and not subject to fluctuations due to noise. We assume it to be the most correct reference solution.
Figure 4.13: Transfer function (modulus and phase) at the top readout side, at x=0 mm. The Wiener estimation matches the impulse estimation closely but is noisy.
4.4.3 Convolution approximation
A second measure for the quality of the LTI description of the elastic wave system
is to approximate simulations with convolutions. This will prove to be a very useful
technique, since the convolution (fastest executed through multiplication of the spectra)
is orders of magnitude faster than the full finite element simulation. The convolution in
discrete time is given by

y(x, ti) = Σ_{j=0}^{i} u(tj) h(x, ti − tj)    (4.16)

where the convolution summation is limited by the fact that u(t < 0) = 0 and h(t < 0) = 0.
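A sketch of this simulation surrogate (hypothetical names; FFT-based so that the cost is O(n log n) rather than O(n²)):

```python
import numpy as np

def convolve_response(u, h):
    """Simulation surrogate via the discrete convolution (4.16).

    u: input load samples, length nt; h: impulse response at one node.
    Zero-padding to the next power of two avoids circular wrap-around;
    the result is truncated back to nt samples.
    """
    n = len(u)
    nfft = 1 << (n + len(h) - 1).bit_length()
    Y = np.fft.rfft(u, nfft) * np.fft.rfft(h, nfft)
    return np.fft.irfft(Y, nfft)[:n]
```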
Figure 4.14 shows part of a simulation approximated by convolution with the impulse response, where the input was Wiener noise. The agreement is very precise, even for the high-frequency details, but there is a systematic underestimation. With an ad-hoc multiplication of the impulse response by a factor 4/3, the convolution approximation closely matches the simulation result, with an RMS error (4.8) of 0.5 %. We assume this difference to be caused by the linear interpolation CalculiX applies to the impulse loading. Since the simulation is substituted almost perfectly by the convolution, apart from a factor 1.33 in amplitude, we conclude we can use the impulse response as a substitute for any simulation.
Figure 4.14: Simulation compared with the result of convolution (4.16) for the central top node, with Wiener noise as input signal. We assume the factor 1.33 deviation is explained by the linear interpolation of CalculiX when applying the impulse.
Chapter 5
Computational properties of the
linear system
The premise of this thesis was to explore the computational properties of a nonlinear material. Nonlinearity had to be dropped since contact simulations are a computationally infeasible way to excite vibrations (section 3.1.4), attempts to include material nonlinearity were problematic with CalculiX (section 3.1.5), and geometric nonlinearity required large input forces which made the simulations unstable (section 4.3). In section 4.4 we established the linearity of the system. In this chapter we set out a way to use the linear slab system as a reservoir, where the displacements on the top side of the slab serve as state readout.
In section 5.1 we introduce the linear memory capacity and memory function. Here, the
data signal is encoded directly into the load signal u(t) with a very high sampling rate.
The memory function has a problematic interpretation because of the artificial discrete
time sampling.
In section 5.2 we introduce frequency encoding, which is a much more natural way to transfer information to a mechanical system. The frequency f(t) is taken as the data signal and can make discrete transitions. The loading input u(t) = sin(2π ∫₀ᵗ f(t′) dt′) is continuous and a physically realistic input signal for a mechanical system.
The last three sections examine how the elastic slab system processes this kind of input
signal. In 5.3 we discuss the way the slab system processes constant frequency inputs. In
5.4 we address the detection of different frequencies from the profile on the readout side.
Finally, in 5.5 we use the memory function and memory capacity measures to determine
the richness of the frequency-encoded elastic system.
5.1 Memory capacity of elastic LTI system
5.1.1 Memory capacity introduction
Linear memory capacity is a quantitative measure to characterize information processing in a dynamical system. It was introduced in the context of reservoir computing by Jaeger [7], and extended to linear systems in discrete time [8] and in continuous time [9]. The extension to nonlinear memory capacity suggests that information processing capacity is an inherent property of any dynamical system [10]. The linear memory capacity quantifies the ability of a system to store a temporal sequence, i.e. the ability to retrieve past inputs from the instantaneous state of the system.
The system evolves in discrete time as

X(tn) = F(X(tn−1), u(tn))    (5.1)

where the vector X contains the full state of the system, i.e. all the nodal displacements and all the nodal velocities. The time signal u(tn) is the usual input signal, sampled at discrete timesteps tn. To shorten notation, we will also write this discrete-time version of u as u(n). In our setup we observe part of the full state of the system: the y-directed displacements at the readout side, y(x, t), which we studied in the previous chapters.
For the remainder of this section we use the following notation: n=1 . . . N indicates the timesteps of the simulation, so Ndt is the total simulation time. The discrete readout nodes are labeled i=1 . . . M, with M=99 nodes in our slab setup. We assemble the readout in a matrix X of dimension (N × M), with elements [X]n,i = y(xi, tn). With this notation, X is the conventional matrix containing the readout of the system state, and is a part of the full system state X.
We now want a measure for the ability of the readout matrix X to reconstruct the input u of k timesteps ago: u(n − k). We introduce zk(n) as the optimal reconstruction of u(n − k) over all n in the dataset. Consider the linear estimator Zk¹

Zk = X Wk    (5.2)
(N×1)   (N×M) (M×1)

¹Wk only has dimension M × 1 when assuming the input and output to be zero-mean. When the input and output are not zero-mean (like in section 5.5), a bias term has to be added; Wk then has dimension (M + 1) × 1.
where Wk is the weight vector that minimizes the MSE between zk(n) and u(n − k). The memory function for an i.i.d. input u is then defined as in [7]:

m(k) = cov²(u(n − k), zk(n)) / ( σ²(u(n)) σ²(zk(n)) )    (5.3)

m(k) is thus the square of the correlation coefficient between u(n − k) and zk, and represents the fraction of the variance in the delayed signal at time n − k that can be explained by the instantaneous state of the system at timestep n. m(k) takes values between 0 and 1. By plotting m(k) as a function of tk = k dt, the memory function gives information about the dynamics of the system: peaks in the profile at delay tk show that the state of the system contains more information about the input from tk ago.
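The explicit calculation of (5.2)-(5.3) can be sketched as follows (hypothetical names; a least-squares fit plays the role of the MSE-optimal readout Wk):

```python
import numpy as np

def memory_function(X, u, k_max):
    """Explicit memory function m(k), eq. (5.3).

    X: readout matrix (N, M), row n holds the state at timestep n.
    u: i.i.d. input sequence of length N.
    For each delay k, least squares fits W_k so that X[n] @ W_k
    reconstructs u(n - k); m(k) is the squared correlation coefficient.
    """
    m = np.empty(k_max)
    for k in range(k_max):
        Xk, target = X[k:], u[:len(u) - k]
        Wk, *_ = np.linalg.lstsq(Xk, target, rcond=None)
        zk = Xk @ Wk
        m[k] = np.corrcoef(target, zk)[0, 1] ** 2
    return m
```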
A fundamental result is that the linear memory capacity of a dynamical system, defined as the sum of the memory function, is bounded:

Σ_{k=1}^{∞} m(k) ≤ M    (5.4)

where equality is reached for a linear system.
It is of interest to obtain an expression for the memory function of our linear time-invariant system defined by the impulse response function h(x, t). This is done in appendix A. The discretized version of the impulse response, the (N′ × M) matrix G, is introduced as

G = [ h(x1, t1)   h(x2, t1)   . . .  h(xM, t1)
      h(x1, t2)   h(x2, t2)   . . .  h(xM, t2)
      . . .
      h(x1, tN′)  h(x2, tN′)  . . .  h(xM, tN′) ]    (5.5)

and Gk is the kth row of this matrix, containing the impulse response at time tk, with 1 ≤ k ≤ N′.
We finally obtain a form in (5.6) that is only a function of the system’s impulse response matrix G:

m(k) = Gkᵀ (Gᵀ G)⁻¹ Gk    (5.6)

Here (Gᵀ G) has the same form as the covariance matrix of a random vector, except for a factor 1/N′. Since the impulse response is exponentially damped, this inner product does not depend on the tail of the impulse response, and can safely be truncated for some N′ ≫ τd/dt. This means that the matrix (Gᵀ G) is independent of N′, unlike the covariance matrix (1/N′)(Gᵀ G).
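A sketch of evaluating (5.6) (a hypothetical helper; solving a linear system instead of forming the explicit inverse):

```python
import numpy as np

def memory_function_matrix(G):
    """Memory function from the impulse-response matrix G, eq. (5.6).

    G: (N', M) with G[k, i] = h(x_i, t_k); row G_k is the impulse
    response at delay t_k. Returns m(k) = G_k^T (G^T G)^{-1} G_k.
    """
    C = G.T @ G                          # (M, M), covariance-like
    S = np.linalg.solve(C, G.T)          # columns are C^{-1} G_k
    return np.einsum('ki,ik->k', G, S)   # diagonal of G C^{-1} G^T
```

For a full-rank G, the sum of the returned values is trace((GᵀG)⁻¹GᵀG) = M, which reproduces the equality in (5.4) for a linear system.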
5.1.2 Memory function plots
Figure 5.1: Memory function of the slab wave system from the matrix expression (5.6) and from explicit calculation, for damping time τd = 0.2 s. The two peaks indicated with arrows lie at 4.60 ms and 17.20 ms.
We compare the memory function obtained from the matrix expression (5.6) with the full calculation of the memory function. Figure 5.1 contains the memory function generated from the full calculation, meaning that a long input vector of i.i.d. uniform random numbers is generated (here, for example, 200,000 samples) and the output of the system is calculated through convolution. Then definition (5.3) is used to obtain the memory function. This full calculation takes about 20 minutes to complete, while the evaluation of the matrix expression for delays ranging from 0 s to the full length of the impulse response (2.0 s) takes less than one second. As the figure suggests, the two memory functions correspond almost exactly. We attribute the slight difference to the approximation made in the full calculation by using finite input and output vectors.
We can link the memory function to a new timescale, which we will call the roundtrip time τr. We define the roundtrip time as the time needed to travel from the top to the bottom and back, τr ≈ 2L/v = 12.9 ms. Here the wave speed follows from the material properties: v = √(Y/ρ) = 233×10³ mm/s, see table 3.4.
This roundtrip time matches the difference between the two peak positions indicated with arrows: τr ≈ (17.2 − 4.6) ms = 12.6 ms. There appears to be a discrepancy, however, with the time it takes for the signal to travel from the bottom to the top, 4.6 ms. The meaning of this travel time is very clear in the first 4.6 ms of figure 5.1:
before that time the input is completely uncorrelated with the readout side; after that time the signal arrives and m(k) shows its largest peak. We propose a possible resolution of this discrepancy: if the real roundtrip time is 4.6 ms, implying a faster speed for the dilatational waves, the first peak matches. The difference between the two peaks is then larger than the roundtrip time, since the second peak is constructed from the signal that has spread in all directions. The average path length is then longer than 2L, and the fastest travel can be recognized in the moment the negative slope reverses to a positive slope.
Figure 5.2: Memory function for two different damping factors. Strong damping τd = 0.0285 s gives a higher peak but shorter extent.
Another interesting aspect is the damping timescale. Two memory functions from systems with different damping are shown in figure 5.2. The decay time τd clearly defines the decay of the memory function, while the roundtrip time τr is conserved, as it is a property linked to the size and wave speed of the slab rather than to the damping.
Through the upper limit property (5.4), the memory capacity MC = Σ_{k=1}^{∞} m(k) ≤ M. In this case the system is perfectly linear and the upper limit is reached to within an accuracy of 1×10⁻¹⁰. The upper limit defines a trade-off between the maximum of m(k) and the extent of the memory profile. The memory function for the simulation with high damping (τd = 0.0285 s) has a much higher peak memory at τr/2, but decays much faster. This trade-off predicts that for higher damping, recent input can be reconstructed more precisely, but the older input history fades more rapidly.
5.1.3 Interpretation
The memory function helps us identify the roundtrip time in the material as the time delay for which the state of the readout side is most correlated with the time-delayed input. It also visualizes the influence of damping.
However, one should be aware of the effect of discretization. In calculating the memory function, white noise was generated with the same sampling frequency as the impulse response. This is an arbitrary choice, but one that determines the height of the memory function: doubling the sampling rate will approximately halve it. This makes sense, since a more rapidly varying signal is injected during the same amount of time, so reconstructing this signal is harder. Another discretization issue that deserves attention is the spatial discretization by the mesh. The upper limit of the total memory capacity is the number of readout nodes, which depends on the mesh discretization. However, if the simulation has converged, increasing the number of nodes should intuitively not increase the information contained in the system. In fact, the continuum system is then described by interpolation between the nodes, and thus contains infinitely many places to read out the impulse response. It is unlikely, however, that an increasing density of readout positions would increase the memory capacity without bound; rather, it should saturate once the mesh is dense enough to approximate the solution very accurately. This happens when the responses become linearly dependent, so the covariance matrix no longer has full rank and the upper limit is no longer reached. These issues call for a reformulation of the memory capacity in continuous space and continuous time (the latter has been done in [9] for RNNs, but there the differential equations are known in matrix form).
A second question is how this result might be extended to nonlinear systems. Extending the transfer function and impulse response into the nonlinear regime can be an option for mildly nonlinear systems like the slab system with geometric or material nonlinearity. The nonlinear extension is not uniquely defined: one could use the Volterra or Wiener series approximation [37], or the nonlinear normal modes approach [38]. Under these extensions, an expression for the memory function similar to (5.6) might be feasible. Finally, one might investigate the robustness of this memory capacity under model order reduction [39, 40].
5.2 Frequency encoding
In the discussion of the memory capacity, the input signal was noise sampled at an arbitrary but very high sample rate. In mechanical systems this kind of input signal is unrealistic: in most situations one would expect the force to vary smoothly. A next idea could be to apply the data signal to u(t) by holding each data value constant for a certain time, which we call the hold time Th. Consider the limit Th → ∞: then a static force is applied. The result will be almost undetectable: steady state is reached and the
static deformation will have the same shape but a different amplitude for different values
of the static force.
We conclude that the idea of directly encoding a data sequence on u(t) is flawed. Encoding a signal this way does not employ the dynamics of the system. When the sampling rate is high enough, waves will be generated by the transitions; the resulting patterns are then associated with switching transients rather than with the actual applied data signal.
Figure 5.3: Frequency coding of discrete signals: the data signal f(t) (top) and the load signal u(t) (bottom). Here Th = 0.02 s. At the end of each constant part, before the jump, the yRMS output is constructed from a short integration time tI.
We now propose an alternative approach: frequency encoding. We already introduced waves and vibrations as the natural language of mechanical systems, and we propose to use this as a new point of view on how to encode information in the system. Frequency coding offers the possibility of encoding discrete signals on the elastic system, as shown in figure 5.3. Let f(n) be the discrete data sequence we want to encode. With some flexibility of notation, we write f(n) for the discrete sequence and f(t) for its continuous version, where each frequency is held for a hold time Th:

f(t) = f(n)  for  nTh < t < (n + 1)Th    (5.7)
As discrete readout, we take the RMS profile from integration of the continuous-time readout y(x, t) over an integration window of size ∆tI = 0.02 s, right before the frequency jumps:

yRMS(n) = ( (1/∆tI) ∫_{(n+1)Th − ∆tI}^{(n+1)Th} y(x, t)² dt )^{1/2}    (5.8)
The frequency signal f(t) is encoded into the load signal u(t) as

u(t) = sin( 2π ∫₀ᵗ f(τ) dτ )    (5.9)
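A sketch of the encoding (5.7)-(5.9); the discrete cumulative sum stands in for the integral, and the names are our own:

```python
import numpy as np

def frequency_encode(f_seq, t_hold, dt):
    """Encode a discrete frequency sequence into a load signal u(t).

    Each f(n) is held for t_hold seconds (eq. 5.7); the phase is the
    cumulative integral of f(t) (eq. 5.9), so u(t) stays continuous
    across the frequency jumps.
    """
    steps = int(round(t_hold / dt))
    f_t = np.repeat(np.asarray(f_seq, dtype=float), steps)
    phase = 2.0 * np.pi * np.cumsum(f_t) * dt
    t = np.arange(len(f_t)) * dt
    return t, np.sin(phase)
```

For example, `frequency_encode([155, 252, 296], t_hold=0.02, dt=1e-4)` yields a continuous drive whose frequency steps every 0.02 s.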
Encoding a signal in the frequency of the input signal has three main advantages:
1. It is physically realistic: there are no discontinuous loading jumps.
2. It uses the natural vibrational language of the elastic system.
3. The readout of the RMS profile depends only on the vibration amplitude, not on the vibration frequency.
Table 5.1: Comparison of direct encoding of the signal versus frequency coding.

              Direct               Frequency encoding
Input         u(t)                 f(n) ↦ u(t)
Readout       y(x, t)              yRMS(n)
Signal u(t)   Discontinuous jumps  Continuous signal
In the next three sections we perform experiments to explore the frequency coding idea. In 5.3 we discuss the way the slab system processes constant-frequency inputs. In 5.4 we discuss the dynamic processing of frequency jumps like those of figure 5.3. In 5.5 we discuss the memory function and memory capacity for frequency-coded signals.
5.3 Spectral sensitivity
In the previous chapter, section 4.4.2, we introduced the transfer function. Here we examine the form of this transfer function as a function of the readout position x, the constant input frequency f, and the damping τd. The transfer function determines what we call the spectral sensitivity: which part of the top side of the slab is sensitive to each frequency.
5.3.1 Average transfer function
The first thing to examine is the average of the moduli of the transfer function over the nodes at positions x1 . . . xNnodes. We take the geometric average since the transfer
function is always studied in the logarithmic domain. This gives an average transfer function of the system, Hav(f):

Hav(f) = ( Π_{i=1}^{Nnodes} |H(xi, f)| )^{1/Nnodes}    (5.10)
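Equivalently (a hypothetical one-liner), the geometric mean is the exponential of the arithmetic mean of log |H|:

```python
import numpy as np

def average_transfer(H):
    """Geometric average of |H(x_i, f)| over the readout nodes, eq. (5.10).

    H: complex array of shape (Nnodes, Nf). The geometric mean matches
    the logarithmic scale on which the transfer function is studied.
    """
    return np.exp(np.mean(np.log(np.abs(H)), axis=0))
```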
Figure 5.4 shows the resulting average transfer function. The peaks of the transfer function are indicated and coincide with some of the eigenfrequencies, see table 4.1. This means that on average, the system is more sensitive to certain specific eigenfrequencies. It is important to realize, however, that not all eigenfrequencies correspond to peaks in the average transfer function: some correspond to very low relative maxima, or to no maximum at all.
Figure 5.4: Geometric average of the transfer function over the nodes. The peaks are annotated (42.5, 85.0, 98.0, 117.5, 129.5 Hz) and correspond to eigenfrequencies, see table 4.1.
5.3.2 Nodal sensitivity
The average transfer function describes the system’s average sensitivity to a certain frequency, and is only a function of the frequency. We now take the position as the variable, to study the variation of the transfer function over the nodes. The result is shown in figure 5.5, which shows the response to a number of frequencies on a logarithmic scale. It is clear that most eigenfrequencies give rise to large displacements in a very simple, almost symmetric pattern. In contrast, not all eigenfrequencies give rise to large
Chapter 5. Computational properties 75
displacements: for example 228.5 Hz is an eigenfrequency of the system, but does not
give rise to large displacements. The higher frequencies that are not an eigenfrequency
are more irregular, asymmetric and have much lower maximal values. They are not
clearly dominated by one eigenmode and are more dependent on the position of the
source.
Figure 5.5: Transfer function |H(fi, x)| as a function of position x (mm), on a logarithmic scale, for fi = 42.5, 98.0, 110.0, 217.5, 224.0 and 228.5 Hz (of which 42.5, 98.0, 217.5 and 228.5 Hz are eigenfrequencies). This corresponds to the steady state profiles. Eigenfrequencies mostly have a much higher response amplitude.
The transfer function, plotted over the nodes, gives the steady state RMS profile for a certain frequency. This shows the power of the transfer function: it easily visualizes the steady state behavior of the whole system, while being obtained from a single simulation.
5.3.3 Spectral sensitivity colorplot
In 5.3.1 and 5.3.2 the transfer function was plotted as a function of frequency and position, respectively. Figure 5.6a shows the transfer function colorplot, with the position on the x-axis and the frequency on the y-axis. This colorplot gives a complete overview of the frequency response of the system. For comparison, a colorplot of the system with a much higher damping is given in figure 5.6b.
Note that the color scale of the plot is artificially reduced to [−1, 1]. The color is coded according to F(log |H(f, x)|), where F(·) is the squashing function constructed from the cumulative distribution of the values of log |H(f, x)|. The full range of transfer function
values is squashed to the interval [−1, 1] so that the resulting values are distributed evenly over this interval.

Figure 5.6: Transfer function colorplots for low damping (a, τd = 0.2 s) and high damping (b, τd = 0.0285 s): position x (mm) on the horizontal axis, frequency (Hz) on the vertical axis, squashed color scale in [−1, 1].
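This squashing by the cumulative distribution amounts to histogram equalization of the log-magnitudes. A minimal sketch (NumPy assumed, function name ours):

```python
import numpy as np

def squash_to_interval(values):
    """Map values to [-1, 1] through their empirical CDF, so the result
    is distributed evenly over the interval (histogram equalization)."""
    flat = values.ravel()
    ranks = np.argsort(np.argsort(flat))       # rank of each value: 0 .. n-1
    cdf = (ranks + 0.5) / flat.size            # empirical CDF values in (0, 1)
    return (2.0 * cdf - 1.0).reshape(values.shape)
```

Applied to log |H(f, x)|, this produces the evenly distributed color values used in figure 5.6.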
The qualitative picture given by the colorplots is interesting: in the high damping case, details are smeared out and less distinguishable. On the other hand, the eigenfrequency spots of high transfer function amplitude remain roughly conserved in position and size. This corresponds to the observation made in the section on steady state RMS profiles under the influence of damping (see 4.2.3): the positions of the nodes and maxima do not change much.
5.4 Dynamical classification
In section 5.1 we used u(t) directly to encode information. This had serious drawbacks, such as an artificial sampling time, unrealistic jumps in the loading, and a small peak memory function. Therefore we introduced the frequency coding idea in 5.2: encoding the data signal in the frequency f(t). The output profiles for constant frequency input were examined in 5.3 by visualizing the transfer function |H(f, x)|. However, the visualization of |H(f, x)| allows an interpretation of the steady state patterns once a regime is reached, but does not readily supply information about the dynamic behavior. The aim of this section is to introduce the dynamical properties of the frequency coded system, when the signal is not a constant frequency.
This is a different use case from the steady state profiles: the system is in a random state of displacements and elastic waves, generated by its input history. We now want to determine how fast the system can detect a new input signal, and how this detection time depends on the parameters. Therefore we perform a number of classification experiments for input with changing frequencies. In 5.4.1 we introduce the experiments and the detection time quantity. In 5.4.2 we study the influence of noise. In 5.4.3 we discuss the role of the choice of frequencies to classify. Finally, in 5.4.4 we discuss the influence of material damping.
For the experiments performed in this section, we replaced all simulations by a convolution of the input with the impulse responses, through FFT and inverse FFT. Another concept we will use extensively is the RMS profile, introduced in section 3.1.2, equation (3.2). The RMS profile yRMS contains the square root of the energy on each node over a certain time interval. One RMS profile is represented by a vector of dimension 1 × M, where in our setup M = 99, the number of nodes. A set of N profiles will be denoted as X, with dimension N × M.
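Both ingredients can be sketched in a few lines (NumPy assumed; function names ours, and any normalization factors in equation (3.2) may differ):

```python
import numpy as np

def lti_response(u, h):
    """Nodal response as the convolution of the input u with the nodal
    impulse response h, computed through FFT and inverse FFT (cf. eq. (4.16))."""
    n = len(u) + len(h) - 1                    # full linear convolution length
    y = np.fft.irfft(np.fft.rfft(u, n) * np.fft.rfft(h, n), n)
    return y[: len(u)]

def rms_profile(Y, i0, i1):
    """RMS profile (cf. eq. (3.2)): per-node root mean square of the
    displacements Y, shape (n_steps, M), over the window [i0, i1)."""
    return np.sqrt(np.mean(Y[i0:i1] ** 2, axis=0))
```

Zero-padding to the full convolution length before the FFT avoids the wrap-around of circular convolution, so the result matches the direct sum of (4.16).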
5.4.1 Classification and detection time
We will classify the frequencies based on the RMS profile. The important quantity we investigate here is the detection time: how long it takes between the moment of switching the frequency and the moment of correct classification of this frequency. For these experiments we divide the frequency domain into 10 classes. We then train a set of 10 linear least squares classifiers, one for each class. For the test set we apply these 10 classifiers and use winner-takes-all to decide the class of the RMS profile [41, p. 184].
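The scheme above can be sketched as follows (a hypothetical minimal implementation, NumPy assumed; a small ridge term is added for numerical stability, which the original setup may not use):

```python
import numpy as np

def train_classifiers(X, labels, n_classes, reg=1e-8):
    """One linear least-squares classifier per class (one-vs-all).
    X: (N, M) RMS profiles; labels: (N,) ints in [0, n_classes)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append a bias column
    T = np.eye(n_classes)[labels]                   # one-hot target matrix
    # ridge-regularized normal equations for the least squares solution
    W = np.linalg.solve(Xb.T @ Xb + reg * np.eye(Xb.shape[1]), Xb.T @ T)
    return W

def classify(X, W):
    """Winner-takes-all over the classifier outputs."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.argmax(Xb @ W, axis=1)
```

Each column of W is one classifier; the argmax implements the winner-takes-all decision.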
The training set was generated from a sine with constant frequency, to which we added the standard Wiener noise (see 3.1.6). The Wiener noise has integration time τW = 0.005 s, which corresponds to a cut-off frequency of 32 Hz. We construct the training dataset by generating the RMS profiles for a constant frequency f plus Wiener noise. The pure signal is ū_f(t) = sin(2πft) and the Wiener noise is u_W(t); both have maximal amplitude 1. The input signal used to generate the training set is

u_f(t) = \eta \, \bar{u}_f(t) + u_W(t)    (5.11)

where η is the signal to noise ratio, i.e. the ratio of the signal amplitude to the noise amplitude.
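The construction of the training input can be sketched as follows. The exact Wiener-noise construction is given in section 3.1.6, which is not reproduced here; as a stand-in we use first-order low-pass filtered white noise with time constant τW, normalized to maximal amplitude 1 (this is an assumption, as are the function names):

```python
import numpy as np

def wiener_noise(n_steps, dt, tau_w, rng):
    """Low-pass filtered white noise with integration time tau_w,
    normalized to maximal amplitude 1 (stand-in for section 3.1.6)."""
    a = dt / tau_w                      # filter coefficient for timestep dt
    w = np.zeros(n_steps)
    xi = rng.standard_normal(n_steps)
    for i in range(1, n_steps):
        w[i] = (1 - a) * w[i - 1] + a * xi[i]
    return w / np.max(np.abs(w))

def training_input(f, eta, n_steps, dt, tau_w, rng):
    """u_f(t) = eta * sin(2*pi*f*t) + Wiener noise, cf. eq. (5.11)."""
    t = np.arange(n_steps) * dt
    return eta * np.sin(2 * np.pi * f * t) + wiener_noise(n_steps, dt, tau_w, rng)
```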
After a warm-up time of 0.5 s, a number of RMS profiles are recorded as training values. These RMS profiles are obtained by integrating over a short integration time window ∆tI = 0.02 s, spaced apart by a sample time tS = 0.1 s, to allow each RMS profile to have a different noise influence. Examples of the training profiles for f = 300 Hz are shown in figure 5.7, for a training set with η = 0.4.
Figure 5.7: Training RMS profiles yRMS (mm) versus x (mm) for f = 300 Hz. The profiles are obtained by RMS integration over ∆tI = 0.02 s. The input signal has signal to noise ratio η = 0.4.
In the detection time experiments, we assume the parameters (e.g. the signal to noise ratio η) are chosen such that the frequency can be classified correctly from the test RMS profile. In the testing phase, the input signal is a frequency coded signal based on the signal f(t), as described in section 5.2. The pure input signal ū_test and the actual input signal with Wiener noise are

\bar{u}_{test}(t) = \sin\left( 2\pi \int_0^t f(\tau)\, d\tau \right)    (5.12)

u_{test}(t) = \eta \, \bar{u}_{test}(t) + u_W(t)    (5.13)
The frequency jumps between the discrete frequencies of the classes, as plotted in the top panel of figure 5.8. The hold time is Th = 0.4 s.
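The phase-continuous encoding of (5.12) can be sketched with a cumulative sum (NumPy assumed, function name ours):

```python
import numpy as np

def frequency_coded_signal(f_of_t, dt):
    """u(t) = sin(2*pi * integral_0^t f(tau) dtau), cf. eq. (5.12).
    The cumulative sum approximates the integral, so the phase stays
    continuous across frequency jumps and the loading has no jumps."""
    phase = 2 * np.pi * np.cumsum(f_of_t) * dt
    return np.sin(phase)
```

Because only the phase increment changes at a frequency jump, the force signal itself remains smooth, which is the point of the frequency coding.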
To perform classification, a dense sequence of RMS profiles is constructed. These windows still have size ∆tI = 0.02 s, but are spaced apart by a much smaller sampling time tS = 0.005 s. This allows for an almost continuous generation of RMS profiles, and thus an almost continuous time axis for frequency detection. It should be kept in mind, however, that ∆tI is the shortest possible time after which we can expect to capture the effect of a new frequency. We denote the detected frequency as a function of time as z(t), as before. This best estimator is now constructed not through linear regression but through classification.
Figure 5.8: Test set input frequency f(t) (upper panel) and detected frequency z(t) (lower panel), both in Hz, over t = 11.0 to 13.0 s. The signal to noise parameter η = 1. The frequency jumps are indicated with dashed lines. The time before the new frequency is detected is seen in the lower panel; this is the detection time Td.
We now introduce the detection time Td per frequency jump as the time it takes to identify the RMS profile of the correct frequency. It is easily seen in the lower panel that this is the time between the dashed vertical indicator and the moment z(t) = f(t). Figure 5.9 shows the detection times for the different classification frequencies, for an experiment with η = 1 and damping τd = 0.2 s. From this figure it is clear that the detection time depends on the frequency. This already indicates that some frequencies have a more distinctive pattern than others.
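Computing Td from the detected and target frequency traces can be sketched as follows (a simplified version, function name ours, which takes the first correct classification after the jump):

```python
import numpy as np

def detection_time(t, f_target, z_detected, t_jump):
    """Detection time Td for one frequency jump at t_jump: the time until
    the detected frequency z(t) first equals the new target frequency."""
    hit = (t >= t_jump) & (z_detected == f_target)
    if not np.any(hit):
        return None                 # never detected within the record
    return t[np.argmax(hit)] - t_jump   # argmax finds the first True
```

In the experiments the detection is additionally required to be stable, so the measured Td can be somewhat larger than this first-hit estimate.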
5.4.2 Influence of noise
Figure 5.10 shows the detection time as a function of the noise parameter η. For this experiment we divided the frequency domain into 10 evenly spaced target frequencies, from 100 Hz to 550 Hz in steps of 50 Hz. The upper limit for Td is the time f(t) stays constant per frequency, 0.4 s. Td = 0.4 s indicates that the detected frequency z does not stably
Figure 5.9: Detection times Td (s) for the different classification frequencies (Hz). The signal to noise parameter η = 1. It is clear that the detection time depends on the frequency.
Figure 5.10: Detection time Td (s) as a function of the noise parameter η (logarithmic axis). The colored dashed lines are the Td for the different frequencies (100, 200, 300, 400 and 500 Hz shown); the thick black line is the average; the upper limit is 0.4 s.
correspond to the target frequency f at the end of the 0.4 s. It is clear that different frequencies have very different robustness against noise. The trend appears to be that the lower frequencies (100 to 250 Hz) are completely unrecognizable at a high amount of noise (Td = 0.4 s when η = 0.1). The higher frequencies take longer to be detected (it takes a while before their RMS profile, with many bumps, is built up), but they are more robust against high noise.
Around η = 1.0 the average detection time saturates at a value around Td = 0.12 s. For reference, this Td is about 10 roundtrip times of the slab, and about 0.5 τd.
5.4.3 Eigenfrequencies
Figure 5.11: Detection time Td (s) as a function of the noise parameter η for eigenfrequencies (98.0, 214.7, 298.8, 383.1 and 489.5 Hz shown). The colored dashed lines are the Td for the different frequencies; the thick black line is the average. Note that even for very high noise (η = 0.1, 10% signal) all frequencies are detected. Note the different axis limits compared with the previous figure.
In section 5.3, certain eigenfrequencies of the system were identified as the peaks in the average transfer function. For these frequencies the steady state RMS profiles show large displacements. We therefore expect these frequencies to be easily detectable, even with a high amount of noise.
To verify this idea, we selected the classification frequencies at the peaks of the average transfer function: f = [98.02, 159.57, 214.72, 248.22, 298.76, 351.3, 383.1, 453.1, 489.5, 553.0] Hz. This is in contrast with the equally spaced frequencies of figure 5.10. Our intuition is confirmed in figure 5.11. Using these high-throughput frequencies (peak frequencies), the amount of noise that can be tolerated is much higher. The saturation Td for the best eigenfrequencies is reached at a signal to noise ratio η ≈ 1/10, while for the other frequencies saturation was only reached at η = 1. It should be noted that not every eigenfrequency gives this strong response; rather, these frequencies were selected from the peaks of Hav, see figure 5.4.
5.4.4 Influence of damping
Figure 5.12: Detection time Td (s) as a function of the damping time τd (s, logarithmic axis). The high damping limit is on the left side, where the decay time constant τd is small. The target frequencies are the peak frequencies of the average transfer function (98.0, 214.7, 298.8, 383.1 and 489.5 Hz shown). The vertical black line indicates the τd of the previous experiments.
Finally, we performed the same classification experiment to find Td, but for the slab system with different damping time constants. As target frequencies, the same peak frequencies (eigenfrequencies) are taken as in the previous paragraph, since we confirmed that for these frequencies the detection is much more robust. The signal to noise ratio is 0.6, a safe amount of noise.
The results of these experiments are plotted in figure 5.12 and show that for high damping, new input frequencies can be detected much faster. This can be explained by the fact that pre-existing vibrations are damped out much faster. Yet it is important to realize that, despite the high damping, the RMS profiles are still different enough for the linear classifier to correctly classify each pattern after this short detection time. In the simulation with the strongest damping, τd = 0.01 s, the detection time is 20.6 ms, which is clearly limited by the time-window precision ∆tI = 20 ms. This is only about two periods of the lowest frequency (98 Hz) and less than two roundtrip times for a wave to travel back and forth in the slab.
5.5 Memory capacity for frequency encoded signals
We discussed the memory capacity of the LTI system in section 5.1. There, the input signal was white noise sampled at a very high rate, and the output was the instantaneous displacement y(xi, tn). This was more of a theoretical construct: this kind of loading has little to do with reality. The memory function for this signal is low: through linear estimation one cannot reconstruct much of the original signal.
Now we will use the frequency encoding as described in sections 5.2, 5.3 and 5.4. We
perform an exploratory experiment to determine the memory function and memory
capacity for this encoding.
Experiment
We encode a discrete sequence of frequencies f(n), randomly sampled between 100 and 600 Hz, in the way described in figure 5.3. We hold each frequency for a hold time Th = 0.05 s. This is about 4 times the roundtrip time τr and, according to figure 5.12, long enough in the high damping case to classify between frequencies. As readout, we take the RMS profile from integration over a window of size ∆tI = 0.02 s, right before the frequency jumps. No noise is added. We construct sequences f(n) and yRMS(n) for n = 0 to 10,000. The experiment is repeated for 12 different damping times τd, logarithmically spaced over [0.02, 1.0] s.
The memory function (5.3) and memory capacity (5.4) for this sequence are computed. Notice that m(0) is the (squared) correlation between the frequency f(n) and its linear estimate based on the RMS profile it generated at the end of the hold time Th.
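The computation can be sketched as follows (NumPy assumed; function names ours, and m(k) taken as the squared correlation between the delayed input and its least squares estimate, consistent with equations (5.3) and (5.4)):

```python
import numpy as np

def memory_function(X, s, k_max):
    """Memory function m(k): squared correlation between s(n-k) and its
    best linear estimate from the readout profiles X (shape (N, M))."""
    Xb = np.hstack([X, np.ones((len(s), 1))])      # bias column
    m = np.zeros(k_max + 1)
    for k in range(k_max + 1):
        A, target = Xb[k:], s[: len(s) - k]        # pair X(n) with s(n-k)
        w, *_ = np.linalg.lstsq(A, target, rcond=None)
        m[k] = np.corrcoef(A @ w, target)[0, 1] ** 2
    return m

def memory_capacity(m):
    """Memory capacity MC = sum over k of m(k)."""
    return float(np.sum(m))
```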
Memory function and capacity
Figure 5.13 shows the memory function for delay time k. Clearly, the same effect is at play as in section 5.1: for higher damping, the peak at m(0) is higher, but the extent of the memory is shorter. We see that for frequency encoding and high damping, m(0) is almost one.
Figure 5.13: Memory function m(k) versus delay k, for damping times τd = 0.0285, 0.0580, 0.1180, 0.2410, 0.4910 and 1.0000 s.
This means that we can use the RMS profile and simple linear regression to reconstruct the input frequency with high accuracy, even for a short hold time Th. We also see that for lower damping, we have considerable memory over multiple hold times Th.
Figure 5.14: Memory capacity MC as a function of τd (s, logarithmic axis).
In figure 5.14 we plot the memory capacity as a function of the damping time. The memory capacity reaches a maximum for damping time τd = 0.241 s. Notice that for high damping the immediate reconstruction quality is highest, but the memory capacity is not maximal.
Another remark is that the upper limit of MC = 99 is far from reached. We attribute this to the information loss caused by the RMS integration, and to the fact that not all of the y(x, t) readout from the LTI system is used. The upside is that this is a realistic way of encoding information into the physical system, and the natural way in which vibrational information would be transferred to mechanical systems in real-life applications.
Chapter 6
Conclusion
In this thesis we explored how an elastic slab system can be used to process information. The main conclusion is that the system performs a mapping from the frequency domain to the spatial domain. We therefore propose to encode information in a mechanical elastic system by frequency encoding, and to read out the vibration amplitude.
Chapter 2 introduces the finite element method as an analysis tool for elastic wave problems. pyFormex and CalculiX are the software tools we use for model generation and calculation, respectively. Due to technical difficulties with CalculiX, we decided to work in the linear regime, implying that the whole system can be described as an LTI system.
In chapter 3 we introduced the elastic system used for information processing: a simple rectangular slab, loaded by a single input signal u(t). The upper side of the slab is used to read out the displacements. It became clear that material damping is essential for the elastic system to be well-behaved, so it had to be implemented ad hoc in CalculiX. The convergence of our calculations was checked, as an essential part of any finite element analysis.
Chapter 4 is a familiarization with the slab system. The eigenmodes and eigenfrequencies are introduced. We investigate the effect of sinusoidal loading with a constant frequency and make the important observation that different frequencies map to different profiles on the top side.
This idea proves to be essential in chapter 5. There we suggest that, instead of directly encoding a signal in the input force u(t), it is better to encode the data signal in the frequency. This preserves the natural language of the elastic system by applying a smoothly varying force loading, even when the data signal is discontinuous. We examine how the elastic slab system processes this kind of input signal. In section 5.3 the amplitude of the response to a constant frequency is visualized with colorplots of the transfer function. We establish that the average transfer function identifies some eigenfrequencies as high-throughput frequencies, i.e. the system responds to these frequencies with high amplitude vibrations.
In sections 5.4 and 5.5 we investigate the dynamics of encoding a discrete signal on the slab system. We introduced a simple classification task: detecting the switching between a discrete set of frequencies based on the amplitude profile on the readout side. We conclude that the time it takes to detect a frequency switch increases when noise is added, and decreases when stronger damping is applied. By using the specific set of eigenfrequencies determined by the peaks in the average transfer function, the detection performance can be drastically improved. Finally, through the memory function and memory capacity of the system with frequency-encoded discrete signals, we sketched its behavior as a discrete reservoir. Here we concluded that high damping improves the immediate linear reconstruction but decreases the extent of the system's memory. The total memory capacity reaches a maximum for intermediate damping.
Outlook
This thesis was an exploration of both the finite element method for nonlinear elastic simulations and the characterization of a linear elastic material as a reservoir. The analysis of the elastic system as a reservoir is far from complete. In the first place, the system was described here in discrete time, and in discrete space through the FEA mesh discretization. The extension to continuous space and time is nontrivial.
An interesting path for further research is to investigate the memory function and capacity for frequency-encoded discrete signals in relation to the different parameters of the encoding and of the slab system. This measure can shed light on the relation between the roundtrip time τr, the damping time τd and the hold time Th. Notably, decreasing the hold time Th and increasing the driving frequencies and the FEM mesh density are expected to improve the memory capacity. Reading out at a number of discrete time points per sample might also lead to strong improvements.
This work was exploratory, and many extensions are possible. One possibility is to extend to nonlinear materials, once this is computationally feasible. The extension to different geometries and boundary conditions is another interesting path to explore.
Appendix A
Memory function of LTI systems
Since many dynamical systems in engineering practice are described by linear time-invariant systems, among which our elastic slab system in the low-force regime, it is of interest to derive an expression for the memory function m(k) as a function of the impulse response. In 4.4.2, equation (4.13), we introduced the impulse response to characterize the system. The impulse response is a function of time and position, and is measured in the nodes of the finite element mesh at the discrete timesteps of the simulation.
Therefore we can assemble the discrete impulse responses in a matrix G of dimension (N′ × M):

G = \begin{pmatrix}
h(x_1, t_1) & h(x_2, t_1) & \dots & h(x_M, t_1) \\
h(x_1, t_2) & h(x_2, t_2) & \dots & h(x_M, t_2) \\
\vdots & \vdots & \ddots & \vdots \\
h(x_1, t_{N'}) & h(x_2, t_{N'}) & \dots & h(x_M, t_{N'})
\end{pmatrix}    (A.1)
The matrix G describes the full dynamics of the system up to time t_{N'} = N' dt.
From equation (4.16) we know that we can construct every nodal output as a convolution
with the nodal impulse response:
y(x, t_n) = \sum_{j=0}^{n} u(t_j) \, h(x, t_n - t_j)    (4.16, rev)
The input and output signals have length N, which we will later assume to be very large (N → ∞), since we will eliminate the input signal. In contrast, the length N′ of the measured impulse response is large but finite, for example N′ = 40,000 for a measured impulse response of 2.0 s with timestep dt.
Assembling these nodal displacements in the matrix X as before, we can write (4.16) as a matrix multiplication:
X = U G (A.2)
where we introduced the (N × N′) matrix U, containing on each row the time-reversed, shifted and truncated input signal needed to compute the convolution:

U = \begin{pmatrix}
u(1) & 0 & 0 & \dots & 0 \\
u(2) & u(1) & 0 & \dots & 0 \\
u(3) & u(2) & u(1) & & 0 \\
\vdots & & & \ddots & \vdots \\
u(N') & u(N'-1) & u(N'-2) & \dots & u(1) \\
u(N'+1) & u(N') & u(N'-1) & \dots & u(2) \\
\vdots & & & & \vdots \\
u(N) & u(N-1) & u(N-2) & \dots & u(N-N')
\end{pmatrix}    (A.3)
We now assume that the input values u(n), n = 1, 2, . . . , N are drawn i.i.d. from the uniform distribution on [−1, 1]. Furthermore, we assume that the impulse responses are measured for as long as the input vector was generated. In practice, the impulse responses are truncated once they have decayed below a certain point.
We will now express the weight matrix Wk (for the optimal linear estimation of u(n − k), and thus the construction of m(k)) as a function of the output matrix X and of the system's dynamics matrix G. Wk is the least squares solution of:
\begin{pmatrix}
0 & \dots & 0 \\
\vdots & & \vdots \\
0 & \dots & 0 \\
X_{(k+1),1} & \dots & X_{(k+1),M} \\
X_{(k+2),1} & \dots & X_{(k+2),M} \\
\vdots & & \vdots \\
X_{N,1} & \dots & X_{N,M}
\end{pmatrix} W_k =
\begin{pmatrix}
0 \\ \vdots \\ 0 \\ u(1) \\ u(2) \\ \vdots \\ u(N-k)
\end{pmatrix}    (A.4)
where the first matrix on the left hand side is a truncated version of the full output matrix X, in which the first k rows are set to zero. This matrix can be written as Sk X, where
S_k = \mathrm{diag}(\underbrace{0 \dots 0}_{k}, \underbrace{1 \dots 1}_{N-k})    (A.5)
The right hand side of (A.4) contains the kth column of the matrix U (A.3), which we will further denote as Uk. Together, Wk is determined from

S_k X \, W_k = U_k    (A.6)

with least squares solution
W_k = (X^T S_k^T S_k X)^{-1} X^T S_k U_k    (A.7)
    = \big( G^T \underbrace{U^T S_k^T S_k U}_{=\Lambda_k} G \big)^{-1} G^T \underbrace{U^T S_k U_k}_{=\frac{N-k}{3}\delta_k}    (A.8)
We elaborate the first underbraced part, Λk. Each element of (Sk U)^T Sk U is an inner product of two vectors, where the vectors are columns of the truncated input matrix
S_k U = \begin{pmatrix}
0 & \dots & & & & \dots & 0 \\
\vdots & & & & & & \vdots \\
0 & \dots & & & & \dots & 0 \\
u(k+1) & u(k) & \dots & u(1) & 0 & \dots & 0 \\
u(k+2) & u(k+1) & \dots & u(2) & u(1) & & 0 \\
\vdots & & \ddots & & & & \vdots \\
u(N') & u(N'-1) & \dots & u(N'-k) & u(N'-k-1) & \dots & u(1) \\
\vdots & & & & & & \vdots \\
u(N) & u(N-1) & \dots & u(N-k) & u(N-k-1) & \dots & u(N-N')
\end{pmatrix}    (A.9)
The off-diagonal elements are inner products of two time-shifted versions of the input u (thus the autocorrelation). Since the input is i.i.d., the autocorrelation is 0.
The diagonal elements of Λk are the inner products of (a part of) u(n) with the same time shift. The length of these vectors varies between N − k for the first k columns and N − N′ for the last column. In any case, since we take N → ∞, the inner product matrix becomes

\Lambda_k = (S_k U)^T S_k U    (A.10)
    = N \sigma^2(u) \, I_{N'}    (A.11)
    = \gamma \, I_{N'}    (A.12)

with I_{N'} the identity matrix of size N′. We also introduced \gamma = \frac{N}{3} \approx \frac{N-k}{3}, since \sigma^2(u) = 1/3 for the uniform distribution on [−1, 1].
The last underbraced part of (A.8) is a similar inner product:

(S_k U)^T U_k = U^T U_k    (A.13)
    = \frac{N-k}{3} \, \delta_k    (A.14)
    = \gamma \, \delta_k    (A.15)
where δk is an N′ × 1 vector of zeros with the kth element equal to 1. Multiplication with G^T, of dimension (M × N′), then selects one specific timestep of the impulse response on all nodes:

G^T U^T U_k = \gamma \, G_k    (A.16)

where Gk is the (M × 1) vector containing the impulse response at timestep k over all nodes:

G_k = [h(x_1, t_k) \dots h(x_M, t_k)]^T    (A.17)
This reduces (A.8) to

W_k = (G^T \Lambda_k G)^{-1} \gamma \, G_k    (A.18)
    = (G^T G)^{-1} G_k    (A.19)
Here we verify that (G^T G) is indeed nonsingular, since the impulse responses are unique for each node in the case of an asymmetric loading, x_c ≠ 0. The matrix (G^T G) also has the same form as the covariance matrix of a random vector, except for a factor 1/N′. Since the impulse response is exponentially damped, this inner product does not depend on the tail of the impulse response and can safely be truncated for a certain N′ ≫ τ_d/dt. This means that G^T G, and not the covariance matrix, is the measure that is independent of N′.
Now the linear estimator Zk that minimizes the MSE with respect to Uk is

Z_k = X \, W_k    (A.21)
    = U \, G \, (G^T G)^{-1} G_k    (A.22)
With this expression (A.22) we can evaluate the memory function m(k)
m(k) = \frac{\mathrm{cov}^2(Z_k, U_k)}{\sigma^2(U_k)\, \sigma^2(Z_k)}    (A.23)
    = \frac{\left( \frac{1}{N-k} [Z_k - \mu]^T U_k \right)^2}{\left( \frac{1}{3} \right) \left( \frac{1}{N-k} [Z_k - \mu]^T [Z_k - \mu] \right)}    (A.24)
where we now assume that the responses on all nodes have zero mean, and thus µ = E(Zk) = 0. This is experimentally confirmed for our slab system, since the impulse responses have zero mean on every node. Now (A.24), with Zk in the form of (A.22), expands to
m(k) = \gamma^{-1} \, \frac{\big[ G_k^T (G^T G)^{-1} \overbrace{G^T U^T U_k}^{\text{(A.16)}} \big]^2}{G_k^T (G^T G)^{-1} G^T \underbrace{U^T U}_{=\gamma I_{N'}} G \, (G^T G)^{-1} G_k}    (A.25)
    = \gamma^{-2} \, \frac{\big[ G_k^T (G^T G)^{-1} \gamma \, G_k \big]^2}{G_k^T (G^T G)^{-1} \, G^T G \, (G^T G)^{-1} G_k}    (A.26)
    = \frac{\big[ G_k^T (G^T G)^{-1} G_k \big]^2}{G_k^T (G^T G)^{-1} G_k}    (A.27)
We have finally obtained a form, (A.26), that is independent of the input U and is a function only of the system's impulse response matrix G. This yields the final form

m(k) = G_k^T (G^T G)^{-1} G_k    (A.28)

We recognize in this form the expression for the memory function of White et al. [8] for linear discrete-time systems, where s(n) is an impulse.
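The final form (A.28) is straightforward to evaluate numerically. A sketch (NumPy assumed, function name ours); note that the m(k) are the diagonal elements of the projection matrix G (G^T G)^{-1} G^T, so they lie in [0, 1] and sum to M:

```python
import numpy as np

def memory_function_from_impulse(G):
    """m(k) = G_k^T (G^T G)^{-1} G_k, cf. eq. (A.28).
    G: (N', M) matrix of impulse responses h(x_j, t_k)."""
    P = np.linalg.inv(G.T @ G)          # (M, M); requires G^T G nonsingular
    # the diagonal of G P G^T gives m(k) for all timesteps k at once
    return np.einsum('km,mn,kn->k', G, P, G)
```

This makes the total memory capacity, the sum of m(k), equal to M for a full-rank G, consistent with the theoretical upper limit MC = M discussed in chapter 5.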
Bibliography
[1] Herbert Jaeger. The "echo state" approach to analysing and training recurrent neural networks. Technical Report 148, GMD - German National Research Center for Information Technology, pages 1–47, 2001.
[2] Wolfgang Maass, Thomas Natschläger, and Henry Markram. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation, 14(11):2531–2560, November 2002.
[3] J. J. Steil. Backpropagation-decorrelation: online recurrent learning with O(N) complexity. In Proceedings of IJCNN '04, volume 2, pages 843–848. IEEE, 2004. ISBN 0-7803-8359-1. doi: 10.1109/IJCNN.2004.1380039.
[4] D Verstraeten, B Schrauwen, M D’Haene, and D Stroobandt. An experimental
unification of reservoir computing methods. Neural networks, 20(3):391–403, April
2007.
[5] Herbert Jaeger and Harald Haas. Harnessing nonlinearity: predicting chaotic sys-
tems and saving energy in wireless communication. Science (New York, N.Y.), 304
(5667):78–80, April 2004.
[6] D. Verstraeten, B. Schrauwen, D. Stroobandt, and J. Van Campenhout. Isolated
word recognition with the Liquid State Machine: a case study. Information Pro-
cessing Letters, 95(6):521–528, September 2005.
[7] H Jaeger. Short term memory in echo state networks. Technical report, 2002.
[8] Olivia White, Daniel Lee, and Haim Sompolinsky. Short-Term Memory in Orthog-
onal Neural Networks. Physical Review Letters, 92(14):148102, April 2004.
[9] Michiel Hermans and Benjamin Schrauwen. Memory in linear recurrent neural net-
works in continuous time. Neural networks : the official journal of the International
Neural Network Society, 23(3):341–55, April 2010.
[10] Joni Dambre, David Verstraeten, Benjamin Schrauwen, and Serge Massar. Infor-
mation processing capacity of dynamical systems. Sci. Rep., 2:514, January 2012.
[11] Helmut Hauser, Auke J. Ijspeert, Rudolf M. Füchslin, Rolf Pfeifer, and Wolfgang Maass. Towards a theoretical foundation for morphological computation with compliant bodies. Biological Cybernetics, (2011):355–370, January 2012.
[12] Rolf Pfeifer, Max Lungarella, and Fumiya Iida. Self-organization, embodiment,
and biologically inspired robotics. Science (New York, N.Y.), 318(5853):1088–93,
November 2007.
[13] Jeremy A. Fishel and Gerald E. Loeb. Bayesian exploration for intelligent identification of textures. Frontiers in Neurorobotics, 6(June):4, January 2012.
[14] Roland S Johansson and J Randall Flanagan. Coding and use of tactile signals
from the fingertips in object manipulation tasks. Nature reviews. Neuroscience, 10
(5):345–59, May 2009.
[15] M Hollins and S R Risner. Evidence for the duplex theory of tactile texture per-
ception. Perception & psychophysics, 62(4):695–705, May 2000.
[16] Sliman Bensmaïa and Mark Hollins. Pacinian representations of fine surface texture. Perception & Psychophysics, 67(5):842–854, July 2005.
[17] Vincent Hayward. Is there a ’plenhaptic’ function? Philosophical transactions
of the Royal Society of London. Series B, Biological sciences, 366(1581):3115–22,
November 2011.
[18] J. Scheibert, S. Leurent, A. Prévost, and G. Debrégeas. The role of fingerprints in the coding of tactile information probed with a biomimetic sensor. Science, 323:1503–1506, 2009.
[19] Chrisantha Fernando and Sampsa Sojakka. Pattern recognition in a bucket. Ad-
vances in Artificial Life, 2003.
[20] Kristof Vandoorne, Wouter Dierckx, Benjamin Schrauwen, David Verstraeten, Roel
Baets, Peter Bienstman, and Jan Van Campenhout. Toward optical signal process-
ing using photonic reservoir computing. Optics express, 16(15):11182–92, July 2008.
[21] Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, and S. Massar. Optoelectronic reservoir computing. Scientific Reports, 2:287, January 2012.
[22] L Larger, M C Soriano, D Brunner, L Appeltant, J M Gutierrez, L Pesquera,
C R Mirasso, and I Fischer. Photonic information processing beyond Turing: an
optoelectronic implementation of reservoir computing. Optics express, 20(3):3241–
9, January 2012.
[23] Kyran Dale and Phil Husbands. The evolution of reaction-diffusion controllers for
minimally cognitive agents. Artificial life, 16(1):1–19, January 2010.
[24] Ken Caluwaerts, Michiel D’Haene, David Verstraeten, and Benjamin Schrauwen.
Locomotion without a brain: physical reservoir computing in tensegrity structures.
Artificial life, 19(1), 2013.
[25] Thomas Weyn. Het gebruik en de optimalisatie van een massa-veer-dempersysteem als fysiek reservoir [The use and optimization of a mass-spring-damper system as a physical reservoir]. Master's thesis, 2012.
[26] K J Bathe. Finite element procedures. Prentice Hall, 1996. ISBN 9780133014587.
[27] O. C. Zienkiewicz and R. L. Taylor. The Finite Element Method: Solid Mechanics. Number v. 2 in Finite Element Method Series. Butterworth-Heinemann, Oxford, fifth edition, 2000. ISBN 9780750650557.
[28] K Ho-Le. Finite element mesh generation methods: a review and classification.
Computer-aided design, 1988.
[29] Guido Dhondt. CalculiX CrunchiX User's Manual version 2.5. 2.5 edition, 2012.
[30] Friedrich Moser, Laurence J. Jacobs, and Jianmin Qu. Modeling elastic wave propagation in waveguides with the finite element method. NDT & E International, 32(4):225–234, June 1999.
[31] Sondipon Adhikari. Damping models for structural vibration. PhD thesis, Cambridge University, 2000.
[32] S. Adhikari. Damping modelling and identification using generalized proportional damping. Proceedings of the 23rd International Modal Analysis Conference, 2005.
[33] DS Simulia. Abaqus 6.11 Manual / Getting Started with Abaqus: Interactive Edition, 2011.
[34] Dassault Systèmes. Dassault Systèmes Completes the Acquisition of ABAQUS Inc.
and Introduces the SIMULIA Brand, 2005. URL http://www.3ds.com/company/
news-media/press-releases-detail/release//single/965/?no_cache=1.
[35] E Volterra and E C Zachmanoglou. Dynamics of vibrations. Number v. 1 in Dy-
namics of Vibrations. C.E. Merrill Books, 1965.
[36] Matt Pharr and Greg Humphreys. Chapter 7: Sampling and Reconstruction. In Physically Based Rendering: From Theory to Implementation, pages 279–367. Morgan Kaufmann Publishers Inc., 2004. ISBN 012553180X.
[37] M. Schetzen. The Volterra and Wiener Theories of Nonlinear Systems. A Wiley-Interscience publication. Wiley, 1980. ISBN 9780471044550.
[38] G. Kerschen, M. Peeters, J. C. Golinval, and A. F. Vakakis. Nonlinear normal modes, Part I: A useful framework for the structural dynamicist. Mechanical Systems and Signal Processing, 23(1):170–194, January 2009.
[39] Behnam Salimbahrami and Boris Lohmann. Structure preserving order reduction
of large scale second order systems. . . . Symposium on Large Scale Systems: . . . ,
2004.
[40] B. Moore. Principal component analysis in linear systems: Controllability, observability, and model reduction. IEEE Transactions on Automatic Control, 26(1):17–32, February 1981.
[41] C. M. Bishop. Pattern Recognition and Machine Learning. Information Science and Statistics. Springer, 2006. ISBN 9780387310732.
List of Figures
2.1 2D Mesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Stress and strain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 FEM Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.4 1D interpolation functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 pyFormex Helix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.6 NaN bug . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.1 Slab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Example of input and output of slab . . . . . . . . . . . . . . . . . . . . . 29
3.3 Loading profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.4 Wiener noise and spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.5 Workflow of framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.6 Displacement in the y-direction at the central top node . . . . . . . . . . 41
3.7 τd versus β . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.8 Convergence signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.9 Different mesh sizes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.10 Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.11 Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.1 Four first eigenmodes of slab . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.2 Eigenfrequencies of slab . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.3 Transient response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.4 Buildup of profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.5 Frequency profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.6 RMS profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.7 RMS profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.8 RMS profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.9 Geometrical nonlinearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.10 Geometrical nonlinearity instability . . . . . . . . . . . . . . . . . . . . . . 59
4.11 Additivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.12 Error on addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.13 Transfer function estimation . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.14 Convolution substitute for simulation . . . . . . . . . . . . . . . . . . . . . 65
5.1 Memory function of the slab wave system . . . . . . . . . . . . . . . . . . 69
5.2 Memory function of the slab wave system . . . . . . . . . . . . . . . . . . 70
5.3 Frequency coding of discrete signals . . . . . . . . . . . . . . . . . . . . . 72
5.4 Average transfer function . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.5 Transfer function as function of position . . . . . . . . . . . . . . . . . . . 75
5.6 Transfer function colorplots . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.7 Training RMS profiles for f=300 Hz . . . . . . . . . . . . . . . . . . . . . 78
5.8 Test set input frequency f(t) with hold time Th = 0.4 s and detected frequency z(t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.9 Detection times for different frequencies . . . . . . . . . . . . . . . . . . . 80
5.10 Detection time in function of noise . . . . . . . . . . . . . . . . . . . . . . 80
5.11 Detection times for eigenfrequencies . . . . . . . . . . . . . . . . . . . . . 81
5.12 Detection time in function of damping . . . . . . . . . . . . . . . . . . . . 82
5.13 Memory function for different τd . . . . . . . . . . . . . . . . . . . . . . . 84
5.14 Memory capacity in function of τd . . . . . . . . . . . . . . . . . . . . . . 84
List of Tables
3.1 Overview of the standard simulation parameters . . . . . . . . . . . . . . 28
3.2 Comparison of computation time for the slab simulations (observed) and the contact simulations (estimated). . . . . . . . . . . . . . . . . . . . . . 33
3.3 SI units and the corresponding FEA units. . . . . . . . . . . . . . . . . . . 34
3.4 The material properties in SI units and FEA units. . . . . . . . . . . . . . 34
3.5 Convergence experiments according to number of elements. . . . . . . . . 47
4.1 Table with the first 60 eigenfrequencies. . . . . . . . . . . . . . . . . . . . 52
5.1 Comparison: Direct encoding of signal versus frequency coding. . . . . . . 73