
Page 1:

Alignment and Algorithms

•Scoring Matrices: models of evolution

•Dynamic Program: speed

•Pairwise Sequence Alignment, variants

•Significance

•Pattern recognition, Hidden Markov Models (HMM)

•Multiple Sequence Alignment (MSA)

•Protein families

•Structure features

Page 2:

Scoring Models and Evolution

• Several systems of scoring matrices exist.
• Statistically based on curated data, i.e., on confident alignments of multiple sequences.

The two best known:
PAM matrices: Dayhoff et al., 1967
BLOSUM matrices: Henikoff & Henikoff, 1992

Page 3:

Scoring matrices, cont.

Recall: we use p(a,b) = the probability that a (an amino acid residue) mutated into b.

PAM (Point Accepted Mutation) uses sequence data for proteins which can be aligned at 99% sequence identity. In such aligned sequences, one counts the frequency of each residue, and of each pair of residues in a “column”.

Page 4:

Scoring matrices, cont.

Basically, the score s(a,b) from this data is a “LOD score”:

s(a,b) = log( p(a,b) / (p(a) p(b)) ).

Comparison of two “models”: significant association vs. independent randomness.

Finally, the scores are scaled and rounded to make the matrices integer-valued.
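As a sketch of how such a score arises, here is a minimal Python illustration. The probabilities and the scale factor below are invented toy numbers, not real PAM data; the scaling and rounding mimic how published matrices are made integer-valued.

```python
import math

def lod_score(p_ab, p_a, p_b, scale=2.0):
    """Log-odds (LOD) score for a residue pair (a, b): the log of the
    observed joint probability over the product expected under
    independence, scaled and rounded to an integer."""
    return round(scale * math.log2(p_ab / (p_a * p_b)))

# Toy numbers (illustrative only): suppose residues a and b each occur
# with frequency 0.05, and the pair appears together in aligned columns
# with probability 0.004 -- more often than the 0.0025 expected by chance.
print(lod_score(0.004, 0.05, 0.05))  # positive score: significant association
print(lod_score(0.001, 0.05, 0.05))  # negative score: pair is avoided
```

A positive entry thus means the substitution is seen more often than independent randomness would predict, a negative entry less often.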

Page 5:

Scoring matrices, cont.

Interpretation: Dayhoff et al. define a short unit of evolutionary time, the PAM unit. Only very conservative changes will be seen in the composition of evolving proteins on this scale.

To extend to more distant similarities, a time-independent Markov assumption is used.

Page 6:

Scoring matrices, cont.

p(a,b) = p(a-->b|t=1) gives the transition probability from a --> b (or b --> a: we cannot keep track of directionality) in one unit of PAM time.

The time-independent Markov model assumes the two-unit probability p(a-->b|t=2) is simply the sum over all intermediate one-unit transitions, a --> c followed by c --> b:

p(a-->b|t=2) = sum over c of p(a-->c|t=1) x p(c-->b|t=1)

Page 7:

Scoring matrices, cont.

If P(1) is the original 20x20 matrix of amino acid transition probabilities, then the above says simply

P(n) = P(1) x P(1) x ... x P(1)   (n factors; matrix multiplication)
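The extrapolation can be sketched in a few lines of Python. The 3x3 matrix below is an invented stand-in for the real 20x20 P(1), chosen only to show the mechanics.

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def mat_pow(A, n):
    """n-fold matrix product A x A x ... x A (n >= 1)."""
    P = A
    for _ in range(n - 1):
        P = mat_mul(P, A)
    return P

# Toy 3-state stand-in for the 20x20 one-PAM transition matrix P(1);
# entries are invented for illustration, and each row sums to 1.
P1 = [[0.90, 0.05, 0.05],
      [0.05, 0.90, 0.05],
      [0.05, 0.05, 0.90]]

P250 = mat_pow(P1, 250)
print([round(x, 3) for x in P250[0]])  # -> [0.333, 0.333, 0.333]
```

Note the behavior at long times: the rows converge toward the equilibrium composition, i.e., the process forgets the starting residue, which is one reason PAM extrapolation loses accuracy at large n.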

Page 8:

Scoring matrices, cont.

So, the PAM(n) matrix consists of LOD scores derived from transitions over PAM time n.

Available: n=1 to n=500: smaller n for more closely related sequences.

Most commonly used: 50 to 250.

Critique: accurate for short times, but drifts off at larger times. Compare the rates of nucleic acid changes to amino acid changes.

Page 9:

PAM 250 Matrix

  A R N D C Q E G H I L K M F P S T W Y V

A 2 -2 0 0 -2 0 0 1 -1 -1 -2 -1 -1 -3 1 1 1 -6 -3 0

R -2 6 0 -1 -4 1 -1 -3 2 -2 -3 3 0 -4 0 0 -1 2 -4 -2

N 0 0 2 2 -4 1 1 0 2 -2 -3 1 -2 -3 0 1 0 -4 -2 -2

D 0 -1 2 4 -5 2 3 1 1 -2 -4 0 -3 -6 -1 0 0 -7 -4 -2

C -2 -4 -4 -5 12 -5 -5 -3 -3 -2 -6 -5 -5 -4 -3 0 -2 -8 0 -2

Q 0 1 1 2 -5 4 2 -1 3 -2 -2 1 -1 -5 0 -1 -1 -5 -4 -2

E 0 -1 1 3 -5 2 4 0 1 -2 -3 0 -2 -5 -1 0 0 -7 -4 -2

G 1 -3 0 1 -3 -1 0 5 -2 -3 -4 -2 -3 -5 0 1 0 -7 -5 -1

H -1 2 2 1 -3 3 1 -2 6 -2 -2 0 -2 -2 0 -1 -1 -3 0 -2

I -1 -2 -2 -2 -2 -2 -2 -3 -2 5 2 -2 2 1 -2 -1 0 -5 -1 4

L -2 -3 -3 -4 -6 -2 -3 -4 -2 2 6 -3 4 2 -3 -3 -2 -2 -1 2

K -1 3 1 0 -5 1 0 -2 0 -2 -3 5 0 -5 -1 0 0 -3 -4 -2

M -1 0 -2 -3 -5 -1 -2 -3 -2 2 4 0 6 0 -2 -2 -1 -4 -2 2

F -3 -4 -3 -6 -4 -5 -5 -5 -2 1 2 -5 0 9 -5 -3 -3 0 7 -1

P 1 0 0 -1 -3 0 -1 0 0 -2 -3 -1 -2 -5 6 1 0 -6 -5 -1

S 1 0 1 0 0 -1 0 1 -1 -1 -3 0 -2 -3 1 2 1 -2 -3 -1

T 1 -1 0 0 -2 -1 0 0 -1 0 -2 0 -1 -3 0 1 3 -5 -3 0

W -6 2 -4 -7 -8 -5 -7 -7 -3 -5 -2 -3 -4 0 -6 -2 -5 17 0 -6

Y -3 -4 -2 -4 0 -4 -4 -5 0 -1 -1 -4 -2 7 -5 -3 -3 0 10 -2

V 0 -2 -2 -2 -2 -2 -2 -1 -2 4 2 -2 2 -1 -1 -1 0 -6 -2 4

Page 10:

Scoring matrices, cont.

BLOSUM Matrices

From the BLOCKS database. BLOSUM = BLOcks SUbstitution Matrix. BLOCKS is a DB of aligned sequences with the conserved regions marked.

The BLOSUM(L) transition probabilities are estimated from sequences with L% sequence identity. The score is again a LOD score. So, large BLOSUM L <--> small PAM n.

Warning: corrections are needed for oversampling of closely related sequences!

Page 11:

Scoring matrices, cont.

If the BLOSUM62 matrix is compared to PAM160 (its closest equivalent), one finds that the BLOSUM matrix is less tolerant of substitutions to or from hydrophilic amino acids, while more tolerant of hydrophobic changes and of cysteine and tryptophan mismatches.

PAM is less sensitive to the effect of amino acid “return” substitutions over long time periods.

Page 12:

BLOSUM62 Matrix

  C S T P A G N D E Q H R K M I L V F Y W

C 9 -1 -1 -3 0 -3 -3 -3 -4 -3 -3 -3 -3 -1 -1 -1 -1 -2 -2 -2
S -1 4 1 -1 1 0 1 0 0 0 -1 -1 0 -1 -2 -2 -2 -2 -2 -3
T -1 1 4 1 -1 1 0 1 0 0 0 -1 0 -1 -2 -2 -2 -2 -2 -3
P -3 -1 1 7 -1 -2 -1 -1 -1 -1 -2 -2 -1 -2 -3 -3 -2 -4 -3 -4
A 0 1 -1 -1 4 0 -1 -2 -1 -1 -2 -1 -1 -1 -1 -1 -2 -2 -2 -3
G -3 0 1 -2 0 6 -2 -1 -2 -2 -2 -2 -2 -3 -4 -4 0 -3 -3 -2
N -3 1 0 -2 -2 0 6 1 0 0 -1 0 0 -2 -3 -3 -3 -3 -2 -4
D -3 0 1 -1 -2 -1 1 6 2 0 -1 -2 -1 -3 -3 -4 -3 -3 -3 -4
E -4 0 0 -1 -1 -2 0 2 5 2 0 0 1 -2 -3 -3 -3 -3 -2 -3
Q -3 0 0 -1 -1 -2 0 0 2 5 0 1 1 0 -3 -2 -2 -3 -1 -2
H -3 -1 0 -2 -2 -2 1 1 0 0 8 0 -1 -2 -3 -3 -2 -1 2 -2
R -3 -1 -1 -2 -1 -2 0 -2 0 1 0 5 2 -1 -3 -2 -3 -3 -2 -3
K -3 0 0 -1 -1 -2 0 -1 1 1 -1 2 5 -1 -3 -2 -3 -3 -2 -3
M -1 -1 -1 -2 -1 -3 -2 -3 -2 0 -2 -1 -1 5 1 2 -2 0 -1 -1
I -1 -2 -2 -3 -1 -4 -3 -3 -3 -3 -3 -3 -3 1 4 2 1 0 -1 -3
L -1 -2 -2 -3 -1 -4 -3 -4 -3 -2 -3 -2 -2 2 2 4 3 0 -1 -2
V -1 -2 -2 -2 0 -3 -3 -3 -2 -2 -3 -3 -2 1 3 1 4 -1 -1 -3
F -2 -2 -2 -4 -2 -3 -3 -3 -3 -3 -1 -3 -3 0 0 0 -1 6 3 1
Y -2 -2 -2 -3 -2 -3 -2 -3 -2 -1 2 -2 -2 -1 -1 -1 -1 3 7 2
W -2 -3 -3 -4 -3 -2 -4 -4 -3 -2 -2 -3 -3 -1 -3 -2 -3 1 2 11

Page 13:

Dynamic Programming

Problem: find (1) best score and (2) best alignment

Naïve: evaluate each alignment and compare

Too many alignments!

Need more speed: DP

Page 14:

Dynamic Programming, cont.

The number of alignments is exponentially large.

Key: the problem has a linear (left-to-right) structure to it, from the linearity of proteins and nucleic acids.

This is analogous to the problem of distance; it originally arose in CS as “edit distance”.

Page 15:

Dynamic Programming, cont.

The basic algorithms here are:

(1) Needleman-Wunsch (global alignment)

(2) Smith-Waterman (local alignment)

[BLAST is a heuristic version of SW.]

(3) Many variants.

Page 16:

Dynamic Programming, cont.

The NW algorithm can be understood pictorially.

It has three basic steps:
• Initialization
• Iteration
• Traceback

Basic insight: you only have to optimize the score one step at a time, and can avoid examining most alignments explicitly.

Page 17:

Dynamic Programming, cont.

Missing scoring feature: gap penalties.

These are not as well understood as substitutions; “geometric random variable” models are used.

s(a,-) is taken independent of a, say s(a,-) = -d, the “gap penalty”.

For longer gaps, two methods:
(a) linear: k gaps in a row give penalty k x (-d);
(b) affine: k gaps in a row give penalty -e - (k-1) x d, where e is the “gap initiation penalty”.
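The two gap models can be written down directly. The penalty values d and e below are illustrative, not taken from any published scoring scheme.

```python
def linear_gap(k, d=8):
    """Linear model: each of the k gap positions costs -d."""
    return -d * k

def affine_gap(k, e=12, d=2):
    """Affine model: opening a gap costs -e, each further position -d.
    The values of e and d here are invented for illustration."""
    return -e - (k - 1) * d if k > 0 else 0

# A 3-residue gap under each model:
print(linear_gap(3), affine_gap(3))  # -24 -16
```

For long gaps the affine model is much kinder than the linear one, reflecting the biological intuition that a single insertion/deletion event of several residues is far more likely than several independent single-residue events.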

Page 18:

Dynamic Programming, cont.

Notice: the affine gap score does not depend only on the position, since you have to know whether you are continuing a gap from the left. This is our first example of dependency in scoring.

The key to the iterative step is to realize there are only three ways to extend an alignment to the pair of residues (R1, r1):

R1   R1   -
r1   -    r1

We update the score by taking the max of the three possibilities: max(s0 + s(R1,r1), s0 + s(R1,-), s0 + s(-,r1)).

Then computing the scores of all possible subalignments takes just about 5MN computations, instead of an exponential number.
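The whole recurrence, plus initialization and traceback, fits in a short Python sketch. The match/mismatch/gap values are illustrative stand-ins for a real scoring matrix, and a linear gap penalty is used.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment by dynamic programming (linear gap penalty).
    Fills a (len(a)+1) x (len(b)+1) score table, then traces back."""
    m, n = len(a), len(b)
    S = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):          # initialization: all-gap borders
        S[i][0] = i * gap
    for j in range(1, n + 1):
        S[0][j] = j * gap
    for i in range(1, m + 1):          # iteration: max of the 3 extensions
        for j in range(1, n + 1):
            s = match if a[i-1] == b[j-1] else mismatch
            S[i][j] = max(S[i-1][j-1] + s, S[i-1][j] + gap, S[i][j-1] + gap)
    # traceback: recover one optimal alignment from the filled table
    out_a, out_b, i, j = [], [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and \
           S[i][j] == S[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
            out_a.append(a[i-1]); out_b.append(b[j-1]); i -= 1; j -= 1
        elif i > 0 and S[i][j] == S[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j-1]); j -= 1
    return S[m][n], ''.join(reversed(out_a)), ''.join(reversed(out_b))

score, al_a, al_b = needleman_wunsch("ACGT", "AGT")
print(score)
print(al_a)
print(al_b)
```

The double loop is the source of the O(MN) running time claimed above; storing one arrow per cell is what makes the traceback possible.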

Page 19:

Dynamic Programming, cont.

To derive the actual alignment requires storage: one stores a traceback arrow from each position in the comparison array.

This information can be translated into the correct alignment.

Variants:

Smith-Waterman: local alignment. The adjustment is to zero out the score and start over whenever a NW score goes below zero, then proceed. One chooses the largest score of a segment, and traces back as before.
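The zero-clamping adjustment is a one-line change to the NW recurrence. Here is a minimal score-only sketch in Python, again with illustrative scoring values rather than a real matrix.

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Local alignment score: the NW recurrence, but every cell is
    clamped at zero ("start over"), and the answer is the largest
    entry anywhere in the table rather than the corner entry."""
    m, n = len(a), len(b)
    S = [[0] * (n + 1) for _ in range(m + 1)]
    best = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i-1] == b[j-1] else mismatch
            S[i][j] = max(0, S[i-1][j-1] + s, S[i-1][j] + gap, S[i][j-1] + gap)
            best = max(best, S[i][j])
    return best

# The two sequences share only the segment "ACG" in the middle:
print(smith_waterman_score("TTTACGTTT", "GGGACGGGG"))  # 6 = 3 matches x 2
```

Tracing back from the best cell (stopping at the first zero) would recover the shared segment itself, just as in the global case.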

Linear storage: saves storage.

Repeats: allows for duplication of segments.

Page 20:

Dynamic Programming, cont.

Variants (cont.):

Non-overlapping: Repeats SW, avoiding previous best alignments.

Sampling of highest hits: the best alignment score may be an artifact of our scoring system, so many alignments may be high scoring.

Affine gap penalties: one actually has to carry more states along, a precursor to Hidden Markov Models.

Page 21:

Significance

The E-value in BLAST was discussed last time. It is equivalent to the statistical concept of a p-value.

S = the score of the best alignment, a random variable.

The p-value for an alignment is the probability that such a high score could have been found randomly.

Page 22:

Significance, cont.

This can be estimated two ways:

Ungapped local alignment: Karlin-Altschul statistics.

Generally, KA statistics are not available; instead one relies on simulations: take two sequences A and B, randomly “shuffle” one, and recompare. This eliminates composition bias in the test samples, but randomizes the rest of the biological information in the sequence, namely the order of the residues (or nt’s). SW implementations compute this for hundreds of shuffles.
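A minimal Python sketch of the shuffle test. The match-count score below is a hypothetical stand-in for a real alignment score, and real tools fit an extreme-value distribution to the shuffled scores rather than reporting a raw fraction.

```python
import random

def shuffle_pvalue(a, b, score_fn, n_shuffles=200, seed=0):
    """Estimate significance of score_fn(a, b) by shuffling b's residues:
    shuffling preserves composition but destroys residue order. Returns
    the observed score and the fraction of shuffles scoring at least
    as high (a crude empirical p-value)."""
    rng = random.Random(seed)
    observed = score_fn(a, b)
    hits = 0
    for _ in range(n_shuffles):
        sb = list(b)
        rng.shuffle(sb)
        if score_fn(a, ''.join(sb)) >= observed:
            hits += 1
    return observed, hits / n_shuffles

# Toy score: count of identical positions (stand-in for an alignment score).
match_count = lambda x, y: sum(c == d for c, d in zip(x, y))

obs, p = shuffle_pvalue("ACGTACGTAC", "ACGTACGTAC", match_count)
print(obs, p)  # identical sequences: shuffles should rarely score as high
```

A low fraction says the observed score is unlikely to arise from a random sequence of the same composition.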

Page 23:

HMM’s

These are a special case of a general CS problem: pattern recognition, or machine learning. Can one make the machine a master at recognizing or classifying certain things? There is a large family of such graphical models; this is one of the simplest.

Basic problem in MB: can one recognize different functional or structural regions in a protein from its a.a. sequence? Its fold? Or, less specifically, what are the common features of a group of proteins?

Page 24:

HMM’s, cont.

For example, there exist very many G-protein coupled receptors, which fall into seven classes. Can one classify a new protein into (a) being a GPCR, and (b) assigning its class to it from sequence data alone?

Simplest example problem: dishonest casino. You are given a sequence of observed random variables or states of the system (faces of a die), but you don’t see hidden states (e.g., whether a casino is using a fair or biased die). Notice that this kind of example has a linear structure to it, like our a.a. sequences.

Page 25:

HMM’s, cont.

The problem is: given an observed path of visible states, determine the underlying path through the hidden states. I.e., given the record of the rolls, say when the biased die was used and when the fair one. Notice that in this model we know how many visible states there are, and we assume we know how many hidden states there are.

Of course, we cannot know with certainty when the fair/biased die was used; we must seek a probable answer, and the algorithms show how to choose a most probable path through the model.

The harder problem (the biological problem) is when we do not know what the hidden states are, except where we can define them, e.g., structurally. For example, you could run through a protein's a.a. sequence and ask an HMM to decide which residues sit in alpha helices of the protein, or which lie in the cell membrane.

Page 26:

HMM’s, cont.

In the first problem, if we know all the parameters, meaning mainly the transition probabilities for the hidden Markov states, then it is easy to find a dynamic programming algorithm for finding the best path. This is the Viterbi algorithm.

[Sometimes called parsing the hidden markov model, because the method originated in speech recognition work.]

The harder problem is to figure out what the parameters are for the hidden states. This is called training the model, and is done by the Baum-Welch algorithm. This is a version of the so-called EM method, or algorithm (Expectation Maximization algorithm).

BW, or EM, is a heuristic optimization algorithm. That is, it starts from a given initial guess for the parameters and performs a calculation which is guaranteed to improve (more precisely, not worsen) the previous guess. One runs it for a while and stops when improvement falls below a suitable threshold.

Page 27:

HMM’s, cont.

Finally, a harder problem still is to “learn” the correct architecture of the model. There are packages for this in the case of protein families/ multiple sequence alignments.

In the case of structure, for example, one sees a classical problem in statistics: one can let the data directly dictate the patterns (“unsupervised learning”), or one can use expertise to guide the data. The latter allows fainter structural signals to be “heard”. Problem: is this fitting the data too tightly, making the generated model less generally applicable? Example: TMHMM and unusual, but necessary, faint signals.

Page 28:

Further Notes are available at:

http://www.math.lsa.umich.edu/~dburns/547/547syll.html

You can link from there to lectures on sequence alignment and hidden Markov models.