Probabilities and Probabilistic Models


Page 1: Probabilities and Probabilistic Models

www.bioalgorithms.infoAn Introduction to Bioinformatics Algorithms

Probabilities and Probabilistic Models

Page 2: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Probabilistic models

• A model is a system that simulates an object under consideration.

• A probabilistic model is a model that produces different outcomes with different probabilities – it can simulate a whole class of objects, assigning each an associated probability.

• In bioinformatics, the objects usually are DNA or protein sequences and a model might describe a family of related sequences.

Page 3: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Examples

1. The roll of a six-sided die – six parameters p1, p2, …, p6, where pi is the probability of rolling the number i. For probabilities, pi > 0 and

   ∑_{i=1}^{6} pi = 1.

2. Three rolls of a die: the model might be that the rolls are independent, so that the probability of a sequence such as [2, 4, 6] would be p2 p4 p6.

3. An extremely simple model of any DNA or protein sequence is a string over a 4 (nucleotide) or 20 (amino acid) letter alphabet. Let qa denote the probability that residue a occurs at a given position, at random, independently of all other residues in the sequence. Then, for a given length n, the probability of the sequence x1, x2, …, xn is

   P(x1, …, xn) = ∏_{i=1}^{n} q_{xi}.

Page 4: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Conditional, joint, and marginal probabilities

• Consider two dice D1 and D2. For j = 1, 2, assume that the probability of using die Dj is P(Dj), and for i = 1, 2, …, 6, assume that the probability of rolling an i with die Dj is P_{Dj}(i).

• In this simple two-dice model, the conditional probability of rolling an i with die Dj is

  P(i | Dj) = P_{Dj}(i).

• The joint probability of picking die Dj and rolling an i is

  P(i, Dj) = P(Dj) P(i | Dj).

• The probability of rolling an i – the marginal probability – is

  P(i) = ∑_{j=1}^{2} P(i, Dj) = ∑_{j=1}^{2} P(Dj) P(i | Dj).
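A minimal Python sketch of this two-die model; the specific probabilities below are illustrative, not taken from the slides:

P_die = {1: 0.5, 2: 0.5}                      # P(D_j): probability of picking die j (assumed)
P_roll = {1: [1/6] * 6,                       # P_{D_1}(i): die 1 is fair
          2: [0.1] * 5 + [0.5]}               # P_{D_2}(i): die 2 is loaded towards 6

def marginal(i):
    # P(i) = sum_j P(D_j) * P(i | D_j)
    return sum(P_die[j] * P_roll[j][i - 1] for j in (1, 2))

print(marginal(6))   # 0.5 * 1/6 + 0.5 * 0.5 = 0.3333...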

Page 5: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Maximum likelihood estimation

• Probabilistic models have parameters that are usually estimated from large sets of trusted examples, called a training set.

• For example, the probability qa for seeing amino acid a in a protein sequence can be estimated as the observed frequency fa of a in a database of known protein sequences, such as SWISS-PROT.

• This way of estimating models is called Maximum likelihood estimation, because it can be shown that using the observed frequencies maximizes the total probability of the training set, given the model.

• In general, given a model with parameters θ and a set of data D, the maximum likelihood estimate (MLE) for θ is the value that maximizes P(D | θ).
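A minimal sketch of this estimation in Python, assuming the training data are simply given as strings (a toy list here, rather than the SWISS-PROT database):

from collections import Counter

def mle_frequencies(sequences):
    # Maximum likelihood estimate: q_a = f_a, the observed frequency of residue a.
    counts = Counter()
    for seq in sequences:
        counts.update(seq)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

print(mle_frequencies(["ACDEFGHIK", "ACCA"]))   # toy training set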

Page 6: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Model comparison problem

• An occasionally dishonest casino uses two kinds of dice, of which 99% are fair, but 1% are loaded, so that a 6 appears 50% of the time.

• We pick up a die and roll [6, 6, 6]. This looks like a loaded die – but is it? This is an example of a model comparison problem.

• I.e., our hypothesis Dloaded is that the die is loaded. The other alternative is Dfair. Which model fits the observed data better? We want to calculate:

P(Dloaded | [6, 6, 6])

Page 7: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Prior and posterior probability

• P(Dloaded | [6, 6, 6]) is the posterior probability that the die is loaded, given the observed data.

• Note that the prior probability of this hypothesis is 1/100 – prior because it is our best guess about the die before having seen any data about it.

• The likelihood of the hypothesis Dloaded is

  P([6, 6, 6] | Dloaded) = 1/2 · 1/2 · 1/2 = 1/8.

• Posterior probability – using Bayes' theorem:

  P(X | Y) = P(Y | X) P(X) / P(Y).

Page 8: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Comparing models using Bayes’ theorem

• The probability P(Dloaded) of picking a loaded die is 0.01.

• The probability P([6, 6, 6] | Dloaded) of rolling three sixes using a loaded die is 0.5³ = 0.125.

• The total probability P([6, 6, 6]) of three sixes is
  P([6, 6, 6] | Dloaded) P(Dloaded) + P([6, 6, 6] | Dfair) P(Dfair).

• Setting X = Dloaded and Y = [6, 6, 6] in Bayes' theorem, we obtain

  P(Dloaded | [6, 6, 6]) = P([6, 6, 6] | Dloaded) P(Dloaded) / P([6, 6, 6])
                         = (0.5)³ (0.01) / ((0.5)³ (0.01) + (1/6)³ (0.99)) ≈ 0.21.

• Thus, the die is probably fair.
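The same calculation as a short Python sketch:

p_loaded, p_fair = 0.01, 0.99
lik_loaded = 0.5 ** 3            # P([6,6,6] | D_loaded)
lik_fair = (1 / 6) ** 3          # P([6,6,6] | D_fair)

posterior = lik_loaded * p_loaded / (lik_loaded * p_loaded + lik_fair * p_fair)
print(round(posterior, 2))       # 0.21 -- the die is still probably fair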

Page 9: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Biological example

• Let's assume that extracellular (ext) proteins have a slightly different composition than intracellular (int) ones. We want to use this to judge whether a new protein sequence x1, …, xn is ext or int.

• To obtain training data, classify all proteins in SWISS-PROT into ext, int and unclassifiable ones.

• Determine the frequencies f_a^ext and f_a^int of each amino acid a in ext and int proteins, respectively.

• To be able to apply Bayes' theorem, we need to determine the priors p_ext and p_int, i.e. the probability that a new (unexamined) sequence is extracellular or intracellular, respectively.

Page 10: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Biological example - cont.

• We have

  P(x | ext) = ∏_{i=1}^{n} q_{xi}^ext   and   P(x | int) = ∏_{i=1}^{n} q_{xi}^int.

• If we assume that any sequence is either extracellular or intracellular, then we have

  P(x) = p_ext P(x | ext) + p_int P(x | int).

• By Bayes' theorem, we obtain

  P(ext | x) = P(ext) P(x | ext) / P(x) = p_ext ∏_i q_{xi}^ext / (p_ext ∏_i q_{xi}^ext + p_int ∏_i q_{xi}^int),

  the posterior probability that a sequence is extracellular.

• (In reality, many transmembrane proteins have both intra- and extracellular components, and more complex models such as HMMs are appropriate.)
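A minimal Python sketch of this classifier; the alphabet and frequencies below are made up for illustration, whereas in practice q_ext, q_int and the priors would be estimated from the classified SWISS-PROT sequences:

def posterior_ext(x, q_ext, q_int, p_ext, p_int):
    # P(ext | x) by Bayes' theorem; q_ext / q_int map each residue to its frequency
    lik_ext = lik_int = 1.0
    for a in x:
        lik_ext *= q_ext[a]
        lik_int *= q_int[a]
    return p_ext * lik_ext / (p_ext * lik_ext + p_int * lik_int)

q_ext = {"A": 0.7, "C": 0.3}     # toy two-letter alphabet
q_int = {"A": 0.4, "C": 0.6}
print(posterior_ext("AACA", q_ext, q_int, p_ext=0.5, p_int=0.5))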

Page 11: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Probability vs. likelihood

• If we consider P( X | Y ) as a function of X, then this is called a probability.

• If we consider P( X | Y ) as a function of Y , then this is called a likelihood.

Page 12: Probabilities and Probabilistic Models

www.bioalgorithms.infoAn Introduction to Bioinformatics Algorithms

Sequence comparison by compression

Page 13: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Motivation

• Sequence similarity is used as a marker for homology, and homology is used to infer function.

• Sometimes, we are only interested in a numerical distance between two sequences. For example, to infer a phylogeny.

Figure adapted from http://www.inf.ethz.ch/personal/gonnet/acat2000/side2.html

Page 14: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Text- vs DNA compression

• compress, gzip or zip are routinely used to compress text files. They can also be applied to a text file containing DNA.

• E.g., for a text file F containing human chromosome 19 in FASTA format, |F| = 61 MB, but |compress(F)| = 8.5 MB.

• In a text file, 8 bits are used for each character. However, DNA consists of only 4 different bases, so 2 bits per base are enough: A = 00, C = 01, G = 10, and T = 11.

• Applying a standard compression algorithm to a file containing DNA encoded using two bits per base will usually not be able to compress the file further.
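A minimal Python sketch of this canonical 2-bit encoding (shown as a bit string for readability rather than packed bytes):

CODE = {"A": "00", "C": "01", "G": "10", "T": "11"}

def two_bit_encode(dna):
    # 2 bits per base instead of 8 bits per character
    return "".join(CODE[b] for b in dna.upper())

def two_bit_decode(bits):
    decode = {v: k for k, v in CODE.items()}
    return "".join(decode[bits[i:i + 2]] for i in range(0, len(bits), 2))

bits = two_bit_encode("gaccttcatt")
print(bits, len(bits), "bits")     # 20 bits, matching the encoding example later on
print(two_bit_decode(bits))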

Page 15: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The repetitive nature of DNA

• Take advantage of the repetitive nature of DNA!
• LINEs (Long Interspersed Nuclear Elements), SINEs (Short Interspersed Nuclear Elements).

• UCSC Genome Browser: http://genome.ucsc.edu

Page 16: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

DNA compression

• DNA sequences are very compressible, especially for higher eukaryotes: they contain many repeats of different size, with different numbers of instances and different amounts of identity.

• A first idea: While processing the DNA string from left to right, detect exact repeats and/or palindromes (reverse-complemented repeats) that possess previous instances in the already processed text, and encode them by the length and position of an earlier occurrence. For stretches of sequence for which no significant repeat is found, use two-bit encoding. (The program Biocompress is based on this idea.)
• Data structure for fast access to sequence patterns already encountered.
• Sliding window along the unprocessed sequence.

Page 17: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

DNA compression

• A second idea: Build a suffix tree for the whole sequence and use it to detect maximal repeats of some fixed minimum size. Then code all repeats as above and use two-bit encoding for bases not contained inside repeats. (The program Cfact is based on this idea.)

• Both of these algorithms are lossless, meaning that the original sequences can be precisely reconstructed from their encodings. A number of lossy algorithms also exist, which we will not discuss here.

• In the following we will discuss the GenCompress algorithm due to Xin Chen, Sam Kwong and Ming Li.

Page 18: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Edit operations

• The main idea of GenCompress is to use inexact matching, followed by edit operations. In other words, instances of inexact repeats are encoded by a reference to an earlier instance of the repeat, followed by some edit operations that modify the earlier instance to obtain the current instance.

• Three standard edit operations:
  1. Replace: (R, i, char) – replace the character at position i by character char.
  2. Insert: (I, i, char) – insert character char at position i.
  3. Delete: (D, i) – delete the character at position i.

Page 19: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Edit operations

• Two different edit operation sequences:

  (a) C C C C R C C C C C          (b) C C C C D C I C C C C

      g a c c g t c a t t              g a c c g t c a t t
      g a c c t t c a t t              g a c c t t c a t t

• There is an infinite number of ways to convert one string into another.
• Given two strings q and p, an edit transcript λ(q, p) is a list of edit operations that transforms q into p.
• E.g., in case (a) the edit transcript is λ(gaccgtcatt, gaccttcatt) = (R, 4, t),
• whereas in case (b) it is λ(gaccgtcatt, gaccttcatt) = (D, 4), (I, 4, t).
• (Positions start at 0 and are given relative to the current state of the string, as obtained by application of any preceding edit operations.)

Page 20: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Encoding DNA

1. Using the two-bit encoding method, gaccttcatt can be encoded in 20 bits:

10 00 01 01 11 11 01 00 11 11

The following three encode gaccttcatt relative to gaccgtcatt:

2. In the exact matching method we use a pair of numbers (repeat − position, repeat − length) to represent an exact repeat. We can encode gaccttcatt as (0, 4), t, (5, 5), relative to gaccgtcatt. Let 4 bits encode an integer, 2 bits encode a base and one bit to indicate whether the next part is a pair (indicating a repeat) or a base. We obtain an encoding in 21 bits:

0 0000 0100 1 11 0 0101 0101.

Page 21: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Encoding DNA

3. In the approximate matching method we can encode gaccttcatt as (0, 10), (R, 4, t) relative to gaccgtcatt. Let us encode R by 00, I by 01, D by 11, and use a single bit to indicate whether the next part is a pair or a triplet. We obtain an encoding in 18 bits:

   0 0000 1010 1 00 0100 11.

4. For approximate matching, we could also use the edit sequence (D, 4), (I, 4, t), for example, yielding the relative encoding (0, 10), (D, 4), (I, 4, t), which uses 25 bits:

   0 0000 1010 1 11 0100 1 01 0100 11.

Page 22: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

GenCompress:

• A one-pass algorithm based on approximate matching.
• For a given input string w, assume that a part v has already been compressed and the remaining part is u, with w = vu. The algorithm finds an "optimal prefix" p of u that approximately matches some substring p' of v such that p can be encoded economically. After outputting the encoding of p, remove the prefix p from u and append it to v. If no optimal prefix is found, output the next base and then move it from u to v. Repeat until u = ε.

  [Diagram: w = vu; the prefix p of u approximately matches an earlier substring p' of v.]

Page 23: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The condition C

• How do we find an “optimal prefix” p? The following condition will be used to limit the search.

• Given two numbers k and b, let p be a prefix of the unprocessed part u and q a substring of the processed part v. If |q| ≥ k, then an edit transcript λ(q, p) is said to satisfy the condition C = (k, b) for compression if its number of edit operations is at most b.

• Experimental studies indicate that C = (k, b) = (12, 3) gives good results.

• In other words, when attempting to determine the optimal prefix for compression, we will only consider repeats of length at least k that require at most b edit operations.

Page 24: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The compression gain function

• We define a compression gain function G to determine whether a particular approximate repeat q, p and edit transcript λ are beneficial for the encoding:

  G(q, p, λ) = max { 2|p| − |(i, |q|)| − w · |λ(q, p)| − c, 0 },

  where
  • p is a prefix of the unprocessed part u,
  • q is a substring of the processed part v of length |q| that starts at position i,
  • 2|p| is the number of bits that the two-bit encoding would use,
  • |(i, |q|)| is the encoding size of the pair (i, |q|),
  • w is the cost of encoding an edit operation,
  • |λ(q, p)| is the number of edit operations in λ(q, p),
  • and c is the overhead proportional to the size of the control bits.
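A minimal sketch of this gain function in Python; the bit-cost arguments (q_pair_bits, w, c) are stand-ins for whatever encoding sizes a concrete implementation would use:

def gain(p_len, q_pair_bits, num_edits, w, c):
    # G(q, p, lambda) = max(2|p| - |(i, |q|)| - w * |lambda(q, p)| - c, 0)
    return max(2 * p_len - q_pair_bits - w * num_edits - c, 0)

# hypothetical numbers: a 10-base prefix copied with one edit operation
print(gain(p_len=10, q_pair_bits=8, num_edits=1, w=7, c=1))   # 4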

Page 25: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The GenCompress algorithm

Input: A DNA sequence w, parameter C = (k, b)
Output: A lossless compressed encoding of w
Initialization: u ← w and v ← ε

while u ≠ ε do
    Search for an optimal prefix p of u
    if an optimal prefix p with repeat q in v is found then
        Encode the repeat representation (i, |q|), where i is the position of q in v,
        together with the shortest edit transcript λ(q, p). Output the code.
    else
        Set p equal to the first character of u, encode and output it.
    Remove the prefix p from u and append it to v
end

Page 26: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Implementing the optimal prefix search

• An exhaustive search for the optimal prefix is too slow.

• Lemma: An optimal prefix p always ends right before a mismatch.

• Lemma: Let λ(q, p) be an optimal edit sequence from q to p. If qi is copied onto pj in λ, then λ restricted to (q0:i, p0:j) is an optimal transcript from q0:i = q0 q1 … qi to p0:j = p0 p1 … pj.

• The search can therefore be simplified as follows:
  • To find an approximate match for p in v, we look for an exact match of the first l bases of p, where l is a fixed small number.
  • An integer is stored at each position i of v that is determined by the word of length l starting at i.

Page 27: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Implementing the optimal prefix search

1. Let w = vu, where v has already been compressed.
2. Find all occurrences of u0:l in v, for some small l. For each such occurrence, try to extend it, allowing mismatches, limited by the above observations and condition C. Return the prefix p with the largest compression gain value G.

Page 28: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Performance of GenCompress

• Since any nucleotide can be encoded canonically using 2 bits, we define the compression ratio of a compression algorithm as

  1 − |O| / (2|I|),

  where |I| is the number of bases in the input DNA sequence and |O| is the length (number of bits) of the output sequence.

• Alternatively, if our DNA string is already encoded canonically, we can define the compression ratio of a compression algorithm as

  1 − |O| / |I|,

  where |I| is the number of bits in the canonical encoding of the input DNA sequence and |O| is the length (number of bits) of the output sequence.

Page 29: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Performance of GenCompress

Page 30: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Recent approaches Encoding of non-repeat regions

• Order-2 arithmetic encoding – the adaptive probability of a symbol is computed from the context (the last 2 symbols) after which it appears (3 symbols code one amino acid).

• Context tree weighting coding (CTW) – a tree containing all processed substrings of length k is built dynamically and each path (string) in the tree is weighted by its probability – these probabilities are used in an arithmetic encoder

[Diagram: input → CTW model estimator → arithmetic encoder → encoded data.]

Page 31: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Recent approaches: Encoding of numbers

• Fibonacci encoding
  • Any positive integer can be uniquely expressed as a sum of distinct Fibonacci numbers (1, 2, 3, 5, 8, 13, 21, …) such that no two consecutive Fibonacci numbers are used.
  • By adding a 1 after the bit corresponding to the largest Fibonacci number used in the sum, the representation becomes self-delimiting.

                   1     2      3       4       8        18
  Fibonacci        11    011    0011    1011    000011   0001011
  1-shifted Fib.   1     011    0011    00011   001011   01010011
  3-shifted Fib.   001   010    011     100     00011    00001011
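A minimal Python sketch of the plain Fibonacci encoding (it reproduces the Fibonacci row of the table above; the shifted variants are not implemented here):

def fibonacci_encode(n):
    # Zeckendorf representation, least significant Fibonacci number first,
    # with an extra trailing 1 that makes the code self-delimiting.
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    while fibs[-1] > n:                       # drop Fibonacci numbers larger than n
        fibs.pop()
    bits = ["0"] * len(fibs)
    remainder = n
    for i in range(len(fibs) - 1, -1, -1):    # greedy: take the largest usable number
        if fibs[i] <= remainder:
            bits[i] = "1"
            remainder -= fibs[i]
    return "".join(bits) + "1"

for n in (1, 2, 3, 4, 8, 18):
    print(n, fibonacci_encode(n))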

Page 32: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Recent approaches: Encoding of numbers

• k-shifted Fibonacci encoding
  • Usually there are many small numbers and few large numbers to encode.
  • n ∈ {1, …, 2^k − 1} – normal binary encoding.
  • n ≥ 2^k – encoded as 0^k followed by the Fibonacci encoding of n − (2^k − 1).

                   1     2      3       4       8        18
  Fibonacci        11    011    0011    1011    000011   0001011
  1-shifted Fib.   1     011    0011    00011   001011   01010011
  3-shifted Fib.   001   010    011     100     00011    00001011

Page 33: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Recent approaches

• E.g. DNAPack
  • uses dynamic programming for the selection of segments (copied, reversed and/or modified)
  • O(n³) is still too slow, hence heuristics are used

Page 34: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Conditional compression

• Given a sequence z, compress a sequence w relative to z.
• Let Compress(w | z) denote the length of the compression of w, given z. Similarly, let Compress(w) = Compress(w | ε), where ε denotes the empty word. [Compress here is not the unix compression program compress.]
• In general, Compress(w | z) ≠ Compress(z | w).
• For example, the Biocompress-2 program produces:
  • CompressRatio(brucella | rochalima) = 55.95%, and
  • CompressRatio(rochalima | brucella) = 34.56%.
• If z = w, then Compress(w | z) is very small.
• If z and w are completely independent, then Compress(w | z) ≈ Compress(w).

Page 35: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Evolutionary distance

• How do we define an evolutionary distance between strings based on conditional compression?
• E.g.,

  D(w, z) = (Compress(w | z) + Compress(z | w)) / 2

  is used in the literature, but without a good justification.
• We need a symmetric measure!
• We can use Kolmogorov complexity.

Page 36: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Kolmogorov complexity

• Let K(w | z) denote the Kolmogorov complexity of string w, given z. Informally, this is the length of the shortest program that outputs w, given z. Similarly, set K(w) = K(w | ε).
• The following result is due to Kolmogorov and Levin:
• Theorem: Within an additive logarithmic factor,

  K(w | z) + K(z) = K(z | w) + K(w).

• This implies

  K(w) − K(w | z) = K(z) − K(z | w).

• Normalizing by the sequence length, we obtain the following symmetric map – "relatedness":

  R(w, z) = (K(w) − K(w | z)) / K(wz) = (K(z) − K(z | w)) / K(wz).

Page 37: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

A symmetric measure of similarity

  R(w, z) = (K(w) − K(w | z)) / K(wz) = (K(z) − K(z | w)) / K(wz)

• A distance between two sequences w and z:

  D(w, z) = 1 − R(w, z).

• Unfortunately, K(w) and K(w | z) are not computable!
• Approximation:
  • K(w) := Compress(w) := |GenCompress(w)|
  • K(w | z) := Compress(w | z) := Compress(zw) − Compress(z) = |GenCompress(zw)| − |GenCompress(z)|
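A minimal Python sketch of this approximation; zlib is used here only as a stand-in compressor, since GenCompress itself is not generally available as a library:

import zlib

def compress_len(s):
    # stand-in for |GenCompress(s)|: length in bytes of the zlib-compressed string
    return len(zlib.compress(s.encode(), 9))

def relatedness(w, z):
    # R(w, z) ~ (K(w) - K(w | z)) / K(wz), with K(w | z) ~ Compress(zw) - Compress(z)
    k_w_given_z = compress_len(z + w) - compress_len(z)
    return (compress_len(w) - k_w_given_z) / compress_len(w + z)

def distance(w, z):
    return 1 - relatedness(w, z)

a = "acgt" * 200
b = "acgt" * 100 + "ttgc" * 100
print(round(distance(a, a), 3), round(distance(a, b), 3))   # identical strings are closer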

Page 38: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Application to genomic sequences

Page 39: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Application to genomic sequences

Page 40: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

• 16S rRNA in prokaryotes corresponds to 18S rRNA in eukaryotes

Application to genomic sequences

[Figure: compression-based clustering of rRNA sequences; species shown include H. butylicus, A. urina, H. glauca, R. globiformis, L. sp. nakagiri, U. crescens and H. gomorrense.]

Page 41: Probabilities and Probabilistic Models

www.bioalgorithms.infoAn Introduction to Bioinformatics Algorithms

Finding Regulatory Motifs in DNA

Sequences

Micro-array experiments indicate that sets of genes are regulated by common “transcription factors (TFs)”. These attach to the DNA upstream of the coding sequence, at certain binding sites. Such a site displays a short motif of DNA that is specific to a given type of TF.

Page 42: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Random Sample

atgaccgggatactgataccgtatttggcctaggcgtacacattagataaacgtatgaagtacgttagactcggcgccgccg

acccctattttttgagcagatttagtgacctggaaaaaaaatttgagtacaaaacttttccgaatactgggcataaggtaca

tgagtatccctgggatgacttttgggaacactatagtgctctcccgatttttgaatatgtaggatcattcgccagggtccga

gctgagaattggatgaccttgtaagtgttttccacgcaatcgcgaaccaacgcggacccaaaggcaagaccgataaaggaga

tcccttttgcggtaatgtgccgggaggctggttacgtagggaagccctaacggacttaatggcccacttagtccacttatag

gtcaatcatgttcttgtgaatggatttttaactgagggcatagaccgcttggcgcacccaaattcagtgtgggcgagcgcaa

cggttttggcccttgttagaggcccccgtactgatggaaactttcaattatgagagagctaatctatcgcgtgcgtgttcat

aacttgagttggtttcgaaaatgctctggggcacatacaagaggagtcttccttatcagttaatgctgtatgacactatgta

ttggcccattggctaaaagcccaacttgacaaatggaagatagaatccttgcatttcaacgtatgccgaaccgaaagggaag

ctggtgagcaacgacagattcttacgtgcattagctcgcttccggggatctaatagcacgaagcttctgggtactgatagca

Page 43: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Implanting Motif AAAAAAAAGGGGGGG

atgaccgggatactgatAAAAAAAAGGGGGGGggcgtacacattagataaacgtatgaagtacgttagactcggcgccgccg

acccctattttttgagcagatttagtgacctggaaaaaaaatttgagtacaaaacttttccgaataAAAAAAAAGGGGGGGa

tgagtatccctgggatgacttAAAAAAAAGGGGGGGtgctctcccgatttttgaatatgtaggatcattcgccagggtccga

gctgagaattggatgAAAAAAAAGGGGGGGtccacgcaatcgcgaaccaacgcggacccaaaggcaagaccgataaaggaga

tcccttttgcggtaatgtgccgggaggctggttacgtagggaagccctaacggacttaatAAAAAAAAGGGGGGGcttatag

gtcaatcatgttcttgtgaatggatttAAAAAAAAGGGGGGGgaccgcttggcgcacccaaattcagtgtgggcgagcgcaa

cggttttggcccttgttagaggcccccgtAAAAAAAAGGGGGGGcaattatgagagagctaatctatcgcgtgcgtgttcat

aacttgagttAAAAAAAAGGGGGGGctggggcacatacaagaggagtcttccttatcagttaatgctgtatgacactatgta

ttggcccattggctaaaagcccaacttgacaaatggaagatagaatccttgcatAAAAAAAAGGGGGGGaccgaaagggaag

ctggtgagcaacgacagattcttacgtgcattagctcgcttccggggatctaatagcacgaagcttAAAAAAAAGGGGGGGa

Page 44: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Where is the Implanted Motif?

atgaccgggatactgataaaaaaaagggggggggcgtacacattagataaacgtatgaagtacgttagactcggcgccgccg

acccctattttttgagcagatttagtgacctggaaaaaaaatttgagtacaaaacttttccgaataaaaaaaaaggggggga

tgagtatccctgggatgacttaaaaaaaagggggggtgctctcccgatttttgaatatgtaggatcattcgccagggtccga

gctgagaattggatgaaaaaaaagggggggtccacgcaatcgcgaaccaacgcggacccaaaggcaagaccgataaaggaga

tcccttttgcggtaatgtgccgggaggctggttacgtagggaagccctaacggacttaataaaaaaaagggggggcttatag

gtcaatcatgttcttgtgaatggatttaaaaaaaaggggggggaccgcttggcgcacccaaattcagtgtgggcgagcgcaa

cggttttggcccttgttagaggcccccgtaaaaaaaagggggggcaattatgagagagctaatctatcgcgtgcgtgttcat

aacttgagttaaaaaaaagggggggctggggcacatacaagaggagtcttccttatcagttaatgctgtatgacactatgta

ttggcccattggctaaaagcccaacttgacaaatggaagatagaatccttgcataaaaaaaagggggggaccgaaagggaag

ctggtgagcaacgacagattcttacgtgcattagctcgcttccggggatctaatagcacgaagcttaaaaaaaaggggggga

Page 45: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Implanting Motif AAAAAAAAGGGGGGG with Four Mutations

atgaccgggatactgatAgAAgAAAGGttGGGggcgtacacattagataaacgtatgaagtacgttagactcggcgccgccg

acccctattttttgagcagatttagtgacctggaaaaaaaatttgagtacaaaacttttccgaatacAAtAAAAcGGcGGGa

tgagtatccctgggatgacttAAAAtAAtGGaGtGGtgctctcccgatttttgaatatgtaggatcattcgccagggtccga

gctgagaattggatgcAAAAAAAGGGattGtccacgcaatcgcgaaccaacgcggacccaaaggcaagaccgataaaggaga

tcccttttgcggtaatgtgccgggaggctggttacgtagggaagccctaacggacttaatAtAAtAAAGGaaGGGcttatag

gtcaatcatgttcttgtgaatggatttAAcAAtAAGGGctGGgaccgcttggcgcacccaaattcagtgtgggcgagcgcaa

cggttttggcccttgttagaggcccccgtAtAAAcAAGGaGGGccaattatgagagagctaatctatcgcgtgcgtgttcat

aacttgagttAAAAAAtAGGGaGccctggggcacatacaagaggagtcttccttatcagttaatgctgtatgacactatgta

ttggcccattggctaaaagcccaacttgacaaatggaagatagaatccttgcatActAAAAAGGaGcGGaccgaaagggaag

ctggtgagcaacgacagattcttacgtgcattagctcgcttccggggatctaatagcacgaagcttActAAAAAGGaGcGGa

Page 46: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Where is the Motif???

atgaccgggatactgatagaagaaaggttgggggcgtacacattagataaacgtatgaagtacgttagactcggcgccgccg

acccctattttttgagcagatttagtgacctggaaaaaaaatttgagtacaaaacttttccgaatacaataaaacggcggga

tgagtatccctgggatgacttaaaataatggagtggtgctctcccgatttttgaatatgtaggatcattcgccagggtccga

gctgagaattggatgcaaaaaaagggattgtccacgcaatcgcgaaccaacgcggacccaaaggcaagaccgataaaggaga

tcccttttgcggtaatgtgccgggaggctggttacgtagggaagccctaacggacttaatataataaaggaagggcttatag

gtcaatcatgttcttgtgaatggatttaacaataagggctgggaccgcttggcgcacccaaattcagtgtgggcgagcgcaa

cggttttggcccttgttagaggcccccgtataaacaaggagggccaattatgagagagctaatctatcgcgtgcgtgttcat

aacttgagttaaaaaatagggagccctggggcacatacaagaggagtcttccttatcagttaatgctgtatgacactatgta

ttggcccattggctaaaagcccaacttgacaaatggaagatagaatccttgcatactaaaaaggagcggaccgaaagggaag

ctggtgagcaacgacagattcttacgtgcattagctcgcttccggggatctaatagcacgaagcttactaaaaaggagcgga

Page 47: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Why Is Finding the (15,4) Motif Difficult?

atgaccgggatactgatAgAAgAAAGGttGGGggcgtacacattagataaacgtatgaagtacgttagactcggcgccgccg

acccctattttttgagcagatttagtgacctggaaaaaaaatttgagtacaaaacttttccgaatacAAtAAAAcGGcGGGa

tgagtatccctgggatgacttAAAAtAAtGGaGtGGtgctctcccgatttttgaatatgtaggatcattcgccagggtccga

gctgagaattggatgcAAAAAAAGGGattGtccacgcaatcgcgaaccaacgcggacccaaaggcaagaccgataaaggaga

tcccttttgcggtaatgtgccgggaggctggttacgtagggaagccctaacggacttaatAtAAtAAAGGaaGGGcttatag

gtcaatcatgttcttgtgaatggatttAAcAAtAAGGGctGGgaccgcttggcgcacccaaattcagtgtgggcgagcgcaa

cggttttggcccttgttagaggcccccgtAtAAAcAAGGaGGGccaattatgagagagctaatctatcgcgtgcgtgttcat

aacttgagttAAAAAAtAGGGaGccctggggcacatacaagaggagtcttccttatcagttaatgctgtatgacactatgta

ttggcccattggctaaaagcccaacttgacaaatggaagatagaatccttgcatActAAAAAGGaGcGGaccgaaagggaag

ctggtgagcaacgacagattcttacgtgcattagctcgcttccggggatctaatagcacgaagcttActAAAAAGGaGcGGa

AgAAgAAAGGttGGG
..|..|||.|..|||
cAAtAAAAcGGcGGG

Page 48: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Challenge Problem (Pevzner and Sze)

• Find a motif in a sample of
  – 20 "random" sequences (e.g. 600 nt long),
  – each sequence containing an implanted pattern of length 15,
  – each pattern appearing with 4 mismatches, as a (15,4)-motif.

Page 49: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Combinatorial Gene Regulation

• A microarray experiment showed that when gene X is knocked out, 20 other genes are not expressed

• How can one gene have such drastic effects?

Page 50: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Regulatory Proteins

• Gene X encodes a regulatory protein, a.k.a. a transcription factor (TF)

• The 20 unexpressed genes rely on gene X’s TF to induce transcription

• A single TF may regulate multiple genes

Page 51: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Regulatory Regions

• Every gene contains a regulatory region (RR) typically stretching 100-1000 bp upstream of the transcriptional start site

• Located within the RR are the Transcription Factor Binding Sites (TFBS), also known as motifs, specific for a given transcription factor

• TFs influence gene expression by binding to a specific location in the respective gene’s regulatory region - TFBS

• So finding the same motif in multiple genes’ regulatory regions suggests a regulatory relationship amongst those genes

Page 52: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Motifs and Transcriptional Start Sites

[Figure: five genes, each with a similar motif upstream of its start: ATCCCG, TTCCGG, ATCCCG, ATGCCG, ATGCCC.]

• A TFBS can be located anywhere within the Regulatory Region.

• TFBS may vary slightly across different regulatory regions since non-essential bases could mutate

Page 53: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Transcription Factors and Motifs

Page 54: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Identifying Motifs: Complications

• We do not know the motif sequence

• We do not know where it is located relative to the gene's start

• Motifs can differ slightly from one gene to the next

• How to discern it from “random” motifs?

Page 55: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

A Motif Finding Analogy

• The Motif Finding Problem is similar to the problem posed by Edgar Allan Poe (1809 – 1849) in his Gold Bug story

Page 56: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Gold Bug Problem

• Given a secret message:

53++!305))6*;4826)4+.)4+);806*;48!8`60))85;]8*:+*8!83(88)5*!; 46(;88*96*?;8)*+(;485);5*!2:*+(;4956*2(5*-4)8`8*; 4069285);)6!8)4++;1(+9;48081;8:8+1;48!85;4)485!528806*81(+9;48;(88;4(+?34;48)4+;161;:188;+?;

• Decipher the message encrypted in the fragment

Page 57: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Hints for The Gold Bug Problem

• Additional hints:
  • The encrypted message is in English
  • Each symbol corresponds to one letter of the English alphabet
  • No punctuation marks are encoded

Page 58: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Gold Bug Problem: Symbol Counts

• Naive approach to solving the problem:
  • Count the frequency of each symbol in the encrypted message
  • Find the frequency of each letter of the alphabet in the English language
  • Compare the frequencies from the previous steps, try to find a correlation, and map the symbols to letters of the alphabet

Page 59: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Symbol Frequencies in the Gold Bug Message

• Gold Bug Message:

  Symbol     8  ;  4  )  +  *  5  6  (  !  1  0  2  9  3  :  ?  `  -  ]  .
  Frequency 34 25 19 16 15 14 12 11  9  8  7  6  5  5  4  4  3  2  1  1  1

• English Language (most frequent to least frequent):

  e t a o i n s r h l d c u m f p g w y b v k x j q z

Page 60: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Gold Bug Message Decoding: First Attempt

• By simply mapping the most frequent symbols to the most frequent letters of the alphabet:

sfiilfcsoorntaeuroaikoaiotecrntaeleyrcooestvenpinelefheeosnlt

arhteenmrnwteonihtaesotsnlupnihtamsrnuhsnbaoeyentacrmuesotorl

eoaiitdhimtaecedtepeidtaelestaoaeslsueecrnedhimtaetheetahiwfa

taeoaitdrdtpdeetiwt

• The result does not make sense

Page 61: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Gold Bug Problem: l-tuple count

• A better approach:
  • Examine frequencies of l-tuples: combinations of 2 symbols, 3 symbols, etc.
  • "The" is the most frequent 3-tuple in English and ";48" is the most frequent 3-tuple in the encrypted text
  • Make inferences about unknown symbols by examining other frequent l-tuples

Page 62: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Gold Bug Problem: the ;48 clue

• Mapping “the” to “;48” and substituting all occurrences of the symbols:

53++!305))6*the26)h+.)h+)te06*the!e`60))e5t]e*:+*e!e3(ee)5*!t

h6(tee*96*?te)*+(the5)t5*!2:*+(th956*2(5*-h)e`e*th0692e5)t)6!e

)h++t1(+9the0e1te:e+1the!e5th)he5!52ee06*e1(+9thet(eeth(+?3ht

he)h+t161t:1eet+?t

Page 63: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Gold Bug Message Decoding: Second Attempt

• Make inferences:

53++!305))6*the26)h+.)h+)te06*the!e`60))e5t]e*:+*e!e3(ee)5*!th6(tee*96*?te)*+(the5)t5*!2:*+(th956*2(5*-h)e`e*th0692e5)t)6!e)h++t1(+9the0e1te:e+1the!e5th)he5!52ee06*e1(+9thet(eeth(+?3hthe)h+t161t:1eet+?t

• "thet(ee" most likely means "the tree"
• Infer "(" = "r"
• "th(+?3h" becomes "thr+?3h"
• Can we guess "+" and "?"?

Page 64: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Gold Bug Problem: The Solution

• After figuring out all the mappings, the final message is:

agoodglassinthebishopshostelinthedevilsseatwenyonedegreesandt

hirteenminutesnortheastandbynorthmainbranchseventhlimbeastside

shootfromthelefteyeofthedeathsheadabeelinefromthetreethrought

heshotfiftyfeetout

• Punctuation is important:

A GOOD GLASS IN THE BISHOP’S HOSTEL IN THE DEVIL’S SEAT,

TWENTY ONE DEGREES AND THIRTEEN MINUTES NORTHEAST AND BY NORTH,

MAIN BRANCH SEVENTH LIMB, EAST SIDE, SHOOT FROM THE LEFT EYE OF

THE DEATH’S HEAD A BEE LINE FROM THE TREE THROUGH THE SHOT,

FIFTY FEET OUT.

Page 65: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Solving the Gold Bug Problem

Prerequisites to solve the problem:

• Need to know the relative frequencies of single letters, and combinations of two and three letters in English

• Knowledge of all the words in the English dictionary is highly desired to make accurate inferences

Page 66: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Motif Finding and The Gold Bug Problem: Similarities

• Nucleotides in motifs encode a message in the "genetic" language. Symbols in "The Gold Bug" encode a message in English.

• In order to solve either problem, we analyze the frequencies of patterns in the DNA / Gold Bug message.

• Knowledge of established regulatory motifs makes the Motif Finding problem simpler. Knowledge of the words in the English dictionary helps to solve the Gold Bug problem.

Page 67: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Similarities (cont’d)

• Motif Finding:
  • In order to solve the problem, we analyze the frequencies of patterns in the nucleotide sequences

• Gold Bug Problem:
  • In order to solve the problem, we analyze the frequencies of patterns in the text written in English

Page 68: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Similarities (cont’d)

• Motif Finding:
  • Knowledge of established motifs reduces the complexity of the problem

• Gold Bug Problem:
  • Knowledge of the words in the dictionary is highly desirable

Page 69: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Motif Finding and The Gold Bug Problem: Differences

Motif Finding is harder than the Gold Bug problem:

• We don't have a complete dictionary of motifs
• The "genetic" language does not have a standard "grammar"
• Only a small fraction of nucleotide sequences encode motifs; the size of the data is enormous

Page 70: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Motif Finding Problem

• Given a random sample of DNA sequences:

cctgatagacgctatctggctatccacgtacgtaggtcctctgtgcgaatctatgcgtttccaaccat

agtactggtgtacatttgatacgtacgtacaccggcaacctgaaacaaacgctcagaaccagaagtgc

aaacgtacgtgcaccctctttcttcgtggctctggccaacgagggctgatgtataagacgaaaatttt

agcctccgatgtaagtcatagctgtaactattacctgccacccctattacatcttacgtacgtataca

ctgttatacaacgcgtcatggcggggtatgcgttttggtcgtcgtacgctcgatcgttaacgtacgtc

• Find the pattern that is implanted in each of the individual sequences, namely, the motif

Page 71: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Motif Finding Problem (cont’d)

• Additional information:

• The hidden sequence is of length 8

• The pattern is not exactly the same in each sequence because random point mutations may occur in the sequences

Page 72: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Motif Finding Problem (cont’d)

• The patterns revealed with no mutations:

cctgatagacgctatctggctatccacgtacgtaggtcctctgtgcgaatctatgcgtttccaaccat

agtactggtgtacatttgatacgtacgtacaccggcaacctgaaacaaacgctcagaaccagaagtgc

aaacgtacgtgcaccctctttcttcgtggctctggccaacgagggctgatgtataagacgaaaatttt

agcctccgatgtaagtcatagctgtaactattacctgccacccctattacatcttacgtacgtataca

ctgttatacaacgcgtcatggcggggtatgcgttttggtcgtcgtacgctcgatcgttaacgtacgtc

Consensus String: acgtacgt

Page 73: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Motif Finding Problem (cont’d)

• The patterns with 2 point mutations:

cctgatagacgctatctggctatccaGgtacTtaggtcctctgtgcgaatctatgcgtttccaaccat

agtactggtgtacatttgatCcAtacgtacaccggcaacctgaaacaaacgctcagaaccagaagtgc

aaacgtTAgtgcaccctctttcttcgtggctctggccaacgagggctgatgtataagacgaaaatttt

agcctccgatgtaagtcatagctgtaactattacctgccacccctattacatcttacgtCcAtataca

ctgttatacaacgcgtcatggcggggtatgcgttttggtcgtcgtacgctcgatcgttaCcgtacgGc

Page 74: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Motif Finding Problem (cont’d)

• The patterns with 2 point mutations:

cctgatagacgctatctggctatccaGgtacTtaggtcctctgtgcgaatctatgcgtttccaaccat

agtactggtgtacatttgatCcAtacgtacaccggcaacctgaaacaaacgctcagaaccagaagtgc

aaacgtTAgtgcaccctctttcttcgtggctctggccaacgagggctgatgtataagacgaaaatttt

agcctccgatgtaagtcatagctgtaactattacctgccacccctattacatcttacgtCcAtataca

ctgttatacaacgcgtcatggcggggtatgcgttttggtcgtcgtacgctcgatcgttaCcgtacgGc

Can we still find the motif, now that we have 2 mutations?

Page 75: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Defining Motifs

• To define a motif, let us say we know where the motif starts in the sequence

• The motif start positions in their sequences can be represented as s = (s1,s2,s3,…,st)

Page 76: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Motifs: Profiles and Consensus

Alignment    a G g t a c T t
             C c A t a c g t
             a c g t T A g t
             a c g t C c A t
             C c g t a c g G
             _______________
Profile    A 3 0 1 0 3 1 1 0
           C 2 4 0 0 1 4 0 0
           G 0 1 4 0 0 0 3 1
           T 0 0 0 5 1 0 1 4
             _______________
Consensus    A C G T A C G T

• Line up the patterns by their start indexes s = (s1, s2, …, st)
• Construct a profile matrix with the frequencies of each nucleotide in the columns
• The consensus nucleotide in each position is the one with the highest count in its column

Page 77: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Consensus

• Think of the consensus as an "ancestor" motif, from which mutated motifs emerged

• The distance between a real motif and the consensus sequence is generally less than that for two real motifs

Page 78: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Evaluating Motifs

• We have a guess about the consensus sequence, but how “good” is this consensus?

• Need to introduce a scoring function to compare different guesses and choose the “best” one.

• t – number of sample DNA sequences
• n – length of each DNA sequence
• DNA – sample of DNA sequences (t × n array)
• l – length of the motif (l-mer)
• si – starting position of an l-mer in sequence i
• s = (s1, s2, …, st) – array of motif starting positions

Page 79: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Parameters

cctgatagacgctatctggctatccaGgtacTtaggtcctctgtgcgaatctatgcgtttccaaccat

agtactggtgtacatttgatCcAtacgtacaccggcaacctgaaacaaacgctcagaaccagaagtgc

aaacgtTAgtgcaccctctttcttcgtggctctggccaacgagggctgatgtataagacgaaaatttt

agcctccgatgtaagtcatagctgtaactattacctgccacccctattacatcttacgtCcAtataca

ctgttatacaacgcgtcatggcggggtatgcgttttggtcgtcgtacgctcgatcgttaCcgtacgGc

l = 8,  t = 5,  n = 69
s = (s1, s2, s3, s4, s5) = (26, 21, 3, 56, 60)
DNA – the five sequences above

Page 80: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Scoring Motifs

• Given s = (s1, … st ) and DNA:

Score(s, DNA) = ∑_{i=1}^{l} max_{k ∈ {A, C, G, T}} count(k, i)

             a G g t a c T t
             C c A t a c g t
             a c g t T A g t
             a c g t C c A t
             C c g t a c g G
             _______________
           A 3 0 1 0 3 1 1 0
           C 2 4 0 0 1 4 0 0
           G 0 1 4 0 0 0 3 1
           T 0 0 0 5 1 0 1 4
             _______________
Consensus    a c g t a c g t
Score        3+4+4+5+3+4+3+4 = 30
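A minimal Python sketch of this scoring function (starting positions are 0-based here, unlike the 1-based positions used on the slides):

def score(s, dna, l):
    # Score(s, DNA): for each column, count the most frequent nucleotide and sum up
    motifs = [seq[start:start + l] for seq, start in zip(dna, s)]
    return sum(max(column.count(b) for b in "acgt") for column in zip(*motifs))

dna = ["cacgtacgta", "gacgtacgtt", "tacttacgtc"]
print(score([1, 1, 1], dna, 8))   # toy example: 3 short sequences, l = 8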

Page 81: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Motif Finding Problem

• If starting positions s=(s1, s2,… st) are given, finding consensus is easy even with mutations in the sequences because we can simply construct the profile to find the motif (consensus)

• But… the starting positions s are usually not given. How can we find the “best” profile matrix?

Page 82: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Motif Finding Problem: Formulation

• Goal: Given a set of DNA sequences, find a set of l-mers, one from each sequence, that maximizes the consensus score

• Input: A t × n matrix of DNA, and l, the length of the pattern to find

• Output: An array of t starting positions s = (s1, s2, …, st) maximizing Score(s, DNA)

Page 83: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Motif Finding Problem: Brute Force Solution

• Compute the scores for each possible combination of starting positions s

• The best score will determine the best profile and the consensus pattern in DNA

• The goal is to maximize Score(s, DNA) by varying the starting positions si, where

  si ∈ {1, …, n − l + 1},  i = 1, …, t

Page 84: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

BruteForceMotifSearch

1. BruteForceMotifSearch(DNA, t, n, l)
2.   bestScore ← 0
3.   for each s = (s1, s2, …, st) from (1, 1, …, 1) to (n − l + 1, …, n − l + 1)
4.     if Score(s, DNA) > bestScore
5.       bestScore ← Score(s, DNA)
6.       bestMotif ← (s1, s2, …, st)
7.   return bestMotif
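A minimal Python sketch of this exhaustive search, reusing the score() function sketched earlier (0-based starting positions):

from itertools import product

def brute_force_motif_search(dna, t, n, l):
    # try every combination of starting positions; (n - l + 1)^t candidates in total
    best_score, best_motif = 0, None
    for s in product(range(n - l + 1), repeat=t):
        current = score(s, dna, l)
        if current > best_score:
            best_score, best_motif = current, s
    return best_motif, best_score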

Page 85: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Running Time of BruteForceMotifSearch

• Varying (n − l + 1) positions in each of t sequences, we are looking at (n − l + 1)^t sets of starting positions

• For each set of starting positions, the scoring function performs l operations, so the complexity is l (n − l + 1)^t = O(l n^t)

• That means that for t = 8, n = 1000, l = 10 we must perform approximately 10^20 computations – it would take billions of years

Page 86: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Median String Problem

• Given a set of t DNA sequences find a pattern that appears in all t sequences with the minimum number of mutations

• This pattern will be the motif

Page 87: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Hamming Distance

• Hamming distance dH(v, w) is the number of nucleotide pairs that do not match when v and w are aligned. For example:

  dH(AAAAAA, ACAAAC) = 2

Page 88: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Total Distance: An Example

• Given v = "acgtacgt" and starting positions s, compare v against the l-mer x starting at si in each sequence (mismatches to v are shown in capitals):

cctgatagacgctatctggctatccacgtacAtaggtcctctgtgcgaatctatgcgtttccaaccat   dH(v, x) = 1
agtactggtgtacatttgatacgtacgtacaccggcaacctgaaacaaacgctcagaaccagaagtgc   dH(v, x) = 0
aaaAgtCcgtgcaccctctttcttcgtggctctggccaacgagggctgatgtataagacgaaaatttt   dH(v, x) = 2
agcctccgatgtaagtcatagctgtaactattacctgccacccctattacatcttacgtacgtataca   dH(v, x) = 0
ctgttatacaacgcgtcatggcggggtatgcgttttggtcgtcgtacgctcgatcgttaacgtaGgtc   dH(v, x) = 1

• TotalDistance(v, DNA) = 1 + 0 + 2 + 0 + 1 = 4

Page 89: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Total Distance: Definition

• For each DNA sequence i, compute all dH(v, x), where x is an l-mer with starting position si (1 ≤ si ≤ n − l + 1)

• Find the minimum of dH(v, x) among all l-mers in sequence i

• TotalDistance(v, DNA) is the sum of these minimum Hamming distances over all DNA sequences i

• Equivalently, TotalDistance(v, DNA) = min_s ∑_i dH(v, x_{si}), where the minimum is taken over all sets of starting positions s = (s1, s2, …, st)
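A minimal Python sketch of dH and TotalDistance:

def hamming(v, w):
    # d_H(v, w): number of mismatched positions
    return sum(a != b for a, b in zip(v, w))

def total_distance(v, dna):
    # sum over sequences of the minimum Hamming distance between v and any l-mer
    l = len(v)
    return sum(min(hamming(v, seq[i:i + l]) for i in range(len(seq) - l + 1))
               for seq in dna)

print(hamming("AAAAAA", "ACAAAC"))   # 2, as in the Hamming distance example above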

Page 90: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

The Median String Problem: Formulation

• Goal: Given a set of DNA sequences, find a median string

• Input: A t × n matrix DNA, and l, the length of the pattern to find

• Output: A string v of l nucleotides that minimizes TotalDistance(v, DNA) over all strings of that length

Page 91: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Median String Search Algorithm

1. MedianStringSearch(DNA, t, n, l)
2.   bestWord ← AAA…A
3.   bestDistance ← ∞
4.   for each l-mer s from AAA…A to TTT…T
5.     if TotalDistance(s, DNA) < bestDistance
6.       bestDistance ← TotalDistance(s, DNA)
7.       bestWord ← s
8.   return bestWord
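A minimal Python sketch of this exhaustive scan over all 4^l candidate l-mers, reusing total_distance() from the earlier sketch:

from itertools import product

def median_string_search(dna, l):
    best_word, best_distance = None, float("inf")
    for letters in product("acgt", repeat=l):        # all 4^l l-mers
        word = "".join(letters)
        d = total_distance(word, dna)
        if d < best_distance:
            best_distance, best_word = d, word
    return best_word, best_distance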

Page 92: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Motif Finding Problem = Median String Problem

• The Motif Finding Problem is a maximization problem while the Median String Problem is a minimization problem

• However, the Motif Finding problem and Median String problem are computationally equivalent

• Need to show that minimizing TotalDistance is equivalent to maximizing Score

Page 93: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

We are looking for the same thing

Alignment    a G g t a c T t
             C c A t a c g t
             a c g t T A g t
             a c g t C c A t
             C c g t a c g G
             _______________
Profile    A 3 0 1 0 3 1 1 0
           C 2 4 0 0 1 4 0 0
           G 0 1 4 0 0 0 3 1
           T 0 0 0 5 1 0 1 4
             _______________

Consensus        a c g t a c g t
Score            3+4+4+5+3+4+3+4
TotalDistance    2+1+1+0+2+1+2+1
Sum              5 5 5 5 5 5 5 5

• At any column i: Score_i + TotalDistance_i = t
• Because there are l columns: Score + TotalDistance = l · t
• Rearranging: Score = l · t − TotalDistance
• l · t is constant, so minimizing TotalDistance (the right side) is equivalent to maximizing Score (the left side)

Page 94: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Motif Finding Problem vs. Median String Problem

• Why bother reformulating the Motif Finding problem into the Median String problem?

• The Motif Finding Problem needs to examine all the combinations for s. That is (n − l + 1)^t combinations!!!

• The Median String Problem needs to examine all 4^l combinations for v. This number is relatively smaller

Page 95: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Motif Finding: Improving the Running Time

Recall BruteForceMotifSearch:

1. BruteForceMotifSearch(DNA, t, n, l)
2.   bestScore ← 0
3.   for each s = (s1, s2, …, st) from (1, 1, …, 1) to (n − l + 1, …, n − l + 1)
4.     if Score(s, DNA) > bestScore
5.       bestScore ← Score(s, DNA)
6.       bestMotif ← (s1, s2, …, st)
7.   return bestMotif

Idea: branch-and-bound searching

Page 96: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Structuring the Search

• How can we perform the line

for each s=(s1,s2 , . . ., st) from (1,1 . . . 1) to (n-l+1, . . ., n-l+1) ?

• We need a method for efficiently structuring and navigating the many possible motifs

• This is not very different than exploring all t-digit numbers

Page 97: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Median String: Improving the Running Time

1. MedianStringSearch(DNA, t, n, l)
2.   bestWord ← AAA…A
3.   bestDistance ← ∞
4.   for each l-mer s from AAA…A to TTT…T
5.     if TotalDistance(s, DNA) < bestDistance
6.       bestDistance ← TotalDistance(s, DNA)
7.       bestWord ← s
8.   return bestWord

Page 98: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Structuring the Search

• For the Median String Problem we need to consider all 4^l possible l-mers (each of length l):

  aa…aa
  aa…ac
  aa…ag
  aa…at
  …
  tt…tt

• How do we organize this search?

Page 99: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Alternative Representation of the Search Space

• Let A = 1, C = 2, G = 3, T = 4
• Then the sequences from AA…A to TT…T become:

  11…11
  11…12
  11…13
  11…14
  …
  44…44

• Notice that the sequences above simply list all l-digit numbers, as if we were counting in base 4 without using 0 as a digit

Page 100: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Linked List

• Suppose l = 2

aa ac ag at ca cc cg ct ga gc gg gt ta tc tg tt

• Need to visit all the predecessors of a sequence before visiting the sequence itself

Start

Page 101: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Linked List (cont’d)

• Linked list is not the most efficient data structure for motif finding

• Let’s try grouping the sequences by their prefixes

aa ac ag at ca cc cg ct ga gc gg gt ta tc tg tt

Page 102: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Search Tree

a- c- g- t-

aa ac ag at ca cc cg ct ga gc gg gt ta tc tg tt

--

root

Page 103: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Analyzing Search Trees

• Characteristics of the search trees:
  • The sequences are contained in its leaves
  • The parent of a node is the prefix of its children

• How can we move through the tree?

Page 104: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Moving through the Search Trees

• Four common moves in a search tree that we are about to explore:
  • Move to the next leaf
  • Visit all the leaves
  • Visit the next node
  • Bypass the children of a node

Page 105: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Visit the Next Leaf

1. NextLeaf( a,L, k ) // a : the array of digits2. for i L to 1 // L: length of the array3. if ai < k // k : max digit value4. ai ai + 15. return a6. ai 17. return a

Given a current leaf a , we need to compute the “next” leaf:

Page 106: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

NextLeaf (cont’d)

• The algorithm is common addition in radix k:

• Increment the least significant digit

• “Carry the one” to the next digit position when the digit is at maximal value

Page 107: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

NextLeaf: Example

• Moving to the next leaf:

1- 2- 3- 4-

11 12 13 14 21 22 23 24 31 32 33 34 41 42 43 44

--Current Location

Page 108: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

NextLeaf: Example (cont’d)

• Moving to the next leaf:

1- 2- 3- 4-

11 12 13 14 21 22 23 24 31 32 33 34 41 42 43 44

--Next Location

Page 109: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Visit All Leaves

• Printing all the leaves in ascending order:

1. AllLeaves(L, k)               // L: length of the sequence
2.   a ← (1, …, 1)               // k: max digit value
3.   while forever               // a: array of digits
4.     output a
5.     a ← NextLeaf(a, L, k)
6.     if a = (1, …, 1)
7.       return

Page 110: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Visit All Leaves: Example

• Moving through all the leaves in order:

1- 2- 3- 4-

11 12 13 14 21 22 23 24 31 32 33 34 41 42 43 44

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

--

Order of steps

Page 111: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Depth First Search

• So we can search leaves

• How about searching all vertices of the tree?

• We can do this with a depth first search

Page 112: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Visit the Next Vertex

1. NextVertex(a, i, L, k)        // a: the array of digits
2.   if i < L                    // i: prefix length
3.     a_{i+1} ← 1               // L: max length
4.     return (a, i + 1)         // k: max digit value
5.   else
6.     for j ← L to 1
7.       if aj < k
8.         aj ← aj + 1
9.         return (a, j)
10.    return (a, 0)

Page 113: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Example

• Moving to the next vertex:

1- 2- 3- 4-

11 12 13 14 21 22 23 24 31 32 33 34 41 42 43 44

--Current Location

Page 114: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Example

• Moving to the next vertices:

1- 2- 3- 4-

11 12 13 14 21 22 23 24 31 32 33 34 41 42 43 44

--

Location after 5 next vertex moves

Page 115: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Bypass Move

• Given a prefix (internal vertex), find next vertex after skipping all its children

1. Bypass(a, i, L, k)            // a: array of digits
2.   for j ← i to 1              // i: prefix length
3.     if aj < k                 // L: maximum length
4.       aj ← aj + 1             // k: max digit value
5.       return (a, j)
6.   return (a, 0)
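A minimal Python sketch of NextVertex and Bypass (the digit array is a 0-indexed list here, while the returned prefix length stays 1-based as in the pseudocode):

def next_vertex(a, i, L, k):
    # pre-order step through the k-ary search tree
    if i < L:
        a[i] = 1                       # descend: extend the prefix with the smallest digit
        return a, i + 1
    for j in range(L - 1, -1, -1):     # otherwise backtrack and increment
        if a[j] < k:
            a[j] += 1
            return a, j + 1
    return a, 0                        # traversal finished

def bypass(a, i, L, k):
    # skip the whole subtree below the current prefix of length i
    for j in range(i - 1, -1, -1):
        if a[j] < k:
            a[j] += 1
            return a, j + 1
    return a, 0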

Page 116: Probabilities and Probabilistic Models

An Introduction to Bioinformatics Algorithms www.bioalgorithms.info

Bypass Move: Example

• Bypassing the descendants of “2-”:

[Figure: the same tree; the current location, the internal vertex 2-, is marked.]

Page 117: Probabilities and Probabilistic Models


Example

• Bypassing the descendants of “2-”:

[Figure: the same tree; the next location after bypassing the descendants of 2- is marked.]

Page 118: Probabilities and Probabilistic Models


Revisiting Brute Force Search

• Now that we have a method for navigating the tree, let's look again at BruteForceMotifSearch

Page 119: Probabilities and Probabilistic Models


Brute Force Search Again

BruteForceMotifSearchAgain(DNA, t, n, l)
  s ← (1, 1, ..., 1)
  bestScore ← Score(s, DNA)
  while forever
    s ← NextLeaf(s, t, n − l + 1)
    if Score(s, DNA) > bestScore
      bestScore ← Score(s, DNA)
      bestMotif ← (s_1, s_2, ..., s_t)
  return bestMotif

Page 120: Probabilities and Probabilistic Models


Can We Do Better?

• A starting-position vector s = (s_1, s_2, ..., s_t) may have a weak profile for its first i positions (s_1, s_2, ..., s_i)
• Every row of the alignment can add at most l to Score
• Optimistically, the remaining (t − i) rows (s_{i+1}, ..., s_t) add at most (t − i) · l to Score(s, i, DNA)
• If Score(s, i, DNA) + (t − i) · l < bestScore, there is no point searching the vertices of the current subtree
• Use Bypass() to skip them (a small numeric check of this bound is sketched below)
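A tiny illustration of this pruning test; the numbers below are invented purely for the example:

def can_prune(partial_score, i, t, l, best_score):
    # The remaining (t - i) rows can add at most (t - i) * l to the score.
    return partial_score + (t - i) * l < best_score

# With t = 5, l = 8, a partial score of 20 over the first i = 3 rows and a best
# score of 40: 20 + (5 - 3) * 8 = 36 < 40, so the whole subtree can be bypassed.
# can_prune(20, 3, 5, 8, 40) -> True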

Page 121: Probabilities and Probabilistic Models


Branch and Bound Algorithm for Motif Search

• Since each level of the tree goes deeper into the search, discarding a prefix discards all of its following branches

• This saves us from looking at (n − l + 1)^(t−i) leaves
• Use NextVertex() and Bypass() to navigate the tree

Page 122: Probabilities and Probabilistic Models


Pseudocode for Branch and Bound Motif Search

BranchAndBoundMotifSearch(DNA, t, n, l)
  s ← (1, ..., 1)
  bestScore ← 0
  i ← 1
  while i > 0
    if i < t
      optimisticScore ← Score(s, i, DNA) + (t − i) · l
      if optimisticScore < bestScore
        (s, i) ← Bypass(s, i, t, n − l + 1)
      else
        (s, i) ← NextVertex(s, i, t, n − l + 1)
    else
      if Score(s, DNA) > bestScore
        bestScore ← Score(s, DNA)
        bestMotif ← (s_1, s_2, s_3, ..., s_t)
      (s, i) ← NextVertex(s, i, t, n − l + 1)
  return bestMotif

Page 123: Probabilities and Probabilistic Models


Median String Search Improvements

• Recall the computational differences between motif search and median string search

• The Motif Finding Problem needs to examine all (n − l + 1)^t combinations of s.

• The Median String Problem needs to examine only 4^l combinations of v. This number is relatively small.

• We want to use the median string algorithm with the branch-and-bound trick!

Page 124: Probabilities and Probabilistic Models


Branch and Bound Applied to Median String Search

• Note that if the total distance for a prefix is greater than that for the best word so far, i.e.

  TotalDistance(prefix, DNA) > bestDistance,

  there is no point exploring the remaining part of the word

• We can eliminate that branch with Bypass() instead of exploring it further

Page 125: Probabilities and Probabilistic Models


Bounded Median String Search

BranchAndBoundMedianStringSearch(DNA, t, n, l)
  s ← (1, ..., 1)
  bestDistance ← ∞
  i ← 1
  while i > 0
    if i < l
      prefix ← nucleotide string corresponding to the first i symbols of s
      optimisticDistance ← TotalDistance(prefix, DNA)
      if optimisticDistance > bestDistance
        (s, i) ← Bypass(s, i, l, 4)
      else
        (s, i) ← NextVertex(s, i, l, 4)
    else
      word ← nucleotide string corresponding to s
      if TotalDistance(word, DNA) < bestDistance
        bestDistance ← TotalDistance(word, DNA)
        bestWord ← word
      (s, i) ← NextVertex(s, i, l, 4)
  return bestWord

Page 126: Probabilities and Probabilistic Models


Improving the Bounds

• Given an l-mer w, divided into two parts at point i:
  • u: the prefix w_1, ..., w_i
  • v: the suffix w_{i+1}, ..., w_l

• Find minimum distance for u in a sequence

• No instances of u in the sequence have distance less than the minimum distance

• Note this doesn’t tell us anything about whether u is part of any motif. We only get a minimum distance for prefix u

Page 127: Probabilities and Probabilistic Models


Improving the Bounds (cont’d)

• Repeating the process for the suffix v gives us a minimum distance for v

• Since u and v are substrings of w, the minimum distance for u plus the minimum distance for v cannot exceed the minimum distance for w; the sum is therefore a valid lower bound

Page 128: Probabilities and Probabilistic Models


Better Bounds

Page 129: Probabilities and Probabilistic Models


Better Bounds (cont’d)

• If d(prefix) + d(suffix) > bestDistance:

• The motif w (the prefix concatenated with the suffix) cannot achieve a better (lower) distance than d(prefix) + d(suffix)

• In this case, we can ByPass()

Page 130: Probabilities and Probabilistic Models


Better Bounded Median String Search

ImprovedBranchAndBoundMedianString(DNA, t, n, l)
  s ← (1, 1, ..., 1)
  bestDistance ← ∞
  i ← 1
  while i > 0
    if i < l
      prefix ← nucleotide string corresponding to (s_1, s_2, ..., s_i)
      optimisticPrefixDistance ← TotalDistance(prefix, DNA)
      if optimisticPrefixDistance < bestSubstring[i]
        bestSubstring[i] ← optimisticPrefixDistance
      if l − i < i
        optimisticSuffixDistance ← bestSubstring[l − i]
      else
        optimisticSuffixDistance ← 0
      if optimisticPrefixDistance + optimisticSuffixDistance > bestDistance
        (s, i) ← Bypass(s, i, l, 4)
      else
        (s, i) ← NextVertex(s, i, l, 4)
    else
      word ← nucleotide string corresponding to (s_1, s_2, ..., s_l)
      if TotalDistance(word, DNA) < bestDistance
        bestDistance ← TotalDistance(word, DNA)
        bestWord ← word
      (s, i) ← NextVertex(s, i, l, 4)
  return bestWord

Page 131: Probabilities and Probabilistic Models


More on the Motif Problem

• Exhaustive Search and Median String are both exact algorithms

• They always find the optimal solution, though they may be too slow for practical tasks

• Many algorithms sacrifice optimal solution for speed

Page 132: Probabilities and Probabilistic Models


CONSENSUS: Greedy Motif Search

• CONSENSUS finds the two closest l-mers in sequences 1 and 2 and forms a 2 × l alignment matrix with Score(s, 2, DNA)

• At each of the following t − 2 iterations, CONSENSUS finds a "best" l-mer in sequence i from the perspective of the already constructed (i − 1) × l alignment matrix for the first (i − 1) sequences

• In other words, it finds an l-mer in sequence i maximizing Score(s, i, DNA) under the assumption that the first (i − 1) l-mers have already been chosen

• CONSENSUS sacrifices the optimal solution for speed; in fact, the bulk of the time is spent locating the first two l-mers (a rough sketch follows below)
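A rough Python sketch of this greedy strategy. The consensus_score helper below is a simple column-consensus score standing in for Score(s, i, DNA), so treat this as an illustration of the idea rather than the exact CONSENSUS program:

from collections import Counter

def consensus_score(lmers):
    # Sum over columns of the count of the most common nucleotide in each column.
    return sum(Counter(col).most_common(1)[0][1] for col in zip(*lmers))

def consensus_greedy(dna, l):
    # Greedy CONSENSUS-style search; assumes all sequences have the same length.
    n = len(dna[0])
    best_pair, best = None, -1
    for i in range(n - l + 1):            # seed: best pair from sequences 1 and 2
        for j in range(n - l + 1):
            cand = [dna[0][i:i + l], dna[1][j:j + l]]
            sc = consensus_score(cand)
            if sc > best:
                best, best_pair = sc, cand
    chosen = best_pair
    for s in dna[2:]:                     # then extend greedily, one sequence at a time
        best_lmer, best = None, -1
        for i in range(n - l + 1):
            sc = consensus_score(chosen + [s[i:i + l]])
            if sc > best:
                best, best_lmer = sc, s[i:i + l]
        chosen.append(best_lmer)
    return chosen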

Page 133: Probabilities and Probabilistic Models


Some Motif Finding Programs

• CONSENSUS: Hertz, Stormo (1989)
• GibbsDNA: Lawrence et al. (1993)
• MEME: Bailey, Elkan (1995)
• RandomProjections: Buhler, Tompa (2002)
• MULTIPROFILER: Keich, Pevzner (2002)
• MITRA: Eskin, Pevzner (2002)
• PatternBranching: Price, Pevzner (2003)

Page 134: Probabilities and Probabilistic Models


Planted Motif Challenge

• Input: n sequences, each of length m

• Output: a motif M of length l; the planted variants of interest have Hamming distance d from M

Page 135: Probabilities and Probabilistic Models


When is the Problem Solvable?

• Assume that the background sequences are independent and identically distributed (i.i.d.)

• The probability that a given l-mer C occurs with up to d substitutions at a given position of a random sequence is

  p(l, d) = \sum_{i=0}^{d} \binom{l}{i} \left(\tfrac{3}{4}\right)^{i} \left(\tfrac{1}{4}\right)^{l-i}

• The expected number of length-l motifs that occur with up to d substitutions at least once in each of the t random length-n sequences is

  E(l, d, t, n) = 4^{l} \left(1 - \left(1 - p(l, d)\right)^{n-l+1}\right)^{t}

• This is a very rough estimate: overlapping motifs are not modelled, and the assumption of an i.i.d. background distribution is usually incorrect.
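A small Python sketch of these two estimates (math.comb is from the standard library; the function names are mine):

from math import comb

def p_match(l, d):
    # Probability that a fixed l-mer occurs with up to d substitutions
    # at a fixed position of an i.i.d. random DNA sequence.
    return sum(comb(l, i) * (3/4)**i * (1/4)**(l - i) for i in range(d + 1))

def expected_motifs(l, d, t, n):
    # Expected number of l-mers that occur with up to d substitutions
    # at least once in each of t random sequences of length n.
    p = p_match(l, d)
    return 4**l * (1 - (1 - p)**(n - l + 1))**t

# expected_motifs(9, 2, 20, 600) comes out above 1, while
# expected_motifs(10, 2, 20, 600) is below 10**-7, matching the
# (9, 2) versus (10, 2) discussion on the next slide.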

Page 136: Probabilities and Probabilistic Models


When is the Problem Solvable?

• 20 random sequences of length 600 are expected to contain more than one (9, 2)-motif by chance, whereas the chance of finding a random (10, 2)-motif is less than 10^−7.

• So the (9, 2) problem is impossible to solve, because "random motifs" are as likely as the planted motif. For the (10, 2) problem, however, the probability of a random motif occurring is very small.

Page 137: Probabilities and Probabilistic Models


How to proceed?

• Exhaustive search?

• Run time is high

Page 138: Probabilities and Probabilistic Models


Heuristic search

• Searching the space of starting positions:
  • Gibbs sampling
  • the Projection Algorithm

• Searching the space of motifs:
  • Pattern Branching
  • Profile Branching

Page 139: Probabilities and Probabilistic Models


Notation

• t sequences s_1, …, s_t, each of length n

• l > 0 is an integer; the goal is to find an l-mer in each of the sequences such that the "similarity" between these l-mers is maximized

• Let (a_1, . . . , a_t) be a list of l-mers contained in s_1, . . . , s_t. These form a t × l alignment matrix.

• Let X(a) = (x_ij) denote the corresponding 4 × l profile, where x_ij denotes the frequency with which we observe nucleotide i at position j. Usually we add pseudocounts to ensure that X does not contain any zeros (Laplace correction).

Page 140: Probabilities and Probabilistic Models


Greedy profile search

• The probability that a given l-mer a was generated by a given profile X is

  P(a \mid X) = \prod_{i=1}^{l} x_{a_i, i}

• Any l-mer that is similar to the consensus string of X will have a "high" probability, while dissimilar ones will have "low" probabilities.

Page 141: Probabilities and Probabilistic Models


Greedy profile search

• Example profile X (positions 1–9):

  pos:   1    2    3    4    5    6    7    8    9
  A    .33  .60  .08   0    0   .49  .71  .06  .15
  C    .37  .13  .04   0    0   .03  .07  .05  .19
  G    .18  .14  .81   1    0   .45  .12  .84  .20
  T    .12  .13  .07   0    1   .03  .09  .05  .46

• With P(a \mid X) = \prod_{i=1}^{l} x_{a_i, i}:
  • P(CAGGTAAGT | X) = 0.02417294365920
  • P(TCCGTCCCA | X) = 0.00000000982800
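A quick check of P(a | X) against the two values above, with the profile transcribed from the table (variable and function names are mine):

X = {
    'A': [.33, .60, .08, 0, 0, .49, .71, .06, .15],
    'C': [.37, .13, .04, 0, 0, .03, .07, .05, .19],
    'G': [.18, .14, .81, 1, 0, .45, .12, .84, .20],
    'T': [.12, .13, .07, 0, 1, .03, .09, .05, .46],
}

def prob(a, X):
    # P(a | X): product over positions i of the profile entry for base a[i] at column i.
    p = 1.0
    for i, base in enumerate(a):
        p *= X[base][i]
    return p

print(prob("CAGGTAAGT", X))   # about 0.0242
print(prob("TCCGTCCCA", X))   # about 9.8e-9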

Page 142: Probabilities and Probabilistic Models


Greedy profile search

• For a given profile X and a sequence s, we can find the X-most probable l-mer in s:

  a^{*} = \arg\max_{a \in s} P(a \mid X)

• Algorithm: "start with a random seed profile and then attempt to improve on it using a greedy strategy"

• Given sequences s_1, …, s_t of length n, randomly select one l-mer a_i from each sequence s_i and construct an initial profile X. For each sequence s_i, determine the X-most probable l-mer a'_i. Set X equal to the profile obtained from a'_1, …, a'_t and repeat.

• This does not work well:
  • the number of possible seeds is huge
  • in each iteration, the greedy profile search can change any or all t of the profile l-mers, and thus jumps around in the search space
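A hedged sketch of the X-most probable l-mer and of the greedy iteration. prob() is the function from the previous sketch; profile_from() is a hypothetical helper that builds a pseudocount-corrected 4 × l profile from a list of l-mers:

import random

def most_probable_lmer(s, l, X):
    # The l-mer of s with the highest probability P(a | X) under profile X.
    best, best_p = None, -1.0
    for i in range(len(s) - l + 1):
        a = s[i:i + l]
        p = prob(a, X)
        if p > best_p:
            best, best_p = a, p
    return best

def greedy_profile_search(seqs, l, iterations=50):
    # Seed with one random l-mer per sequence, then repeatedly rebuild the
    # profile and replace every l-mer by its X-most probable counterpart.
    lmers = []
    for s in seqs:
        i = random.randrange(len(s) - l + 1)
        lmers.append(s[i:i + l])
    for _ in range(iterations):
        X = profile_from(lmers)        # hypothetical helper, see lead-in
        lmers = [most_probable_lmer(s, l, X) for s in seqs]
    return lmers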

Page 143: Probabilities and Probabilistic Models


Gibbs Sampling

• "Start with a random seed profile, then change one l-mer per iteration":
  • Randomly select an l-mer a_i in each input sequence s_i.
  • Randomly select one input sequence s_h.
  • Build a 4 × l profile X from a_1, …, a_{h−1}, a_{h+1}, …, a_t.
  • Compute background frequencies Q from the input sequences s_1, …, s_{h−1}, s_{h+1}, …, s_t.
  • For each l-mer a ∈ s_h, compute

      w(a) = P(a \mid X) / P(a \mid Q)

  • Set a_h = a for some a ∈ s_h chosen randomly with probability

      w(a) / \sum_{a' \in s_h} w(a')

  • Repeat until "converged".
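A sketch of one Gibbs update under these definitions. profile_from() and background_from() are hypothetical helpers (a pseudocount-corrected 4 × l profile and a background base distribution); they are not functions defined in the text:

import random
from math import prod

def gibbs_step(seqs, lmers, l):
    h = random.randrange(len(seqs))                       # sequence to resample
    others = [a for j, a in enumerate(lmers) if j != h]
    X = profile_from(others)                              # profile without a_h
    Q = background_from(seqs[:h] + seqs[h + 1:])          # background frequencies
    candidates = [seqs[h][i:i + l] for i in range(len(seqs[h]) - l + 1)]
    weights = [prod(X[b][i] for i, b in enumerate(a)) /   # P(a | X)
               prod(Q[b] for b in a)                      # P(a | Q)
               for a in candidates]
    lmers[h] = random.choices(candidates, weights=weights, k=1)[0]
    return lmers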

Page 144: Probabilities and Probabilistic Models


Gibbs Sampling

• Gibbs sampling often works well in practice.
• Difficulties:
  • finding subtle motifs.
  • its performance degrades if the input sequences are skewed, that is, if some nucleotides occur much more often than others; the algorithm may be attracted to low-complexity regions like AAAAAA...
• Modifications:
  • use "relative entropies" rather than frequencies.
  • use "phase shifts": the algorithm can get trapped in local optima that are shifted up or down a few positions from the strongest pattern. To address this, every Mth iteration the algorithm tries shifting some a_i up or down a few positions.

Page 145: Probabilities and Probabilistic Models


The Projection Algorithm

• “choose k of l positions at random, then use the k selected positions of each l-mer x as a hash function h(x). When a sufficient number of l-mers hash to the same bucket, it is likely to be enriched for the planted motif”

[Figure: one l-mer from each of the sequences s1 through s4, with the k selected hash positions marked; the highlighted l-mers hash to the same bucket.]

Page 146: Probabilities and Probabilistic Models


The Projection Algorithm

• Choose distinct k of the l positions at random. For an l-mer x, the hash function h(x) is obtained by concatenating the selected k residues of x.

• If M is the (unknown) motif, then we call the bucket with hash value h(M) the planted bucket.

• The key idea is that, if k < l − d, then there is a good chance that some of the t planted instances of M will be hashed to the planted bucket, namely all planted instances for which the k hash positions and the d substituted positions are disjoint.

• So there is a good chance that the planted bucket will be enriched for the planted motif and will contain more entries than an average bucket.
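A minimal sketch of this hashing phase (function and variable names are mine):

import random
from collections import defaultdict

def project_buckets(seqs, l, k):
    # Hash every l-mer of every sequence by the residues at k random positions.
    positions = sorted(random.sample(range(l), k))
    buckets = defaultdict(list)
    for s_idx, s in enumerate(seqs):
        for i in range(len(s) - l + 1):
            x = s[i:i + l]
            h = ''.join(x[p] for p in positions)
            buckets[h].append((s_idx, i))     # remember where the l-mer came from
    return buckets, positions

Buckets that collect at least s entries are then handed to the refinement step described later.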

Page 147: Probabilities and Probabilistic Models


The Projection Algorithm – an example

• s1 = cagtaat
• s2 = ggaactt
• s3 = aagcaca

• With the (unknown) (3, 1)-motif M = aaa, hashing with k = 2 using the first 2 of the l = 3 positions produces the following hash table:

  h(x)  positions              h(x)  positions    h(x)  positions
  aa    (1,5), (2,3), (3,1)    cg    -            ta    (1,4)
  ac    (2,4), (3,5)           ct    (2,5)        tc    -
  ag    (1,2), (3,2)           ga    (2,2)        tg    -
  at    (1,6)                  gc    (3,3)        tt    (2,6)
  ca    (1,1), (3,4), (3,6)    gg    (2,1)
  cc    -                      gt    (1,2)

• The motif M is planted at positions (1, 5), (2, 3) and (3, 1), and in this example all three instances hash to the planted bucket h(M) = aa.

Page 148: Probabilities and Probabilistic Models


The Projection Algorithm – finding the planted bucket

• The algorithm does not know which bucket is the planted bucket.
• It attempts to recover the motif from every bucket that contains at least s elements, where s is a threshold set so as to identify buckets that look as if they may be the planted bucket.
• In other words, the first part of the Projection algorithm is a heuristic for finding promising sets of l-mers in the sequences. It must be followed by a refinement step that attempts to generate a motif from each such set.
• The algorithm has three main parameters:
  • the projection size k,
  • the bucket (inspection) threshold s, and
  • the number of independent trials m.

Page 149: Probabilities and Probabilistic Models


The Projection Algorithm – projection size

• The algorithm should hash a significant number of instances of the motif into the planted bucket, while avoiding contamination of the planted bucket by random background l-mers.
• Which k should we choose so that the average bucket contains less than one random l-mer?
• Since we are hashing t(n − l + 1) l-mers into 4^k buckets, choosing k such that 4^k > t(n − l + 1) makes the average bucket contain less than one random l-mer.
• For example, in the Challenge (15, 4)-Problem, with t = 20 and n = 600, we must choose k to satisfy k < l − d = 15 − 4 = 11 and

  k > log(t(n − l + 1)) / log 4 = log(20 · (600 − 15 + 1)) / log 4 ≈ 6.76
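A small helper that reproduces this calculation (names are mine):

from math import log, floor

def projection_size(t, n, l, d):
    # Smallest k with 4**k > t * (n - l + 1), capped at l - d - 1
    # (the fallback used on the next slide when both constraints cannot hold).
    k_min = floor(log(t * (n - l + 1), 4)) + 1
    k_max = l - d - 1
    return min(k_min, k_max)

# projection_size(20, 600, 15, 4) -> 7, since log4(20 * 586) is about 6.76.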

Page 150: Probabilities and Probabilistic Models


The Projection Algorithm – bucket threshold

• In the Challenge Problem, a bucket threshold of s = 3 or 4 is practical, as we should not expect too many instances to hash to the same bucket in a reasonable number of trials.

• If the total amount of sequence is very large, it may be impossible to choose k to satisfy both k < l − d and 4^k > t(n − l + 1). In this case, set k = l − d − 1 (as large as possible) and set the bucket threshold s to twice the average bucket size t(n − l + 1)/4^k.

Page 151: Probabilities and Probabilistic Models


The Projection Algorithm – number of independent trials

• Our goal: choose m so that the probability is at least q = 0.95 that the planted bucket contains s or more planted motif instances in at least one of the m trials.

• Let p'(l, d, k) be the probability that a given planted motif instance hashes to the planted bucket, that is:

  p'(l, d, k) = \binom{l − d}{k} / \binom{l}{k}

• Then the probability that fewer than s planted instances hash to the planted bucket in a given trial is B_{t, p'(l,d,k)}(s). Here, B_{t,p}(s) is the probability that there are fewer than s successes in t independent Bernoulli trials, each having probability p of success (the binomial cumulative distribution function).

Page 152: Probabilities and Probabilistic Models


The Projection Algorithm – number of independent trials (cont'd)

• If the algorithm is run for m trials, the probability that s or more planted instances hash to the planted bucket in at least one trial is:

  1 − \left(B_{t, p'(l,d,k)}(s)\right)^{m} ≥ q

• To satisfy this inequality, choose:

  m = \left\lceil \log(1 − q) / \log\!\left(B_{t, p'(l,d,k)}(s)\right) \right\rceil

• Using this criterion for m, the choices for k and s above require at most thousands of trials, and usually many fewer, to produce a bucket containing sufficiently many instances of the planted motif.
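A sketch that computes p'(l, d, k) and the number of trials m directly from these formulas (names are mine):

from math import comb, log, ceil

def hash_prob(l, d, k):
    # Probability that a planted instance hashes to the planted bucket:
    # the k hash positions must avoid all d substituted positions.
    return comb(l - d, k) / comb(l, k)

def num_trials(t, l, d, k, s, q=0.95):
    # Smallest m with 1 - B(s)**m >= q, where B(s) is the probability of
    # fewer than s successes in t Bernoulli trials with success probability p'.
    p = hash_prob(l, d, k)
    B = sum(comb(t, i) * p**i * (1 - p)**(t - i) for i in range(s))
    return ceil(log(1 - q) / log(B))

# For the (15, 4) challenge with t = 20, k = 7 and s = 4 this gives on the order
# of a few hundred trials, consistent with the "at most thousands" estimate above.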

Page 153: Probabilities and Probabilistic Models


Projection Algorithm

1. Choose k of the l positions at random
2. Hash all l-mers of the given sequences into buckets
3. Inspect all buckets with at least s entries and refine the motifs found there
4. Repeat m times; return the motif with the best score

Page 154: Probabilities and Probabilistic Models


Motif refinement

• We have already found k of the planted motif residues. These, together with the remaining l − k residues, should provide a strong signal that makes it easy to obtain the motif in only a few iterations of refinement.

• We process each bucket of size at least s to obtain a candidate motif. Each of these candidates is "refined", and the best refinement is returned as the final solution.

• Candidate motifs are refined using the expectation maximization (EM) algorithm, based on the following probabilistic model:
  • An instance of some length-l motif occurs exactly once per input sequence.
  • Instances are generated from a 4 × l weight matrix model W, whose (i, j)-th entry gives the probability that base i occurs in position j of an instance, independent of its other positions.
  • The remaining n − l residues in each sequence are chosen randomly and independently according to some background distribution.

Page 155: Probabilities and Probabilistic Models


Motif refinement

• Let S be a set of t input sequences, and let P be the background distribution. EM-based refinement seeks a weight matrix model W* that maximizes the likelihood ratio

  \Pr(S \mid W^{*}, P) \,/\, \Pr(S \mid P),

  that is, a motif model that explains the input sequences much better than P alone.

• The position at which the motif occurs in each sequence is not fixed a priori, which makes the computation of W difficult, because Pr(S | W, P) must be summed over all possible locations of the instances.

• To address this, the EM algorithm uses an iterative calculation that, given an initial guess W_0 at the motif model, converges linearly to a locally maximum-likelihood model in the neighborhood of W_0.

• The initial guess W_h for a bucket h is formed as follows: set W_h(i, j) to the frequency of base i among the j-th positions of all l-mers in h.

Page 156: Probabilities and Probabilistic Models


How to search motif space?

Start from random sample strings

Search motif space for the star

Page 157: Probabilities and Probabilistic Models


Search small neighborhoods

Page 158: Probabilities and Probabilistic Models


Exhaustive local search

A lot of work, most of it unnecessary

Page 159: Probabilities and Probabilistic Models


Best Neighbor

Branch from the seed strings

Find the best neighbor (the one with the highest score)

Don't consider branches where the upper bound is not as good as the best score so far

Page 160: Probabilities and Probabilistic Models


Scoring

• PatternBranching uses a total distance score:
• For each sequence S_i in the sample S = {S_1, . . . , S_n}, let

  d(A, S_i) = \min\{\, d(A, P) : P \text{ an } l\text{-mer of } S_i \,\}

• Then the total distance of A from the sample is

  d(A, S) = \sum_{S_i \in S} d(A, S_i)

• For a pattern A, let Neighbor(A) be the set of patterns that differ from A in exactly 1 position.
• We define BestNeighbor(A) as the pattern B ∈ Neighbor(A) with the lowest total distance d(B, S).
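A short Python sketch of the total distance and of BestNeighbor (names are mine):

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def total_distance(A, sample):
    # d(A, S): for each sequence, the minimum Hamming distance between A and
    # any l-mer of that sequence, summed over all sequences in the sample.
    l = len(A)
    return sum(min(hamming(A, s[i:i + l]) for i in range(len(s) - l + 1))
               for s in sample)

def best_neighbor(A, sample, alphabet="ACGT"):
    # The pattern differing from A in exactly one position with lowest d(B, S).
    best, best_d = None, float("inf")
    for i in range(len(A)):
        for c in alphabet:
            if c != A[i]:
                B = A[:i] + c + A[i + 1:]
                d = total_distance(B, sample)
                if d < best_d:
                    best, best_d = B, d
    return best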

Page 161: Probabilities and Probabilistic Models


PatternBranching Algorithm

Page 162: Probabilities and Probabilistic Models


PatternBranching Performance

• PatternBranching is faster than other pattern-based algorithms

• Motif Challenge Problem:
  • a sample of n = 20 sequences
  • each N = 600 nucleotides long
  • an implanted pattern of length l = 15
  • k = 4 mutations

Page 163: Probabilities and Probabilistic Models


PMS (Planted Motif Search)

• Generate all possible l-mers from each input sequence S_i. Let C_i be the collection of these l-mers.

• Example:

AAGTCAGGAGT

Ci = 3-mers:

AAG AGT GTC TCA CAG AGG GGA GAG AGT

Page 164: Probabilities and Probabilistic Models


All patterns at Hamming distance d = 1

AAGTCAGGAGT

AAG AGT GTC TCA CAG AGG GGA GAG AGT

CAG CGT ATC ACA AAG CGG AGA AAG CGT

GAG GGT CTC CCA GAG TGG CGA CAG GGT

TAG TGT TTC GCA TAG GGG TGA TAG TGT

ACG ACT GAC TAA CCG ACG GAA GCG ACT

AGG ATT GCC TGA CGG ATG GCA GGG ATT

ATG AAT GGC TTA CTG AAG GTA GTG AAT

AAC AGA GTA TCC CAA AGA GGC GAA AGA

AAA AGC GTG TCG CAC AGT GGG GAC AGC

AAT AGG GTT TCT CAT AGC GGT GAT AGG

Page 165: Probabilities and Probabilistic Models


Sort the lists

AAG AGT GTC TCA CAG AGG GGA GAG AGT

AAA AAT ATC ACA AAG AAG AGA AAG AAT

AAC ACT CTC CCA CAA ACG CGA CAG ACT

AAT AGA GAC GCA CAC AGA GAA GAA AGA

ACG AGC GCC TAA CAT AGC GCA GAC AGC

AGG AGG GGC TCC CCG AGT GGC GAT AGG

ATG ATT GTA TCG CGG ATG GGG GCG ATT

CAG CGT GTG TCT CTG CGG GGT GGG CGT

GAG GGT GTT TGA GAG GGG GTA GTG GGT

TAG TGT TTC TTA TAG TGG TGA TAG TGT

Page 166: Probabilities and Probabilistic Models


Eliminate duplicates

[The same sorted lists as on the previous slide, after duplicate l-mers have been eliminated from each list.]

Page 167: Probabilities and Probabilistic Models


Find motif common to all lists

• Follow this procedure for all sequences
• Find the motif common to all lists L_i (once duplicates have been eliminated)
• This is the planted motif (a compact sketch of the whole procedure follows below)
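A compact sketch of the whole PMS idea, assuming exact generation of d-neighborhoods and plain set intersection (names are mine):

from itertools import combinations, product

def neighbors(pattern, d, alphabet="ACGT"):
    # All strings within Hamming distance d of pattern (its d-neighborhood).
    result = set()
    for positions in combinations(range(len(pattern)), d):
        for subs in product(alphabet, repeat=d):
            p = list(pattern)
            for pos, c in zip(positions, subs):
                p[pos] = c
            result.add("".join(p))
    return result

def pms(seqs, l, d):
    # For each sequence, take the union of the d-neighborhoods of all of its
    # l-mers, then intersect these sets across sequences.
    common = None
    for s in seqs:
        variants = set()
        for i in range(len(s) - l + 1):
            variants |= neighbors(s[i:i + l], d)
        common = variants if common is None else (common & variants)
    return common        # candidate (l, d)-motifs common to every sequence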

Page 168: Probabilities and Probabilistic Models


PMS Running Time

• It takes time to:
  • generate the variants,
  • sort the lists,
  • find and eliminate duplicates.

• (m denotes the number of different l-mers in the first DNA sequence)

• Generating the variants for one sequence takes

  O\!\left(m \binom{l}{d} 3^{d}\right)

• The running time of the whole algorithm is

  O\!\left(m\, n \binom{l}{d} 3^{d}\, l / w\right),

  where w is the word length of the computer.