1 of 56

Dependency networks

Sep 23rd 2021

BMI 826-23 Computational Network Biology, Fall 2021

Sushmita Roy

https://compnetbiocourse.discovery.wisc.edu

2 of 56

Plan for this section

  • Overview of network inference (Sep 21st)
  • Bayesian networks (Sep 21st, Sep 23rd)
  • Dependency networks (Sep 23rd)
  • Integrative inference of gene regulatory networks (Sep 28th, Sep 30th)
  • Inference of regulatory networks from single cell datasets (Oct 5th)

3 of 56

Readings

  • Huynh-Thu, Vân A., Alexandre Irrthum, Louis Wehenkel, and Pierre Geurts. 2010. “Inferring Regulatory Networks from Expression Data Using Tree-Based Methods.” PLOS ONE 5 (9): e12776+. https://doi.org/10.1371/journal.pone.0012776.

4 of 56

Goals for today

  • Dependency networks
  • GENIE3: GEne Network Inference with Ensemble of trees
  • Evaluation of expression-based network inference methods

5 of 56

Recall the different types of probabilistic graphs

  • In each graph type we can assert different conditional independencies
  • Correlation networks
  • Markov networks
    • Gaussian Graphical models
  • Dependency networks
  • Bayesian networks

6 of 56

Dependency network

  • A type of probabilistic graphical model
  • Approximate Markov networks
    • Are much easier to learn from data
  • As in Bayesian networks, it has
    • A graph structure
    • Parameters
  • Unlike Bayesian networks
    • It can have cyclic dependencies
    • Computing the joint probability is harder
      • It is approximated with a “pseudo” likelihood

Dependency Networks for Inference, Collaborative Filtering and Data visualization

Heckerman, Chickering, Meek, Rounthwaite, Kadie 2000
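
Concretely (a standard formula for dependency networks, stated here for completeness), the pseudo-likelihood replaces the joint distribution with a product of local conditionals:

    P(X1, ..., Xp) ≈ ∏j P(Xj | X-j)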

7 of 56

Original motivation of dependency networks

  • Introduced by Heckerman, Chickering, Meek, et al. 2000
  • Bayesian networks are often misinterpreted
    • Learned Bayesian networks represent correlational or predictive relationships
    • But the directionality of the edges is mistakenly interpreted as causal
  • (Consistent) dependency networks were introduced to distinguish between these cases

8 of 56

Dependency network vs Bayesian network

Dependency Networks for Inference, Collaborative Filtering and Data visualization

Heckerman, Chickering, Meek, Rounthwaite, Kadie 2000

Bayesian network

Dependency network

Oftentimes, the Bayesian network on the left is read as if “Age” determines “Income”. However, all this model captures is that “Age” is predictive of “Income”.

9 of 56

Learning dependency networks

[Figure: a variable Xj and its candidate Markov blanket Bj, with “?” marking potential neighbors.]

  • Let Bj denote the Markov blanket of a variable Xj
    • X-j denotes all variables other than Xj
    • Given Bj, Xj is independent of all other variables in X-j
  • Bj can be estimated by finding the set of variables that best predict Xj
  • This requires us to specify the form of fj = P(Xj | Bj)
  • Learning a dependency network thus entails estimating the Markov blanket of each random variable Xj
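
A minimal sketch of this idea, assuming scikit-learn’s Lasso as the predictor for a continuous Xj (the function name and the choice of Lasso are illustrative, not from the slides): each variable is regressed on all others, and the variables with nonzero coefficients form its estimated Markov blanket.

    # Sketch: estimate each variable's Markov blanket by sparse regression.
    # X is assumed to be an (N samples x p genes) numpy array.
    import numpy as np
    from sklearn.linear_model import Lasso

    def markov_blankets(X, alpha=0.05):
        N, p = X.shape
        blankets = {}
        for j in range(p):
            X_minus_j = np.delete(X, j, axis=1)           # X-j: all variables but Xj
            model = Lasso(alpha=alpha).fit(X_minus_j, X[:, j])
            others = [i for i in range(p) if i != j]
            # Variables with nonzero coefficients are the best predictors of Xj
            blankets[j] = [others[k] for k in np.flatnonzero(model.coef_)]
        return blankets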

10 of 56

Different representations of fj=P(Xj|Bj)

  • If Xj is continuous
    • fj can be a linear function
    • fj can be a regression tree
    • fj can be an ensemble of trees
      • E.g. random forests

  • If Xj is discrete
    • fj can be a conditional probability table
    • fj can be a conditional probability tree

11 of 56

Popular dependency networks implementations

  • Learned by solving a set of linear regression problems
    • TIGRESS (Haury et al, 2012)
      • Uses a sparsity constraint to learn a “sparse” Markov blanket
      • Uses “stability selection” to estimate the confidence of edges (see the sketch after this list)
  • Learned by solving a set of non-linear regression problems
    • Non-linearity captured by Regression Tree (Heckerman et al, 2000)
    • GENIE3: Non-linearity captured by Random forest (Huynh-Thu et al, 2010)
    • Inferelator (Bonneau et al, Genome Biology 2005)
      • Can handle time course and single time point data
      • Non-linear regression is done using a logistic transform
      • Handles linear and non-linear regression
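
A sketch of the stability-selection idea used by TIGRESS, under the assumption that the sparse regression is a Lasso (the subsampling scheme shown is a simplification of the published method): each candidate edge is scored by how often the regulator is selected across random half-samples of the data.

    # Sketch: stability selection for one target gene.
    import numpy as np
    from sklearn.linear_model import Lasso

    def stability_scores(X_minus_j, xj, n_runs=100, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        N, q = X_minus_j.shape
        counts = np.zeros(q)
        for _ in range(n_runs):
            idx = rng.choice(N, size=N // 2, replace=False)  # random half of the samples
            coef = Lasso(alpha=alpha).fit(X_minus_j[idx], xj[idx]).coef_
            counts += (coef != 0)                            # was each regulator selected?
        return counts / n_runs                               # selection frequency per edge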

12 of 56

Goals for today

  • Dependency networks
  • GENIE3: GEne Network Inference with Ensemble of trees
  • Evaluation of expression-based network inference methods

13 of 56

Expression data matrix

p genes (rows) × N experiments/time points (columns):

Gene                 E1     E2     E3     E4     E5     E6     E7
ENSMUSG00000028180   8.82   9.09   6.43   8.11   7.13   7.55   9.18
ENSMUSG00000053211   0.0    0.15   0.07   0.0    0.08   0.0    0.0
ENSMUSG00000028182   0.0    0.0    0.0    0.0    0.1    0.08   0.26
ENSMUSG00000002017   2.83   1.92   2.33   0.86   2.17   2.53   3.19
ENSMUSG00000028184   2.0    1.32   1.13   0.72   1.25   1.17   2.27
ENSMUSG00000028187   12.41  10.72  10.23  8.59   8.08   8.92   11.61
ENSMUSG00000028186   0.02   0.68   0.0    0.0    0.0    0.0    0.0
ENSMUSG00000028189   0.69   0.95   1.09   0.97   0.71   0.44   0.76
ENSMUSG00000028188   0.11   0.22   0.12   0.21   0.24   0.2    0.43

Each column holds the observations (expression levels) of all variables in sample i, x(i); each row holds the observations of variable Xj in all N experiments.

 

14 of 56

GENIE3: GEne Network Inference with Ensemble of trees

  • Solves a set of regression problems
    • One per random variable
    • Minimizes the prediction error per variable Xj

  • Uses an Ensemble of regression trees to represent fj
    • Models non-linear dependencies
  • Outputs a directed cyclic graph with a confidence of each edge
    • Directionality means “good predictor”
  • Focus on a ranking over edges rather than graph structure and parameters
    • Rank is determined by confidence

Inferring Regulatory Networks from Expression Data Using Tree-Based Methods Van Anh Huynh-Thu, Alexandre Irrthum, Louis Wehenkel, Pierre Geurts, Plos One 2010

15 of 56

A very simple regression tree for two variables

[Figure: a scatter plot of X3 against X2 with thresholds e1 and e2 on X2, and the corresponding tree: interior nodes test X2 > e1 and X2 > e2 (YES/NO branches); leaf nodes give the value of X3.]

The tree specifies X3 as a function of X2.

16 of 56

Prediction of Xj using a single tree

[Figure: a regression tree with split nodes (interior) and leaf nodes; an input x-j is routed down the tree to a single leaf. Taken from the ICCV09 tutorial by Kim, Shotton and Stenger: http://www.iis.ee.ic.ac.uk/~tkkim/iccv09_tutorial]

Prediction is the mean at the leaf node reached by x-j

21 of 56

Algorithm for learning a regression tree

  • Start with all training samples at the root
  • At each leaf, evaluate candidate splits (a variable Xi and a threshold s) and score each by the reduction in variance of the output variable (next slide)
  • Apply the best split and recurse on the two children until a convergence criterion is met

22 of 56

Quantifying a split on the tree

  • A split s on variable Xi partitions the samples S at a node into Sleft (the set of samples in the left node) and Sright (the set of samples in the right node)
  • The split is scored by the reduction in variance of the output variable:

    Score(s) = #S · Var(S) − #Sleft · Var(Sleft) − #Sright · Var(Sright)

where Sleft and Sright are the sets of samples obtained by testing Xi for a particular split s
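
A small sketch of this score and of the exhaustive search for the best split (a direct transcription of the formula above; function and variable names are illustrative):

    # Sketch: score a split by the reduction in variance of the target xj.
    import numpy as np

    def split_score(xj, mask):
        # mask selects Sleft; ~mask selects Sright
        S, Sl, Sr = xj, xj[mask], xj[~mask]
        if len(Sl) == 0 or len(Sr) == 0:
            return -np.inf
        return (len(S) * np.var(S)
                - len(Sl) * np.var(Sl)
                - len(Sr) * np.var(Sr))

    def best_split(X_minus_j, xj):
        best = (None, None, -np.inf)                   # (variable, threshold, score)
        for i in range(X_minus_j.shape[1]):
            for s in np.unique(X_minus_j[:, i])[:-1]:  # candidate thresholds
                score = split_score(xj, X_minus_j[:, i] <= s)
                if score > best[2]:
                    best = (i, s, score)
        return best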

23 of 56

One iteration of regression tree learning

  • Let X={X1,X2,X3,X4}
  • Assume we are searching for the neighbors of X3 and it already has two neighbors X1 and X2
  • X1, X2, X4 will all be considered as candidate splits using the examples at each current leaf node
  • If we split on X4, then we will have a new neighbor.

[Figure, iteration i: the current tree for X3 tests X1 > e1 and X2 > e2 (YES/NO branches), giving leaves N1, N2, N3, where Nl is the Gaussian associated with leaf l.]

[Figure, iteration i+1: one leaf of the iteration-i tree is split on X4 > e3, producing new leaves N4 and N5; X4 is now a neighbor of X3.]

26 of 56

Convergence criteria

  • Minimum number of examples at a leaf node
  • Depth of a tree
  • Error tolerance

27 of 56

An Ensemble of trees

  • A single tree is prone to “overfitting”
  • Instead of learning a single tree, ensemble models make use of a collection of trees

28 of 56

Prediction using an Ensemble of Trees

  • Prediction is the average of the predictions of the individual trees

[Figure: an ensemble of T regression trees (tree1 … treeT), each with split nodes and leaf nodes; the input x-j is routed to a leaf in every tree. Taken from the ICCV09 tutorial by Kim, Shotton and Stenger: http://www.iis.ee.ic.ac.uk/~tkkim/iccv09_tutorial]
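
A one-line sketch of this averaging step (trees is assumed to be a list of fitted regression trees, each with a scikit-learn-style predict method):

    import numpy as np

    def ensemble_predict(trees, x_minus_j):
        # Average the T individual tree predictions for the input samples x-j
        return np.mean([t.predict(x_minus_j) for t in trees], axis=0)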

33 of 56

Expression data matrix for GENIE3

  • The expression matrix has p genes (rows) × N experiments/time points (columns): each column holds the observations (expression levels) of all variables in sample i, and each row holds the observations of Xj in all N experiments
  • GENIE3 will use the transpose of this data matrix: N experiments (rows) × p genes (columns), so that each row contains the observations of all variables in sample i
34 of 56

GENIE3 algorithm sketch

[Figure from Huynh-Thu et al., 2010: overview of the GENIE3 algorithm; each per-gene regression problem produces a predictor ranking.]

35 of 56

GENIE3 algorithm sketch

  • For each gene Xj, generate a learning sample of input/output pairs
    • LSj = {(x-j(k), xj(k)), k = 1, ..., N}
    • On LSj, learn fj to predict the value of Xj
    • fj is an ensemble of regression trees
    • Estimate an importance wij for all genes i ≠ j
      • wij quantifies the confidence of the edge between Xi and Xj
      • It is associated with the decrease in variance of Xj when Xi is included in fj
  • Generate a global ranking of edges based on the wij
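
A compact sketch of this procedure (an illustrative re-implementation using scikit-learn, not the authors’ code; here feature_importances_ stands in for the variance-based importances wij):

    # Sketch: GENIE3-style edge ranking. X: (N samples x p genes) numpy array,
    # assumed to have no constant gene (so per-gene scaling is well defined).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def genie3_scores(X, n_trees=1000):
        N, p = X.shape
        X = X / X.std(axis=0)                 # unit variance per gene (see the later slide)
        W = np.zeros((p, p))                  # W[i, j] = importance wij of Xi for Xj
        for j in range(p):
            LS_inputs = np.delete(X, j, axis=1)            # x-j for all N samples
            rf = RandomForestRegressor(n_estimators=n_trees, max_features="sqrt")
            rf.fit(LS_inputs, X[:, j])                     # learn fj on LSj
            others = [i for i in range(p) if i != j]
            W[others, j] = rf.feature_importances_
        return W                              # sort all W[i, j] for a global edge ranking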

36 of 56

Learning fj in GENIE3

  • Uses two types of Ensembles to represent the fj:
    • Random forest or Extra Trees
  • Learning the Random forest
    • Generate M=1000 bootstrap samples and learn 1000 trees, one from each sample
    • At each node to be split, search for best split among K randomly selected variables
    • K was set to p−1 or (p−1)^(1/2), where p is the number of regulators/parents
  • Learning the Extra-Trees
    • Learn 1000 trees
    • Each tree is built from the original learning sample
    • At each node, the best split is determined among K random splits, each split determined by randomly selecting one input (without replacement) and a threshold
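
In scikit-learn terms (an analogy only; GENIE3’s published implementation is separate), the parameter K corresponds to the max_features argument:

    from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor

    rf_all  = RandomForestRegressor(n_estimators=1000, max_features=None)    # K = p - 1
    rf_sqrt = RandomForestRegressor(n_estimators=1000, max_features="sqrt")  # K = (p - 1)^(1/2)
    et      = ExtraTreesRegressor(n_estimators=1000)   # Extra-Trees: no bootstrap by default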

37 of 56

Computing the importance weight of a predictor

  • Importance is computed at each interior node
    • Remember each predictor can show up multiple times as interior nodes
  • For an interior node, importance is given by the reduction in variance when splitting on that node

For an interior node N, let S be the set of data samples that reach it. The importance of the split at N is

    I(N) = #S · Var(S) − #St · Var(St) − #Sf · Var(Sf)

where #S is the size of the set S, Var(S) is the variance of the output variable xj in S, St is the subset of S for which the test at N is true, and Sf is the subset for which it is false.

38 of 56

Computing the importance weight of a predictor

  • For a single tree, the overall importance of a predictor is the sum of I(N) over all interior nodes where that predictor is used to split
  • For an ensemble, the importance is averaged over all trees
  • To avoid bias towards highly variable genes, each gene’s expression profile is normalized to unit variance

39 of 56

Goals for today

  • Dependency networks
  • GENIE3: GEne Network Inference with Ensemble of trees
  • Evaluation of expression-based network inference methods

40 of 56

Evaluating the network

  • Assessing confidence
  • Area under the precision recall curve
  • F-score
  • Do modules or target sets of genes participate in coherent function?
  • Can the network predict expression in a new condition?

41 of 56

Assessing confidence in the learned network

  • Typically the number of training samples is not sufficient to reliably determine the “right” network
  • One can however estimate the confidence of specific features of the network
    • Graph features f(G)
  • Examples of f(G)
    • An edge between two random variables
    • Order relations: is X an ancestor of Y?

42 of 56

How to assess confidence in graph features?

  • What we want is P(f(G) | D), which is

    P(f(G) = 1 | D) = Σ_G f(G) P(G | D)

  • But it is not feasible to compute this sum over all graphs

  • Instead we will use a “bootstrap” procedure

43 of 56

Bootstrap to assess graph feature confidence

  • For i=1 to m
    • Construct dataset Di by sampling with replacement N samples from dataset D, where N is the size of the original D
    • Learn a graphical model {Gi,Θi}
  • For each feature of interest f, calculate its confidence as the fraction of bootstrap networks in which it appears:

    conf(f) = (1/m) Σ_{i=1..m} f(Gi)
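
A sketch of this procedure (learn_network is a placeholder for any network-learning routine that returns a set of edges; it is not a specific library function):

    # Sketch: bootstrap confidence for edge features.
    import numpy as np

    def edge_confidences(D, learn_network, m=100, seed=0):
        rng = np.random.default_rng(seed)
        N = D.shape[0]
        counts = {}
        for _ in range(m):
            Di = D[rng.integers(0, N, size=N)]   # N samples drawn with replacement
            for edge in learn_network(Di):       # learn {Gi, Theta_i}, read off its edges
                counts[edge] = counts.get(edge, 0) + 1
        return {e: c / m for e, c in counts.items()}   # conf(f) = (1/m) * sum_i f(Gi)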

44 of 56

Bootstrap/stability selection

[Figure: Input: expression data. Networks are learned from resampled datasets and combined into the Output: the final inferred network.]


46 of 56

Does the bootstrap confidence represent real relationships?

  • Compare the confidence distribution to that obtained from randomized data
  • Shuffle the columns of each row (gene) separately
  • Repeat the bootstrap procedure

[Figure: expression matrix with genes as rows and experimental conditions as columns; each row is randomized independently. Slide credit: Prof. Mark Craven]
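
A two-line sketch of the randomization (the example matrix stands in for a genes × conditions expression matrix):

    import numpy as np
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 7))                             # stand-in genes x conditions matrix
    X_random = np.apply_along_axis(rng.permutation, 1, X)   # shuffle each row independently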

47 of 56

Bootstrap-based confidence differs between real and random data

[Figure: distributions of bootstrap confidence f for randomized (Random) and real (Real) data; Friedman et al. 2000]

48 of 56

Example of a high confidence sub-network

One learned Bayesian network

Bootstrapped confidence Bayesian network: highlights a subnetwork associated with yeast mating pathway. Colors indicate genes with known functions.

Nir Friedman, Science 2004

49 of 56

Area under the precision recall curve (AUPR)

  • Assume we know what the “right” network is
  • One can use precision-recall (PR) curves to evaluate the predicted network
  • The area under the PR curve (AUPR) quantifies performance

Precision = (# of correct edges) / (# of predicted edges)

Recall = (# of correct edges) / (# of true edges)
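
A minimal sketch using scikit-learn (edge labels and scores are assumed to be flattened over all candidate edges):

    # Sketch: AUPR for a predicted network.
    from sklearn.metrics import precision_recall_curve, auc

    def aupr(y_true, scores):
        # y_true: 1 if the candidate edge is in the true network, else 0
        # scores: inferred confidence for each candidate edge
        precision, recall, _ = precision_recall_curve(y_true, scores)
        return auc(recall, precision)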

50 of 56

F-score

  • Assume we know what the “right” network is
  • Suppose the predicted network has N edges
    • Selected by sorting edges by confidence/strength of interaction and taking the top N

Precision (P) = (# of correct edges) / (# of predicted edges)

Recall (R) = (# of correct edges) / (# of true edges)

F-score = 2·P·R / (P + R)

51 of 56

Experimental datasets to assess network structure for gene regulatory networks

    • Sequence specific motifs

    • ChIP-chip and ChIP-seq

    • Factor knockout followed by whole-transcriptome profiling

Siahpirani et al., 2016

52 of 56

AUPR based performance comparison

Results are for E. coli transcriptional regulatory network

53 of 56

DREAM: Dialogue on Reverse Engineering Assessment and Methods

Community effort to assess regulatory network inference

DREAM 5 challenge

Previous challenges: 2006, 2007, 2008, 2009, 2010

Marbach et al. 2012, Nature Methods

54 of 56

Where do different methods rank?

Marbach et al., 2012

[Figure: ranking of DREAM5 network inference methods, with the community (ensemble) prediction and a random baseline marked; the methods compared were mostly per-gene.]

55 of 56

Some comments about expression-based network inference methods

  • We have seen multiple types of algorithms to learn these networks
    • Per-gene methods (learn regulators for individual genes)
      • Sparse candidate, GENIE3, ARACNE, CLR
    • Per-module methods
      • Module networks: learn regulators for sets of genes/modules
      • Other implementations of module networks exist
        • LIRNET: Learning a Prior on Regulatory Potential from eQTL Data (Su In Lee et al, Plos genetics 2009, http://www.plosgenetics.org/article/info%3Adoi%2F10.1371%2Fjournal.pgen.1000358)
        • LeMoNe: Learning Module Networks (Michoel et al 2007, http://www.biomedcentral.com/1471-2105/8/S2/S5)
    • Methods that combine per-gene and per-module (MERLIN)
  • Methods differ in
    • How they quantify dependence between genes
    • Whether dependencies are higher-order or pairwise
    • Whether they focus on structure alone or on structure & parameters
  • Expression alone is not enough to infer the structure of the network
  • Integrative approaches that combine expression with other types of data are likely more successful (next lectures)

56 of 56

References

  • F. Markowetz and R. Spang, “Inferring cellular networks - a review,” BMC Bioinformatics, vol. 8, Suppl 6, S5, 2007.
  • N. Friedman, M. Linial, I. Nachman, and D. Pe'er, “Using Bayesian networks to analyze expression data,” Journal of Computational Biology, vol. 7, no. 3-4, pp. 601-620, Aug. 2000. http://dx.doi.org/10.1089/106652700750050961
  • D. Heckerman, D. M. Chickering, C. Meek, R. Rounthwaite, and C. Kadie, “Dependency networks for inference, collaborative filtering, and data visualization,” Journal of Machine Learning Research, vol. 1, pp. 49-75, 2000.
  • V. A. Huynh-Thu, A. Irrthum, L. Wehenkel, and P. Geurts, “Inferring regulatory networks from expression data using tree-based methods,” PLoS ONE, vol. 5, no. 9, e12776+, 2010. https://doi.org/10.1371/journal.pone.0012776
  • D. Marbach et al., “Wisdom of crowds for robust gene network inference,” Nature Methods, vol. 9, no. 8, pp. 796-804, Jul. 2012. http://dx.doi.org/10.1038/nmeth.
  • R. De Smet and K. Marchal, “Advantages and limitations of current network inference methods,” Nature Reviews Microbiology, vol. 8, no. 10, pp. 717-729, Oct. 2010. http://dx.doi.org/10.1038/nrmicro2419