Stochastic block model


The stochastic block model is a generative model for random graphs. This model tends to produce graphs containing communities: subsets of nodes that are connected to one another with particular edge densities. For example, edges may be more common within communities than between communities. Its mathematical formulation was first introduced in 1983 in the field of social network analysis by Paul W. Holland et al.[1] The stochastic block model is important in statistics, machine learning, and network science, where it serves as a useful benchmark for the task of recovering community structure in graph data.

Definition

The stochastic block model takes the following parameters:

  • The number [math]\displaystyle{ n }[/math] of vertices;
  • A partition of the vertex set [math]\displaystyle{ \{1,\ldots,n\} }[/math] into disjoint subsets [math]\displaystyle{ C_1,\ldots,C_r }[/math], called communities;
  • A symmetric [math]\displaystyle{ r \times r }[/math] matrix [math]\displaystyle{ P }[/math] of edge probabilities.

The edge set is then sampled at random as follows: any two vertices [math]\displaystyle{ u \in C_i }[/math] and [math]\displaystyle{ v \in C_j }[/math] are connected by an edge with probability [math]\displaystyle{ P_{ij} }[/math]. An example problem is: given a graph with [math]\displaystyle{ n }[/math] vertices, where the edges are sampled as described, recover the groups [math]\displaystyle{ C_1,\ldots,C_r }[/math].
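As a concrete illustration of this sampling procedure, the following is a minimal NumPy sketch (the community sizes, probability matrix, and function name below are purely illustrative):

```python
import numpy as np

def sample_sbm(sizes, P, seed=None):
    """Sample the adjacency matrix of an undirected stochastic block model.

    sizes : list of community sizes (one entry per community)
    P     : symmetric r x r matrix of edge probabilities
    """
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(sizes)), sizes)   # community label of each vertex
    n = labels.size
    probs = P[np.ix_(labels, labels)]                  # P_{ij} for every vertex pair
    upper = np.triu(rng.random((n, n)) < probs, k=1)   # sample each pair once, no self-loops
    A = (upper | upper.T).astype(int)                  # symmetrize: undirected graph
    return A, labels

# Example: two communities of 50 vertices, denser within than between.
A, labels = sample_sbm([50, 50], np.array([[0.30, 0.05],
                                           [0.05, 0.30]]), seed=0)
```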

Special cases

(Figure: an example of the assortative case for the stochastic block model.)

If the probability matrix is a constant, in the sense that [math]\displaystyle{ P_{ij} = p }[/math] for all [math]\displaystyle{ i,j }[/math], then the result is the Erdős–Rényi model [math]\displaystyle{ G(n,p) }[/math]. This case is degenerate—the partition into communities becomes irrelevant—but it illustrates a close relationship to the Erdős–Rényi model.

The planted partition model is the special case in which the values of the probability matrix [math]\displaystyle{ P }[/math] are a constant [math]\displaystyle{ p }[/math] on the diagonal and another constant [math]\displaystyle{ q }[/math] off the diagonal. Thus two vertices within the same community share an edge with probability [math]\displaystyle{ p }[/math], while two vertices in different communities share an edge with probability [math]\displaystyle{ q }[/math]. Sometimes it is this restricted model that is called the stochastic block model. The case where [math]\displaystyle{ p \gt q }[/math] is called an assortative model, while the case [math]\displaystyle{ p \lt q }[/math] is called disassortative.
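In code, the planted partition probability matrix can be written down directly from [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] (the values and number of communities below are arbitrary):

```python
import numpy as np

p, q, r = 0.30, 0.05, 3        # illustrative within/between probabilities, 3 communities
P = np.full((r, r), q)         # off-diagonal entries equal q
np.fill_diagonal(P, p)         # diagonal entries equal p; p > q is the assortative case
```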

Returning to the general stochastic block model, a model is called strongly assortative if [math]\displaystyle{ P_{ii} \gt P_{jk} }[/math] whenever [math]\displaystyle{ j \neq k }[/math]: all diagonal entries dominate all off-diagonal entries. A model is called weakly assortative if [math]\displaystyle{ P_{ii} \gt P_{ij} }[/math] whenever [math]\displaystyle{ i \neq j }[/math]: each diagonal entry is only required to dominate the rest of its own row and column.[2] Disassortative forms of this terminology are obtained by reversing all inequalities. For some algorithms, recovery might be easier for block models with assortative or disassortative conditions of this form.[2]
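These two conditions translate directly into checks on the probability matrix; the sketch below is one possible encoding (the function names are ours, not standard):

```python
import numpy as np

def is_strongly_assortative(P):
    """Every diagonal entry exceeds every off-diagonal entry."""
    P = np.asarray(P, dtype=float)
    off_diagonal = P[~np.eye(len(P), dtype=bool)]
    return bool(np.all(np.diag(P)[:, None] > off_diagonal))

def is_weakly_assortative(P):
    """Each diagonal entry exceeds the other entries of its own row
    (and, since P is symmetric, of its own column)."""
    P = np.asarray(P, dtype=float)
    return all(np.all(P[i, i] > np.delete(P[i], i)) for i in range(len(P)))
```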

Typical statistical tasks

Much of the literature on algorithmic community detection addresses three statistical tasks: detection, partial recovery, and exact recovery.

Detection

The goal of detection algorithms is simply to determine, given a sampled graph, whether the graph has latent community structure. More precisely, a graph might be generated, with some known prior probability, from a known stochastic block model, and otherwise from a similar Erdős–Rényi model. The algorithmic task is to correctly identify which of these two underlying models generated the graph.[3]

Partial recovery

In partial recovery, the goal is to approximately determine the latent partition into communities, in the sense of finding a partition that is correlated with the true partition significantly better than a random guess.[4]
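Because the community labels are only identifiable up to a relabeling, "correlated with the true partition" is usually quantified by the fraction of correctly labeled vertices maximized over label permutations; a small sketch of such an agreement score (our own helper, practical only for a modest number of communities):

```python
import numpy as np
from itertools import permutations

def agreement(true_labels, est_labels, r):
    """Fraction of vertices labeled correctly, maximized over the r! relabelings."""
    true_labels, est_labels = np.asarray(true_labels), np.asarray(est_labels)
    return max(np.mean(np.array(perm)[est_labels] == true_labels)
               for perm in permutations(range(r)))

# With two equal-sized communities a uniformly random guess scores about 1/2;
# partial recovery means exceeding 1/2 by a constant margin as n grows.
```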

Exact recovery

In exact recovery, the goal is to recover the latent partition into communities exactly. The community sizes and probability matrix may be known[5] or unknown.[6]

Statistical lower bounds and threshold behavior

Stochastic block models exhibit a sharp threshold effect reminiscent of percolation thresholds.[7][3][8] Suppose that we allow the size [math]\displaystyle{ n }[/math] of the graph to grow, keeping the community sizes in fixed proportions. If the probability matrix remains fixed, tasks such as partial and exact recovery become feasible for all non-degenerate parameter settings. However, if we scale down the probability matrix at a suitable rate as [math]\displaystyle{ n }[/math] increases, we observe a sharp phase transition: for certain settings of the parameters, it will become possible to achieve recovery with probability tending to 1, whereas on the opposite side of the parameter threshold, the probability of recovery tends to 0 no matter what algorithm is used.

For partial recovery, the appropriate scaling is to take [math]\displaystyle{ P_{ij} = \tilde P_{ij} / n }[/math] for fixed [math]\displaystyle{ \tilde P }[/math], resulting in graphs of constant average degree. In the case of two equal-sized communities, in the assortative planted partition model with probability matrix [math]\displaystyle{ P = \left(\begin{array}{cc} \tilde p/n & \tilde q/n \\ \tilde q/n & \tilde p/n \end{array} \right), }[/math] partial recovery is feasible[4] with probability [math]\displaystyle{ 1 - o(1) }[/math] whenever [math]\displaystyle{ (\tilde p - \tilde q)^2 \gt 2(\tilde p + \tilde q) }[/math], whereas any estimator fails[3] to achieve partial recovery with probability [math]\displaystyle{ 1-o(1) }[/math] whenever [math]\displaystyle{ (\tilde p - \tilde q)^2 \lt 2(\tilde p + \tilde q) }[/math].
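The condition above is a simple inequality in the rescaled parameters, so it can be checked directly (the numerical values are just examples):

```python
def partial_recovery_feasible(p_tilde, q_tilde):
    """Partial-recovery condition for two equal communities at constant average degree."""
    return (p_tilde - q_tilde) ** 2 > 2 * (p_tilde + q_tilde)

print(partial_recovery_feasible(5, 1))   # True:  (5-1)^2 = 16 > 2*(5+1) = 12
print(partial_recovery_feasible(4, 2))   # False: (4-2)^2 =  4 < 2*(4+2) = 12
```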

For exact recovery, the appropriate scaling is to take [math]\displaystyle{ P_{ij} = \tilde P_{ij} \log n / n }[/math], resulting in graphs of logarithmic average degree. Here a similar threshold exists: for the assortative planted partition model with [math]\displaystyle{ r }[/math] equal-sized communities, the threshold lies at [math]\displaystyle{ \sqrt{\tilde p} - \sqrt{\tilde q} = \sqrt{r} }[/math]. In fact, the exact recovery threshold is known for the fully general stochastic block model.[5]
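The exact-recovery condition in this logarithmic-degree regime is likewise a one-line check (values chosen only for illustration):

```python
import math

def exact_recovery_feasible(p_tilde, q_tilde, r):
    """Exact-recovery condition for r equal communities with P_ij = P~_ij * log(n)/n."""
    return math.sqrt(p_tilde) - math.sqrt(q_tilde) > math.sqrt(r)

print(exact_recovery_feasible(9, 1, 2))   # True:  3 - 1 = 2 > sqrt(2) ≈ 1.41
print(exact_recovery_feasible(4, 1, 2))   # False: 2 - 1 = 1 < sqrt(2)
```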

Algorithms

In principle, exact recovery can be solved in its feasible range using maximum likelihood, but this amounts to solving a constrained or regularized cut problem such as minimum bisection that is typically NP-complete. Hence, no known efficient algorithms will correctly compute the maximum-likelihood estimate in the worst case.
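A toy sketch makes the combinatorial cost explicit: exhaustive maximum likelihood over balanced bisections of a planted partition model with known [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] must enumerate all ways to split the vertices in half, so it is usable only for very small graphs (this illustrates the obstacle, not a practical algorithm):

```python
import numpy as np
from itertools import combinations
from math import log, inf

def log_likelihood(A, labels, p, q):
    """Log-likelihood of a labeling under a planted partition model with known p, q."""
    ll = 0.0
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            prob = p if labels[i] == labels[j] else q
            ll += log(prob) if A[i, j] else log(1 - prob)
    return ll

def brute_force_ml_bisection(A, p, q):
    """Exhaustive maximum likelihood over balanced bisections (exponential cost)."""
    n = len(A)
    best_ll, best_labels = -inf, None
    for group in combinations(range(n), n // 2):
        labels = np.zeros(n, dtype=int)
        labels[list(group)] = 1
        ll = log_likelihood(A, labels, p, q)
        if ll > best_ll:
            best_ll, best_labels = ll, labels
    return best_labels
```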

However, a wide variety of algorithms perform well in the average case, and many high-probability performance guarantees have been proven for algorithms in both the partial and exact recovery settings. Successful algorithms include spectral clustering of the vertices,[9][4][5][10] semidefinite programming,[2][8] forms of belief propagation,[7][11] and distributed community detection,[12] among others.
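As one concrete example from the list above, a basic spectral-clustering pipeline for the adjacency matrix is sketched below (NumPy plus scikit-learn's KMeans; this is the vanilla recipe, not the refined variants of the cited references):

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_communities(A, r, seed=0):
    """Cluster the vertices of adjacency matrix A into r communities
    using the r leading eigenvectors of A."""
    A = np.asarray(A, dtype=float)
    eigvals, eigvecs = np.linalg.eigh(A)                    # eigenvalues in ascending order
    leading = eigvecs[:, np.argsort(np.abs(eigvals))[-r:]]  # r largest in magnitude
    return KMeans(n_clusters=r, n_init=10, random_state=seed).fit(leading).labels_

# est_labels = spectral_communities(A, r=2)   # A sampled as in the Definition section
```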

Variants

Several variants of the model exist. One minor tweak allocates vertices to communities randomly, according to a categorical distribution, rather than in a fixed partition.[5] More significant variants include the degree-corrected stochastic block model,[13] the hierarchical stochastic block model,[14] the geometric block model,[15] the censored block model, and the mixed-membership block model.[16]

Topic models

The stochastic block model has been recognised as a topic model on bipartite networks.[17] In a network of documents and words, the stochastic block model can identify topics: groups of words with a similar meaning.

Extensions to signed graphs

Signed graphs allow for both favorable and adverse relationships and serve as a common model choice for various data analysis applications, e.g., correlation clustering. The stochastic block model can be trivially extended to signed graphs by assigning both positive and negative edge weights or, equivalently, by using a difference of adjacency matrices of two stochastic block models.[18]
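A minimal way to realize the difference-of-adjacency-matrices construction is sketched below (community sizes and probabilities are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 50)                       # two illustrative communities
n = labels.size
same = labels[:, None] == labels[None, :]

P_pos = np.where(same, 0.30, 0.05)                   # favorable ties: denser within communities
P_neg = np.where(same, 0.05, 0.30)                   # adverse ties: denser between communities

def sample(P):
    upper = np.triu(rng.random((n, n)) < P, k=1)     # sample each vertex pair once
    return (upper | upper.T).astype(int)             # symmetrize

A_signed = sample(P_pos) - sample(P_neg)             # signed adjacency, entries in {-1, 0, +1}
```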

DARPA/MIT/AWS Graph Challenge: streaming stochastic block partition

GraphChallenge[19] encourages community approaches to developing new solutions for analyzing graphs and sparse data derived from social media, sensor feeds, and scientific data, so that relationships between events can be discovered as they unfold in the field. Streaming stochastic block partition has been one of the challenges since 2017.[20] Spectral clustering has demonstrated outstanding performance compared to the original and even an improved[21] base algorithm, matching its quality of clusters while being multiple orders of magnitude faster.[22][23]

References

  1. Holland, Paul W; Laskey, Kathryn Blackmond; Leinhardt, Samuel (1983). "Stochastic blockmodels: First steps". Social Networks 5 (2): 109–137. doi:10.1016/0378-8733(83)90021-7. ISSN 0378-8733. https://doi.org/10.1016/0378-8733(83)90021-7. Retrieved 2021-06-16. 
  2. Amini, Arash A.; Levina, Elizaveta (June 2014). "On semidefinite relaxations for the block model". arXiv:1406.5647 [cs.LG].
  3. Mossel, Elchanan; Neeman, Joe; Sly, Allan (February 2012). "Stochastic Block Models and Reconstruction". arXiv:1202.1499 [math.PR].
  4. Massoulie, Laurent (November 2013). "Community detection thresholds and the weak Ramanujan property". arXiv:1311.3085 [cs.SI].
  5. Abbe, Emmanuel; Sandon, Colin (March 2015). "Community detection in general stochastic block models: fundamental limits and efficient recovery algorithms". arXiv:1503.00609 [math.PR].
  6. Abbe, Emmanuel; Sandon, Colin (June 2015). "Recovering communities in the general stochastic block model without knowing the parameters". arXiv:1506.03729 [math.PR].
  7. Decelle, Aurelien; Krzakala, Florent; Moore, Cristopher; Zdeborová, Lenka (September 2011). "Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications". Physical Review E 84 (6): 066106. doi:10.1103/PhysRevE.84.066106. PMID 22304154. Bibcode2011PhRvE..84f6106D.
  8. Abbe, Emmanuel; Bandeira, Afonso S.; Hall, Georgina (May 2014). "Exact Recovery in the Stochastic Block Model". arXiv:1405.3267 [cs.SI].
  9. Krzakala, Florent; Moore, Cristopher; Mossel, Elchanan; Neeman, Joe; Sly, Allan; Zdeborová, Lenka; Zhang, Pan (October 2013). "Spectral redemption in clustering sparse networks". Proceedings of the National Academy of Sciences 110 (52): 20935–20940. doi:10.1073/pnas.1312486110. PMID 24277835. Bibcode2013PNAS..11020935K.
  10. Lei, Jing; Rinaldo, Alessandro (February 2015). "Consistency of spectral clustering in stochastic block models". The Annals of Statistics 43 (1): 215–237. doi:10.1214/14-AOS1274. ISSN 0090-5364. 
  11. Mossel, Elchanan; Neeman, Joe; Sly, Allan (September 2013). "Belief Propagation, Robust Reconstruction, and Optimal Recovery of Block Models". The Annals of Applied Probability 26 (4): 2211–2256. doi:10.1214/15-AAP1145. Bibcode2013arXiv1309.1380M. 
  12. Fathi, Reza (April 2019). "Efficient Distributed Community Detection in the Stochastic Block Model". arXiv:1904.07494 [cs.DC].
  13. Karrer, Brian; Newman, Mark E J (2011). "Stochastic blockmodels and community structure in networks". Physical Review E 83 (1): 016107. doi:10.1103/PhysRevE.83.016107. PMID 21405744. Bibcode2011PhRvE..83a6107K. https://link.aps.org/doi/10.1103/PhysRevE.83.016107. Retrieved 2021-06-16. 
  14. Peixoto, Tiago (2014). "Hierarchical block structures and high-resolution model selection in large networks". Physical Review X 4 (1): 011047. doi:10.1103/PhysRevX.4.011047. Bibcode2014PhRvX...4a1047P. https://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.011047. Retrieved 2021-06-16. 
  15. Galhotra, Sainyam; Mazumdar, Arya; Pal, Soumyabrata; Saha, Barna (February 2018). "The Geometric Block Model". AAAI 32. doi:10.1609/aaai.v32i1.11905. 
  16. Airoldi, Edoardo; Blei, David; Fienberg, Stephen; Xing, Eric (May 2007). "Mixed membership stochastic blockmodels". Journal of Machine Learning Research 9: 1981–2014. PMID 21701698. Bibcode2007arXiv0705.4485A.
  17. Martin Gerlach; Tiago Peixoto; Eduardo Altmann (2018). "A network approach to topic models". Science Advances 4 (7): eaaq1360. doi:10.1126/sciadv.aaq1360. PMID 30035215. Bibcode2018SciA....4.1360G. 
  18. Alyson Fox; Geoffrey Sanders; Andrew Knyazev (2018). "Investigation of Spectral Clustering for Signed Graph Matrix Representations". 2018 IEEE High Performance extreme Computing Conference (HPEC). pp. 1–7. doi:10.1109/HPEC.2018.8547575. ISBN 978-1-5386-5989-2. 
  19. [1] DARPA/MIT/AWS Graph Challenge
  20. [2] DARPA/MIT/AWS Graph Challenge Champions
  21. A. J. Uppal; J. Choi; T. B. Rolinger; H. Howie Huang (2021). "Faster Stochastic Block Partition Using Aggressive Initial Merging, Compressed Representation, and Parallelism Control". 2021 IEEE High Performance Extreme Computing Conference (HPEC). pp. 1–7. doi:10.1109/HPEC49654.2021.9622836. ISBN 978-1-6654-2369-4. 
  22. David Zhuzhunashvili; Andrew Knyazev (2017). "Preconditioned spectral clustering for stochastic block partition streaming graph challenge". 2017 IEEE High Performance Extreme Computing Conference (HPEC). doi:10.1109/HPEC.2017.8091045. 
  23. Lisa Durbeck; Peter Athanas (2020). "Incremental Streaming Graph Partitioning". 2020 IEEE High Performance Extreme Computing Conference (HPEC). pp. 1–8. doi:10.1109/HPEC43674.2020.9286181. ISBN 978-1-7281-9219-2.