Balls into bins problem

The balls into bins (or balanced allocations) problem is a classic problem in probability theory that has many applications in computer science. The problem involves m balls and n boxes (or "bins"). Each time, a single ball is placed into one of the bins. After all balls have been placed, we look at the number of balls in each bin; this number is called the load of the bin. The problem can be modelled using a multinomial distribution, and may involve asking questions such as: what is the expected number of bins with a ball in them?[1]
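
For instance, since each bin stays empty with probability (1 - 1/n)^m, the expected number of non-empty bins is n(1 - (1 - 1/n)^m). The following Python sketch (the function names are illustrative) compares this closed form with a Monte Carlo estimate.

```python
import random

def expected_occupied_bins(m, n):
    """Closed form: each bin stays empty with probability (1 - 1/n)**m."""
    return n * (1 - (1 - 1 / n) ** m)

def simulate_occupied_bins(m, n, trials=10_000):
    """Monte Carlo estimate of the expected number of non-empty bins."""
    total = 0
    for _ in range(trials):
        loads = [0] * n
        for _ in range(m):
            loads[random.randrange(n)] += 1
        total += sum(1 for load in loads if load > 0)
    return total / trials

if __name__ == "__main__":
    m = n = 100
    print(expected_occupied_bins(m, n))   # about 63.4 for m = n = 100
    print(simulate_occupied_bins(m, n))   # should agree closely
```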

Obviously, it is possible to make the maximum load as small as ⌈m/n⌉ by putting each ball into the least loaded bin. The interesting case is when the bin is selected at random, or at least partially at random. A powerful balls-into-bins paradigm is the "power of two random choices",[2] where each ball chooses two (or more) bins at random and is placed in the least loaded of them. This paradigm has found wide practical application in shared-memory emulations, efficient hashing schemes, randomized load balancing of tasks on servers, and routing of packets within parallel networks and data centers.[2]

Random allocation

When the bin for each ball is selected at random, independently of other choices, the maximum load might be as large as [math]\displaystyle{ m }[/math] (if all balls happen to land in the same bin). However, it is possible to calculate a tighter bound that holds with high probability. Here "high probability" means probability [math]\displaystyle{ 1-o(1) }[/math], i.e. a probability that tends to [math]\displaystyle{ 1 }[/math] as [math]\displaystyle{ n }[/math] grows to infinity.

For the case [math]\displaystyle{ m=n }[/math], with probability [math]\displaystyle{ 1 - o(1) }[/math] the maximum load is: [3] [4]

[math]\displaystyle{ \frac{\log n}{\log \log n}\cdot(1+o(1)) }[/math].

Gonnet [5] gave a tight bound for the expected value of the maximum load, which for [math]\displaystyle{ m=n }[/math] is [math]\displaystyle{ \Gamma^{-1}(n) - \frac{3}{2} + o(1) }[/math], where [math]\displaystyle{ \Gamma^{-1} }[/math] is the inverse of the gamma function, and it is known[6] that [math]\displaystyle{ \Gamma^{-1}(n) = \frac{\log n}{\log\log n} \cdot (1 + o(1)) }[/math].

The maximum load can also be calculated for [math]\displaystyle{ m \ne n }[/math], and for example, for [math]\displaystyle{ m \gt n \log n }[/math] it is [math]\displaystyle{ \frac{m}{n}+\Theta\left(\sqrt{\frac{m \log n}{n}}\right) }[/math], and for [math]\displaystyle{ m \lt n/\log n }[/math] it is [math]\displaystyle{ \Theta\left(\frac{\log n}{\log(n/m)}\right) }[/math], with high probability.[7]

Exact probabilities for small [math]\displaystyle{ m = n }[/math] can be computed as [math]\displaystyle{ a(n)/n^n }[/math] for [math]\displaystyle{ a(n) }[/math] defined in OEIS A208250.
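
These asymptotics are easy to probe empirically. The following sketch (illustrative, not taken from the cited sources) throws m = n balls into n bins uniformly at random and compares the resulting maximum load with the leading term log n / log log n; at moderate n the lower-order o(1) correction is still noticeable.

```python
import math
import random

def max_load_single_choice(m, n):
    """Throw m balls into n bins uniformly at random and return the maximum load."""
    loads = [0] * n
    for _ in range(m):
        loads[random.randrange(n)] += 1
    return max(loads)

if __name__ == "__main__":
    n = 10_000
    print("empirical maximum load:", max_load_single_choice(n, n))
    print("log n / log log n     :", round(math.log(n) / math.log(math.log(n)), 2))
```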

Partially random allocation

Instead of just selecting a random bin for each ball, it is possible to select two or more bins for each ball and then put the ball in the least loaded of them. This is a compromise between a deterministic allocation, in which all bins are checked and the least loaded bin is selected, and a totally random allocation, in which a single bin is selected without checking other bins. This paradigm, often called the "power of two random choices", has been studied in a number of settings, described below.[2]

In the simplest case, one allocates [math]\displaystyle{ m }[/math] balls into [math]\displaystyle{ n }[/math] bins (with [math]\displaystyle{ m=n }[/math]) sequentially, one by one: for each ball, one chooses [math]\displaystyle{ d \ge 2 }[/math] random bins and allocates the ball into the least loaded of the selected bins (ties broken arbitrarily). Then, with high probability, the maximum load is:[8]

[math]\displaystyle{ \frac{\log \log n}{\log d}+\Theta(1) }[/math]

which is almost exponentially less than with totally random allocation.
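
A minimal sketch of the d-choice process just described (the function name is illustrative). For m = n and d = 2 the observed maximum load typically lands near log log n / log 2 plus a small constant, far below the single-choice value.

```python
import random

def max_load_d_choices(m, n, d=2):
    """Allocate m balls into n bins: each ball samples d bins uniformly at random
    (with replacement) and is placed in the least loaded of them (ties broken by
    the order of the sampled bins)."""
    loads = [0] * n
    for _ in range(m):
        candidates = [random.randrange(n) for _ in range(d)]
        best = min(candidates, key=lambda i: loads[i])
        loads[best] += 1
    return max(loads)

if __name__ == "__main__":
    n = 10_000
    print("d = 1:", max_load_d_choices(n, n, d=1))   # roughly log n / log log n
    print("d = 2:", max_load_d_choices(n, n, d=2))   # roughly log log n / log 2 + O(1)
```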

This result can be generalized to the case [math]\displaystyle{ m \ge n }[/math] (with [math]\displaystyle{ d \ge 2 }[/math]); in this case, with high probability, the maximum load is:[9]

[math]\displaystyle{ \frac{m}{n}+\frac{\log \log n}{\log d}+\Theta(1) }[/math]

which is tight up to an additive constant. (All the bounds hold with probability at least [math]\displaystyle{ 1 - 1/n^c }[/math] for any constant [math]\displaystyle{ c\gt 0 }[/math].) Note that for [math]\displaystyle{ m \gt n \log n }[/math], the totally random allocation process achieves, with high probability, only a maximum load of [math]\displaystyle{ \frac{m}{n}+\Theta\left(\sqrt{\frac{m \log n}{n}}\right) }[/math], so the improvement offered by the multiple-choice process is especially visible for large values of [math]\displaystyle{ m }[/math].

Other key variants of the paradigm are "parallel balls-into-bins" where [math]\displaystyle{ n }[/math] balls choose [math]\displaystyle{ d }[/math] random bins in parallel,[10] "weighted balls-into-bins" where balls have non-unit weights,[11][12][13] and "balls-into-bins with deletions" where balls can be added as well as deleted.[2][14]

Infinite stream of balls

Instead of placing just m balls, one can consider an infinite process in which, at each time step, a single ball is added and a single ball is removed, so that the number of balls remains constant. For m=n, after a sufficiently long time, with high probability the maximum load is similar to that of the finite version, both with random allocation and with partially random allocation.[8]
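
A rough simulation sketch of such an infinite process, under the illustrative assumptions that the removed ball is chosen uniformly at random among all balls and that the inserted ball uses the two-choice rule; these details are one natural instantiation, not the specific model of the cited analysis.

```python
import random

def infinite_stream_max_load(n, steps=100_000, d=2):
    """Maintain exactly n balls in n bins: at each step remove one ball chosen
    uniformly at random, then insert a new ball using the d-choice rule.
    Returns the maximum load after the given number of steps."""
    loads = [1] * n                 # start from one ball per bin
    balls = list(range(n))          # balls[i] = bin currently holding ball i
    for _ in range(steps):
        victim = random.randrange(n)            # ball to remove
        loads[balls[victim]] -= 1
        candidates = [random.randrange(n) for _ in range(d)]
        best = min(candidates, key=lambda i: loads[i])
        loads[best] += 1                        # re-insert as a fresh ball
        balls[victim] = best
    return max(loads)

if __name__ == "__main__":
    print(infinite_stream_max_load(1_000))      # typically a small constant plus log log n
```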

Repeated balls-into-bins

In a repeated variant of the process, [math]\displaystyle{ m }[/math] balls are initially distributed in [math]\displaystyle{ n }[/math] bins in an arbitrary way and then, in every subsequent step of a discrete-time process, one ball is chosen from each non-empty bin and re-assigned to one of the [math]\displaystyle{ n }[/math] bins uniformly at random. When [math]\displaystyle{ m=n }[/math], it has been shown that with high probability the process converges to a configuration with maximum load [math]\displaystyle{ \mathcal O(\log(n)) }[/math] after [math]\displaystyle{ \mathcal O(n) }[/math] steps.[15]
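
A minimal sketch of the repeated process (illustrative), starting from the adversarial configuration in which all n balls sit in a single bin; the result above predicts a maximum load of O(log n) after O(n) rounds with high probability.

```python
import random

def repeated_balls_into_bins(loads, rounds):
    """In every round, take one ball from each non-empty bin and reassign each of
    these balls to a bin chosen uniformly at random. Returns the final maximum load."""
    n = len(loads)
    for _ in range(rounds):
        moving = 0
        for i in range(n):
            if loads[i] > 0:
                loads[i] -= 1
                moving += 1
        for _ in range(moving):
            loads[random.randrange(n)] += 1
    return max(loads)

if __name__ == "__main__":
    n = 1_000
    loads = [0] * n
    loads[0] = n                                  # all n balls start in one bin
    print(repeated_balls_into_bins(loads, 2 * n)) # typically O(log n)
```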

Applications

Online load balancing:[16] consider a set of n identical computers. There are n users who need computing services. The users are not coordinated: each user arrives independently and selects which computer to use. Each user would of course like to select the least loaded computer, but this requires checking the load on every computer, which might take a long time. Another option is to select a computer at random; this leads, with high probability, to a maximum load of [math]\displaystyle{ \frac{\log n}{\log \log n} \cdot (1 + o(1)) }[/math]. A possible compromise is for each user to check only two computers and use the less loaded of the two. This leads, with high probability, to a much smaller maximum load of [math]\displaystyle{ \log_2 \log n + \Theta(1) }[/math].

Hashing: consider a hash table in which all keys mapped to the same location are stored in a linked list. The efficiency of accessing a key depends on the length of its list. If we use a single hash function which selects locations with uniform probability, with high probability the longest chain has [math]\displaystyle{ O\left(\frac{\log n}{\log \log n}\right) }[/math] keys. A possible improvement is to use two hash functions, and put each new key in the shorter of the two lists. In this case, with high probability the longest chain has only [math]\displaystyle{ O(\log \log n) }[/math] elements.[17]
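
A simplified two-choice chained hash table in Python; the specific hash functions and class layout here are illustrative assumptions rather than a prescribed construction. Each key is inserted into the shorter of the two chains addressed by its two hash values, and lookups check both chains.

```python
import hashlib

class TwoChoiceHashTable:
    """Chained hash table in which every key may live in one of two buckets and is
    inserted into the currently shorter chain (a sketch, not a production design)."""

    def __init__(self, n_buckets):
        self.buckets = [[] for _ in range(n_buckets)]

    def _positions(self, key):
        n = len(self.buckets)
        h1 = int(hashlib.md5(key.encode()).hexdigest(), 16) % n
        h2 = int(hashlib.sha1(key.encode()).hexdigest(), 16) % n
        return h1, h2

    def insert(self, key, value):
        h1, h2 = self._positions(key)
        # Put the key into the shorter of its two candidate chains.
        target = h1 if len(self.buckets[h1]) <= len(self.buckets[h2]) else h2
        self.buckets[target].append((key, value))

    def lookup(self, key):
        # The key may be in either of its two buckets, so check both chains.
        for h in self._positions(key):
            for k, v in self.buckets[h]:
                if k == key:
                    return v
        return None

if __name__ == "__main__":
    table = TwoChoiceHashTable(1_000)
    for i in range(1_000):
        table.insert(f"key{i}", i)
    print(table.lookup("key42"))                   # 42
    print(max(len(b) for b in table.buckets))      # longest chain, typically very short
```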

Fair cake-cutting: consider the problem of creating a partially proportional division of a heterogeneous resource among [math]\displaystyle{ n }[/math] people, such that each person receives a part of the resource which that person values as at least [math]\displaystyle{ 1/(an) }[/math] of the total, where [math]\displaystyle{ a }[/math] is some sufficiently large constant. The Edmonds–Pruhs protocol is a randomized algorithm whose analysis makes use of balls-into-bins arguments.[18]

References

  1. Oliveira, Rafael (May 20, 2021). "Lecture 4: Balls & Bins". https://cs.uwaterloo.ca/~r5olivei/courses/2021-spring-cs466/lecture04.pdf. 
  2. Mitzenmacher, Michael; Richa, Andrea; Sitaraman, Ramesh (July 2001). "The power of two random choices: A survey of techniques and results". Handbook of Randomized Computing (Kluwer Press) 1: 255–305.
  3. Kolchin, Valentin F. (1978). Random allocations. Washington: Winston [usw.]. ISBN 978-0470993941. 
  4. Kotz, Samuel; Johnson, Norman Lloyd (1977). Urn models and their applications. New York, NY: John Wiley & Sons. ISBN 978-0471446309. 
  5. Gonnet, Gaston H. (1981). "Expected Length of the Longest Probe Sequence in Hash Code Searching". Journal of the Association for Computing Machinery 28 (2): 289–304. doi:10.1145/322248.322254. 
  6. Devroye, Luc (1985). "The expected length of the longest probe sequence for bucket searching when the distribution is not uniform". Journal of Algorithms 6 (1): 1–9. doi:10.1016/0196-6774(85)90015-X. 
  7. Raab, Martin; Steger, Angelika (1998). ""Balls into Bins" — A Simple and Tight Analysis". Randomization and Approximation Techniques in Computer Science. Lecture Notes in Computer Science. 1518. pp. 159–170. doi:10.1007/3-540-49543-6_13. ISBN 978-3-540-65142-0.
  8. Azar, Yossi; Broder, Andrei Z.; Karlin, Anna R.; Upfal, Eli (1999). "Balanced Allocations". SIAM Journal on Computing 29 (1): 180–200. doi:10.1137/s0097539795288490.
  9. Berenbrink, Petra; Czumaj, Artur; Steger, Angelika; Vöcking, Berthold (2006). "Balanced Allocations: The Heavily Loaded Case". SIAM Journal on Computing 35 (6): 180–200. doi:10.1137/S009753970444435X. 
  10. Berenbrink, Petra; Czumaj, Artur; Englert, Matthias; Friedetzky, Tom; Nagel, Lars (2012). "Multiple-choice balanced allocation in (almost) parallel". APPROX 2012, RANDOM 2012: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques. pp. 411–422. doi:10.1007/978-3-642-32512-0_35. 
  11. Talwar, Kunal; Udi, Wieder (2007). "Balanced Allocations: The Weighted Case". Proceedings on 39th Annual ACM Symposium on Theory of Computing (STOC). pp. 256–265. doi:10.1145/1250790.1250829. 
  12. Berenbrink, Petra; Friedetzky, Tom; Zengjian, Hu; Russell, Martin (2005). "On weighted balls-into-bins games". STACS. pp. 231–243. doi:10.1007/978-3-540-31856-9_19. 
  13. Peres, Yuval; Talwar, Kunal; Wieder, Udi (2015). "Graphical balanced allocations and the [math]\displaystyle{ (1+\beta) }[/math]-choice process". Random Structures & Algorithms: 760–775. doi:10.1002/rsa.20558.
  14. Cole, Richard; Frieze, Alan; Maggs, Bruce M.; Mitzenmacher, Michael; Richa, Andrea; Sitaraman, Ramesh; Upfal, Eli (1998). "On balls and bins with deletions". Randomization and approximation techniques in computer science (Barcelona, 1998). pp. 145–158. doi:10.1007/3-540-49543-6_12. 
  15. Becchetti, Luca; Clementi, Andrea; Natale, Emanuele; Pasquale, Francesco; Posta, Gustavo (2015-06-13). "Self-Stabilizing Repeated Balls-into-Bins". Proceedings of the 27th ACM symposium on Parallelism in Algorithms and Architectures. SPAA '15. Portland, Oregon, USA: Association for Computing Machinery. pp. 332–339. doi:10.1145/2755573.2755584. ISBN 978-1-4503-3588-1.
  16. Raab, Martin; Steger, Angelika (1998). ""Balls into Bins" — A Simple and Tight Analysis". in Luby, Michael; Rolim, José D. P.; Serna, Maria (in en). Randomization and Approximation Techniques in Computer Science. Lecture Notes in Computer Science. 1518. Berlin, Heidelberg: Springer. pp. 159–170. doi:10.1007/3-540-49543-6_13. ISBN 978-3-540-49543-7. https://link.springer.com/chapter/10.1007%2F3-540-49543-6_13. 
  17. Karp, R. M. (1996). "Efficient PRAM simulation on a distributed memory machine". Algorithmica 16 (4–5): 517–542. doi:10.1007/bf01940878. 
  18. Edmonds, Jeff; Pruhs, Kirk (2006). "Balanced Allocations of Cake". 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06). pp. 623–634. doi:10.1109/FOCS.2006.17. ISBN 0-7695-2720-5. https://people.cs.pitt.edu/~kirk/papers/focs2006.pdf.