Bin packing problem
The bin packing problem^{[1]}^{[2]}^{[3]}^{[4]} is an optimization problem, in which items of different sizes must be packed into a finite number of bins or containers, each of a fixed given capacity, in a way that minimizes the number of bins used. The problem has many applications, such as filling up containers, loading trucks with weight capacity constraints, creating file backups in media, and technology mapping in FPGA semiconductor chip design.
Computationally, the problem is NP-hard, and the corresponding decision problem (deciding if items can fit into a specified number of bins) is NP-complete. Despite its worst-case hardness, optimal solutions to very large instances of the problem can be produced with sophisticated algorithms. In addition, many approximation algorithms exist. For example, the first-fit algorithm provides a fast but often non-optimal solution, involving placing each item into the first bin in which it will fit. It requires Θ(n log n) time, where n is the number of items to be packed. The algorithm can be made much more effective by first sorting the list of items into decreasing order (sometimes known as the first-fit-decreasing algorithm), although this still does not guarantee an optimal solution, and for longer lists may increase the running time of the algorithm. It is known, however, that there always exists at least one ordering of items that allows first-fit to produce an optimal solution.^{[5]}
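As a concrete illustration, the first-fit procedure described above can be sketched in a few lines of Python (a minimal, illustrative implementation; integer sizes are used to avoid floating-point comparisons):

```python
def first_fit(sizes, capacity):
    """Place each item into the first open bin with enough free space,
    opening a new bin when no open bin fits. Returns the packed bins."""
    bins = []       # contents of each bin
    free = []       # remaining free space of each bin
    for size in sizes:
        for i, f in enumerate(free):
            if size <= f:           # first bin that fits wins
                bins[i].append(size)
                free[i] -= size
                break
        else:                       # no open bin fits: open a new one
            bins.append([size])
            free.append(capacity - size)
    return bins

# Example: six items, bins of capacity 10.
print(first_fit([4, 8, 5, 1, 7, 6], 10))  # [[4, 5, 1], [8], [7], [6]]
```

Here first-fit happens to use 4 bins, which is optimal since the total size is 31 > 3 × 10.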
There are many variations of this problem, such as 2D packing, linear packing, packing by weight, packing by cost, and so on. The bin packing problem can also be seen as a special case of the cutting stock problem. When the number of bins is restricted to 1 and each item is characterised by both a volume and a value, the problem of maximizing the value of items that can fit in the bin is known as the knapsack problem.
A variant of bin packing that occurs in practice is when items can share space when packed into a bin. Specifically, a set of items could occupy less space when packed together than the sum of their individual sizes. This variant is known as VM packing^{[6]} since when virtual machines (VMs) are packed in a server, their total memory requirement could decrease due to pages shared by the VMs that need only be stored once. If items can share space in arbitrary ways, the bin packing problem is hard to even approximate. However, if the space sharing fits into a hierarchy, as is the case with memory sharing in virtual machines, the bin packing problem can be efficiently approximated.
Another variant of bin packing of interest in practice is so-called online bin packing. Here the items of different volume are supposed to arrive sequentially, and the decision maker has to decide whether to select and pack the currently observed item, or else to let it pass. Each decision is made without recall. In contrast, offline bin packing allows rearranging the items in the hope of achieving a better packing once additional items arrive. This of course requires additional storage for holding the items to be rearranged.
Formal statement
In Computers and Intractability^{[7]}^{:226} Garey and Johnson list the bin packing problem under the reference [SR1]. They define its decision variant as follows.
Instance: Finite set [math]\displaystyle{ I }[/math] of items, a size [math]\displaystyle{ s(i) \in \mathbb{Z}^+ }[/math] for each [math]\displaystyle{ i \in I }[/math], a positive integer bin capacity [math]\displaystyle{ B }[/math], and a positive integer [math]\displaystyle{ K }[/math].
Question: Is there a partition of [math]\displaystyle{ I }[/math] into disjoint sets [math]\displaystyle{ I_1,\dots, I_K }[/math] such that the sum of the sizes of the items in each [math]\displaystyle{ I_j }[/math] is [math]\displaystyle{ B }[/math] or less?
Note that in the literature an equivalent notation is often used, where [math]\displaystyle{ B = 1 }[/math] and [math]\displaystyle{ s(i) \in \mathbb{Q} \cap (0,1] }[/math] for each [math]\displaystyle{ i \in I }[/math]. Furthermore, research is mostly interested in the optimization variant, which asks for the smallest possible value of [math]\displaystyle{ K }[/math]. A solution is optimal if it has minimal [math]\displaystyle{ K }[/math]. The [math]\displaystyle{ K }[/math]-value of an optimal solution for a set of items [math]\displaystyle{ I }[/math] is denoted by [math]\displaystyle{ \mathrm{OPT}(I) }[/math], or just [math]\displaystyle{ \mathrm{OPT} }[/math] if the set of items is clear from the context.
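The decision variant can be checked directly by exhaustive search over all assignments of items to bins. The sketch below is only feasible for tiny instances, but it makes the definition concrete (an illustrative sketch, not an efficient algorithm):

```python
from itertools import product

def can_pack(sizes, B, K):
    """Decision variant: can the items be partitioned into at most K
    subsets whose sizes each sum to B or less?  Tries all K^n
    assignments of items to bins, so only usable for tiny instances."""
    for assignment in product(range(K), repeat=len(sizes)):
        loads = [0] * K
        for size, bin_index in zip(sizes, assignment):
            loads[bin_index] += size
        if all(load <= B for load in loads):
            return True
    return False
```

For example, items of sizes 4, 8, 5, 1, 7, 6 with B = 10 fit into K = 4 bins but not into K = 3, since their total size is 31 > 3 × 10.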
A possible integer linear programming formulation of the problem is:
minimize [math]\displaystyle{ K = \sum_{j=1}^n y_j }[/math]  
subject to  [math]\displaystyle{ K \geq 1, }[/math]  
[math]\displaystyle{ \sum_{i \in I} s(i) x_{ij} \leq B y_j, }[/math]  [math]\displaystyle{ \forall j \in \{1,\ldots,n\} }[/math]  
[math]\displaystyle{ \sum_{j=1}^n x_{ij} = 1, }[/math]  [math]\displaystyle{ \forall i \in I }[/math]  
[math]\displaystyle{ y_j \in \{0,1\}, }[/math]  [math]\displaystyle{ \forall j \in \{1,\ldots,n\} }[/math]  
[math]\displaystyle{ x_{ij} \in \{0,1\}, }[/math]  [math]\displaystyle{ \forall i \in I \, \forall j \in \{1,\ldots,n\} }[/math] 
where [math]\displaystyle{ y_j = 1 }[/math] if bin [math]\displaystyle{ j }[/math] is used and [math]\displaystyle{ x_{ij} = 1 }[/math] if item [math]\displaystyle{ i }[/math] is put into bin [math]\displaystyle{ j }[/math].^{[8]}
Hardness of bin packing
The bin packing problem is strongly NP-complete. This can be proven by reducing the strongly NP-complete 3-partition problem to bin packing.^{[7]}
Furthermore, there can be no approximation algorithm with absolute approximation ratio smaller than [math]\displaystyle{ 3/2 }[/math] unless [math]\displaystyle{ P = NP }[/math]. This can be proven by a reduction from the partition problem:^{[9]} given an instance of Partition where the sum of all input numbers is 2T, construct an instance of bin packing in which the bin size is T. If there exists an equal partition of the inputs, then the optimal packing needs 2 bins; therefore, every algorithm with approximation ratio smaller than 3/2 must return fewer than 3 bins, and hence exactly 2 bins. In contrast, if there is no equal partition of the inputs, then the optimal packing needs at least 3 bins.
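The construction in this reduction is simple to state in code. The sketch below builds the bin capacity from a Partition instance and decides, by an exact (exponential-time) subset-sum check included only for illustration, whether the optimal packing uses 2 bins:

```python
def partition_reduction(numbers):
    """From a Partition instance with total 2T, build bin capacity T and
    decide whether an equal partition exists, i.e. whether the optimal
    packing uses 2 bins (otherwise it needs at least 3)."""
    total = sum(numbers)
    T, odd = divmod(total, 2)
    if odd:                     # odd total: no equal partition possible
        return T, False
    reachable = {0}             # exact subset-sum check (exponential)
    for x in numbers:
        reachable |= {s + x for s in reachable}
    return T, T in reachable
```

For instance, the inputs 3, 1, 1, 2, 2, 1 (total 10) give capacity T = 5 and do admit an equal partition, so they pack into 2 bins; the inputs 5, 5, 4 give T = 7 and do not.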
On the other hand, bin packing is solvable in pseudo-polynomial time for any fixed number of bins [math]\displaystyle{ K }[/math], and solvable in polynomial time for any fixed bin capacity [math]\displaystyle{ B }[/math].^{[7]}
Approximation algorithms for bin packing
To measure the performance of an approximation algorithm, two approximation ratios are considered in the literature. For a given list of items [math]\displaystyle{ L }[/math], the number [math]\displaystyle{ A(L) }[/math] denotes the number of bins used when algorithm [math]\displaystyle{ A }[/math] is applied to list [math]\displaystyle{ L }[/math], while [math]\displaystyle{ \mathrm{OPT}(L) }[/math] denotes the optimum number for this list. The absolute worst-case performance ratio [math]\displaystyle{ R_A }[/math] for an algorithm [math]\displaystyle{ A }[/math] is defined as
 [math]\displaystyle{ R_A \equiv \inf\{r \geq 1 : A(L)/\mathrm{OPT}(L) \leq r \text{ for all lists } L\}. }[/math]
On the other hand, the asymptotic worstcase ratio [math]\displaystyle{ R_A^{\infty} }[/math] is defined as
 [math]\displaystyle{ R_A^\infty \equiv \inf\{ r \geq 1: \exists N \gt 0, A(L)/\mathrm{OPT}(L) \leq r \text{ for all lists } L \text{ with } \mathrm{OPT}(L) \geq N\}. }[/math]
Equivalently, [math]\displaystyle{ R_A^{\infty} }[/math] is the smallest number such that, for some constant K, for all lists L:^{[4]}
 [math]\displaystyle{ A(L) \leq R^{\infty}_A \cdot OPT(L) + K }[/math].
Additionally, one can restrict the lists to those for which all items have a size of at most [math]\displaystyle{ \alpha }[/math]. For such lists, the bounded size performance ratios are denoted as [math]\displaystyle{ R_A(\text{size}\leq \alpha) }[/math] and [math]\displaystyle{ R_A^\infty(\text{size}\leq \alpha) }[/math].
Approximation algorithms for bin packing can be classified into two categories:
 Online heuristics, which consider the items in a given order and place them one by one inside the bins. These heuristics are also applicable to the online version of this problem.
 Offline heuristics, which modify the given list of items, e.g. by sorting the items by size. These algorithms are no longer applicable to the online variant of this problem. However, they have an improved approximation guarantee while maintaining the advantage of their small time complexity. A subcategory of offline heuristics is asymptotic approximation schemes. These algorithms have an approximation guarantee of the form [math]\displaystyle{ (1+\varepsilon)\mathrm{OPT}(L) + C }[/math] for some constant [math]\displaystyle{ C }[/math] that may depend on [math]\displaystyle{ 1/\varepsilon }[/math]. For an arbitrarily large [math]\displaystyle{ \mathrm{OPT}(L) }[/math] these algorithms get arbitrarily close to [math]\displaystyle{ \mathrm{OPT}(L) }[/math]. However, this comes at the cost of a (drastically) increased time complexity compared to the heuristic approaches.
Online heuristics
In the online version of the bin packing problem, the items arrive one after another and the (irreversible) decision where to place an item has to be made before knowing the next item or even if there will be another one. A diverse set of offline and online heuristics for bin packing have been studied by David S. Johnson in his Ph.D. thesis.^{[10]}
Singleclass algorithms
There are many simple algorithms that use the following general scheme:
 For each item in the input list:
 If the item fits into one of the currently open bins, then put it in one of these bins;
 Otherwise, open a new bin and put the new item in it.
The algorithms differ in the criterion by which they choose the open bin for the new item in the first step (see the linked pages for more information):
 Next Fit (NF) always keeps a single open bin. When the new item does not fit into it, it closes the current bin and opens a new bin. Its advantage is that it is a bounded-space algorithm, since it only needs to keep a single open bin in memory. Its disadvantage is that its asymptotic approximation ratio is 2. In particular, [math]\displaystyle{ NF(L) \leq 2 \cdot \mathrm{OPT}(L) - 1 }[/math], and for each [math]\displaystyle{ N \in \mathbb{N} }[/math] there exists a list [math]\displaystyle{ L }[/math] such that [math]\displaystyle{ \mathrm{OPT}(L) = N }[/math] and [math]\displaystyle{ NF(L) = 2 \cdot \mathrm{OPT}(L) - 2 }[/math].^{[10]} Its asymptotic approximation ratio can be somewhat improved based on the item sizes: [math]\displaystyle{ R_{NF}^\infty(\text{size}\leq\alpha) \leq 2 }[/math] for all [math]\displaystyle{ \alpha \geq 1/2 }[/math] and [math]\displaystyle{ R_{NF}^\infty(\text{size}\leq\alpha) \leq 1/(1-\alpha) }[/math] for all [math]\displaystyle{ \alpha \leq 1/2 }[/math]. For each algorithm [math]\displaystyle{ A }[/math] that is an AnyFit algorithm it holds that [math]\displaystyle{ R_{A}^{\infty}(\text{size}\leq\alpha)\leq R_{NF}^{\infty}(\text{size}\leq\alpha) }[/math].
 Next-k-Fit (NkF) is a variant of Next-Fit, but instead of keeping only one bin open, the algorithm keeps the last [math]\displaystyle{ k }[/math] bins open and chooses the first bin in which the item fits. Therefore, it is called a k-bounded space algorithm.^{[11]} For [math]\displaystyle{ k\geq 2 }[/math], NkF delivers results that are improved compared to the results of NF; however, increasing [math]\displaystyle{ k }[/math] to constant values larger than [math]\displaystyle{ 2 }[/math] improves the algorithm no further in its worst-case behavior. If algorithm [math]\displaystyle{ A }[/math] is an AlmostAnyFit algorithm and [math]\displaystyle{ m = \lfloor 1/\alpha\rfloor \geq 2 }[/math], then [math]\displaystyle{ R_{A}^{\infty}(\text{size}\leq\alpha)\leq R_{N2F}^{\infty}(\text{size}\leq\alpha) = 1+1/m }[/math].^{[10]}
 First-Fit (FF) keeps all bins open, in the order in which they were opened. It attempts to place each new item into the first bin in which it fits. Its approximation ratio is [math]\displaystyle{ FF(L) \leq \lfloor 1.7\mathrm{OPT}\rfloor }[/math], and there is a family of input lists [math]\displaystyle{ L }[/math] for which [math]\displaystyle{ FF(L) }[/math] matches this bound.^{[12]}
 Best-Fit (BF), too, keeps all bins open, but attempts to place each new item into the bin with the maximum load in which it fits. Its approximation ratio is identical to that of FF, that is: [math]\displaystyle{ BF(L) \leq \lfloor 1.7\mathrm{OPT}\rfloor }[/math], and there is a family of input lists [math]\displaystyle{ L }[/math] for which [math]\displaystyle{ BF(L) }[/math] matches this bound.^{[13]}
 Worst-Fit (WF) attempts to place each new item into the bin with the minimum load. It can behave as badly as Next-Fit, and will do so on the worst-case list for which [math]\displaystyle{ NF(L) = 2 \cdot \mathrm{OPT}(L) - 2 }[/math]. Furthermore, it holds that [math]\displaystyle{ R_{WF}^{\infty}(\text{size}\leq \alpha) = R_{NF}^{\infty}(\text{size}\leq \alpha) }[/math]. Since WF is an AnyFit algorithm, there exists an AnyFit algorithm such that [math]\displaystyle{ R_{AF}^{\infty}(\alpha) = R_{NF}^{\infty}(\alpha) }[/math].^{[10]}
 Almost Worst-Fit (AWF) attempts to place each new item inside the second most empty open bin (or the emptiest bin if there are two such bins). If it does not fit, it tries the most empty one. It has an asymptotic worst-case ratio of [math]\displaystyle{ 17/10 }[/math].^{[10]}
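The heuristics above all follow the general scheme and differ only in the bin-choice rule, which can be made explicit in code. The following is a minimal sketch (function names are illustrative):

```python
def pack(sizes, capacity, choose):
    """Generic online scheme: `choose(item, free)` returns the index of
    an open bin to use, or None to open a new bin.  `free` lists the
    remaining capacity of each open bin.  Returns the number of bins."""
    free = []
    for item in sizes:
        i = choose(item, free)
        if i is None:
            free.append(capacity - item)
        else:
            free[i] -= item
    return len(free)

def next_fit(item, free):   # only the most recently opened bin is considered
    return len(free) - 1 if free and free[-1] >= item else None

def first_fit(item, free):  # first open bin that fits
    return next((i for i, f in enumerate(free) if f >= item), None)

def best_fit(item, free):   # fullest bin that still fits
    fits = [i for i, f in enumerate(free) if f >= item]
    return min(fits, key=lambda i: free[i]) if fits else None

def worst_fit(item, free):  # emptiest bin, provided the item fits
    fits = [i for i, f in enumerate(free) if f >= item]
    return max(fits, key=lambda i: free[i]) if fits else None
```

On the list 4, 8, 5, 1, 7, 6 with capacity 10, next-fit uses 5 bins while the other three rules use the optimal 4.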
In order to generalize these results, Johnson introduced two classes of online heuristics called AnyFit algorithms and AlmostAnyFit algorithms:^{[4]}^{:470}
 In an AnyFit (AF) algorithm, if the current nonempty bins are B_{1},...,B_{j}, then the current item will not be packed into B_{j+1} unless it does not fit in any of B_{1},...,B_{j}. The FF, WF, BF and AWF algorithms satisfy this condition. Johnson proved that, for any AnyFit algorithm A and any [math]\displaystyle{ \alpha }[/math]:
[math]\displaystyle{ R_{FF}^{\infty}(\alpha) \leq R_{A}^{\infty}(\alpha) \leq R_{WF}^{\infty}(\alpha) }[/math].
 In an AlmostAnyFit (AAF) algorithm, if the current nonempty bins are B_{1},...,B_{j}, and of these bins, B_{k} is the unique bin with the smallest load, then the current item will not be packed into B_{k}, unless it does not fit into any of the bins to its left. The FF, BF and AWF algorithms satisfy this condition, but WF does not. Johnson proved that, for any AAF algorithm A and any [math]\displaystyle{ \alpha }[/math]:
[math]\displaystyle{ R_{A}^{\infty}(\alpha) = R_{FF}^{\infty}(\alpha) }[/math]. In particular, [math]\displaystyle{ R_{A}^{\infty} = 1.7 }[/math].
Refined algorithms
Better approximation ratios are possible with heuristics that are not AnyFit. These heuristics usually keep several classes of open bins, devoted to items of different size ranges (see the linked pages for more information):
 Refined-first-fit bin packing (RFF) partitions the item sizes into four ranges: [math]\displaystyle{ (1/2,1] }[/math], [math]\displaystyle{ (2/5,1/2] }[/math], [math]\displaystyle{ (1/3,2/5] }[/math], and [math]\displaystyle{ (0,1/3] }[/math]. Similarly, the bins are categorized into four classes. The next item [math]\displaystyle{ i \in L }[/math] is first assigned to its corresponding class. Inside that class, it is assigned to a bin using first-fit. Note that this algorithm is not an AnyFit algorithm since it may open a new bin despite the fact that the current item fits inside an open bin. This algorithm was first presented by Andrew Chi-Chih Yao,^{[14]} who proved that it has an approximation guarantee of [math]\displaystyle{ RFF(L) \leq (5/3) \cdot \mathrm{OPT}(L) +5 }[/math] and presented a family of lists [math]\displaystyle{ L_k }[/math] with [math]\displaystyle{ RFF(L_k) = (5/3)\mathrm{OPT}(L_k) +1/3 }[/math] for [math]\displaystyle{ \mathrm{OPT}(L_k) = 6k+1 }[/math].
 Harmonic-k partitions the interval of sizes [math]\displaystyle{ (0,1] }[/math], based on a harmonic progression, into the [math]\displaystyle{ k-1 }[/math] pieces [math]\displaystyle{ I_j := (1/(j+1),1/j] }[/math] for [math]\displaystyle{ 1\leq j \lt k }[/math] and the piece [math]\displaystyle{ I_k := (0,1/k] }[/math], such that [math]\displaystyle{ \bigcup_{j=1}^k I_j = (0,1] }[/math]. This algorithm was first described by Lee and Lee.^{[15]} It has a time complexity of [math]\displaystyle{ \mathcal{O}(L\log(L)) }[/math] and at each step, there are at most [math]\displaystyle{ k }[/math] open bins that can potentially be used to place items, i.e., it is a [math]\displaystyle{ k }[/math]-bounded space algorithm. For [math]\displaystyle{ k \rightarrow \infty }[/math], its approximation ratio satisfies [math]\displaystyle{ R_{Hk}^{\infty} \approx 1.6910 }[/math], and it is asymptotically tight.
 Refined-harmonic combines ideas from Harmonic-k with ideas from Refined-First-Fit. It places the items larger than [math]\displaystyle{ 1/3 }[/math] similarly to Refined-First-Fit, while the smaller items are placed using Harmonic-k. The intuition for this strategy is to reduce the huge waste for bins containing pieces that are just larger than [math]\displaystyle{ 1/2 }[/math]. This algorithm was first described by Lee and Lee.^{[15]} They proved that for [math]\displaystyle{ k = 20 }[/math] it holds that [math]\displaystyle{ R^\infty_{RH} \leq 373/228 }[/math].
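The size classification at the heart of Harmonic-k can be sketched as follows (an illustrative implementation under the convention that each class-j bin receives exactly j items of class j, and class-k items are packed by next-fit; variable names are assumptions):

```python
import math

def harmonic_k(sizes, k):
    """Harmonic-k sketch: an item of size s in I_j = (1/(j+1), 1/j] goes
    into a class-j bin holding exactly j such items (for j < k); items of
    size at most 1/k are packed into class-k bins by next-fit.  Bin
    capacity is 1.  Returns the total number of bins opened."""
    bins = 0
    filled = [0] * k        # items in the currently open class-j bin
    free_small = 0.0        # free space in the open class-k bin
    for s in sizes:
        j = min(math.floor(1 / s), k)   # class index: s lies in (1/(j+1), 1/j]
        if j < k:
            if filled[j] == 0:
                bins += 1                    # open a new class-j bin
            filled[j] = (filled[j] + 1) % j  # bin closes after j items
        else:
            if s > free_small:
                bins += 1                    # open a new class-k bin
                free_small = 1.0
            free_small -= s
    return bins
```

For example, with k = 3 the list 0.6, 0.6, 0.4, 0.4, 0.3 uses two class-1 bins, one class-2 bin, and one class-3 bin.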
General lower bounds for online algorithms
Yao^{[14]} proved in 1980 that there can be no online algorithm with asymptotic competitive ratio smaller than [math]\displaystyle{ 3/2 }[/math]. Brown^{[16]} and Liang^{[17]} improved this bound to [math]\displaystyle{ 1.53635 }[/math]. Afterward, this bound was improved to [math]\displaystyle{ 1.54014 }[/math] by van Vliet.^{[18]} In 2012, this lower bound was again improved by Békési and Galambos^{[19]} to [math]\displaystyle{ 248/161 \approx 1.54037 }[/math].
Comparison table
Algorithm  Approximation guarantee  Worst case list [math]\displaystyle{ L }[/math]  Time complexity 

Next-fit (NF)  [math]\displaystyle{ NF(L) \leq 2 \cdot \mathrm{OPT}(L) - 1 }[/math]^{[10]}  [math]\displaystyle{ NF(L) = 2 \cdot \mathrm{OPT}(L) - 2 }[/math]^{[10]}  [math]\displaystyle{ \mathcal{O}(L) }[/math] 
First-fit (FF)  [math]\displaystyle{ FF(L) \leq \lfloor 1.7\mathrm{OPT}(L)\rfloor }[/math]^{[12]}  [math]\displaystyle{ FF(L) = \lfloor 1.7\mathrm{OPT}(L)\rfloor }[/math]^{[12]}  [math]\displaystyle{ \mathcal{O}(L\log(L)) }[/math]^{[10]} 
Best-fit (BF)  [math]\displaystyle{ BF(L) \leq \lfloor 1.7\mathrm{OPT}(L) \rfloor }[/math]^{[13]}  [math]\displaystyle{ BF(L) =\lfloor 1.7\mathrm{OPT}(L) \rfloor }[/math]^{[13]}  [math]\displaystyle{ \mathcal{O}(L\log(L)) }[/math]^{[10]} 
Worst-fit (WF)  [math]\displaystyle{ WF(L) \leq 2 \cdot \mathrm{OPT}(L) - 1 }[/math]^{[10]}  [math]\displaystyle{ WF(L) = 2 \cdot \mathrm{OPT}(L) - 2 }[/math]^{[10]}  [math]\displaystyle{ \mathcal{O}(L\log(L)) }[/math]^{[10]} 
Almost-worst-fit (AWF)  [math]\displaystyle{ R^{\infty}_{AWF} \leq 17/10 }[/math]^{[10]}  [math]\displaystyle{ R^{\infty}_{AWF} = 17/10 }[/math]^{[10]}  [math]\displaystyle{ \mathcal{O}(L\log(L)) }[/math]^{[10]} 
Refined-first-fit (RFF)  [math]\displaystyle{ RFF(L) \leq (5/3) \cdot \mathrm{OPT}(L) +5 }[/math]^{[14]}  [math]\displaystyle{ RFF(L)=(5/3)\mathrm{OPT}(L) +1/3 }[/math] (for [math]\displaystyle{ \mathrm{OPT}(L) = 6k+1 }[/math])^{[14]}  [math]\displaystyle{ \mathcal{O}(L\log(L)) }[/math]^{[14]} 
Harmonic-k (Hk)  [math]\displaystyle{ R_{Hk}^{\infty} \leq 1.69103 }[/math] for [math]\displaystyle{ k \rightarrow \infty }[/math]^{[15]}  [math]\displaystyle{ R_{Hk}^{\infty} \geq 1.69103 }[/math]^{[15]}  [math]\displaystyle{ \mathcal{O}(L\log(L)) }[/math]^{[15]} 
Refined-harmonic (RH)  [math]\displaystyle{ R_{RH}^{\infty} \leq 373/228 \approx 1.63597 }[/math]^{[15]}  [math]\displaystyle{ \mathcal{O}(L) }[/math]^{[15]}  
Modified Harmonic (MH)  [math]\displaystyle{ R_{MH}^{\infty} \leq 538/333 \approx 1.61562 }[/math]^{[20]}  
Modified Harmonic 2 (MH2)  [math]\displaystyle{ R_{MH2}^{\infty} \leq 239091/148304 \approx 1.61217 }[/math]^{[20]}  
Harmonic + 1 (H+1)  [math]\displaystyle{ R_{H+1}^\infty \geq 1.59217 }[/math]^{[21]}  
Harmonic ++ (H++)  [math]\displaystyle{ R_{H++}^\infty \leq 1.58889 }[/math]^{[21]}  [math]\displaystyle{ R_{H++}^{\infty} \geq 1.58333 }[/math]^{[21]} 
Offline algorithms
In the offline version of bin packing, the algorithm can see all the items before starting to place them into bins. This makes it possible to attain improved approximation ratios.
Multiplicative approximation
The simplest technique used by offline algorithms is:
 Order the input list by descending size;
 Run an online algorithm on the ordered list.
Johnson^{[10]} proved that any AnyFit algorithm A that runs on a list ordered by descending size has an asymptotic approximation ratio of
[math]\displaystyle{ 1.22 \approx \frac{11}{9} \leq R^{\infty}_A \leq \frac{5}{4} = 1.25 }[/math].
Some algorithms in this family are (see the linked pages for more information):
 First-fit-decreasing (FFD): orders the items by descending size, then calls First-Fit. Its approximation guarantee is [math]\displaystyle{ FFD(I) \leq 11/9 \mathrm{OPT}(I) +6/9 }[/math], and this bound is tight.^{[22]}
 Next-fit-decreasing (NFD): orders the items by descending size, then calls Next-Fit. Its approximation ratio is slightly less than 1.7 in the worst case.^{[23]} It has also been analyzed probabilistically.^{[24]} Next-Fit packs a list and its inverse into the same number of bins. Therefore, Next-Fit-Increasing has the same performance as Next-Fit-Decreasing.^{[25]}
 Modified first-fit-decreasing (MFFD):^{[26]} improves on FFD for items larger than half a bin by classifying items by size into four size classes: large, medium, small, and tiny, corresponding to items with size > 1/2 bin, > 1/3 bin, > 1/6 bin, and smaller items, respectively. Its approximation guarantee is [math]\displaystyle{ MFFD(I) \leq (71/60)\mathrm{OPT}(I) + 1 }[/math].^{[27]}
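The sort-then-pack technique is a one-line change to the online first-fit heuristic. A minimal Python sketch of FFD:

```python
def first_fit_decreasing(sizes, capacity):
    """FFD: sort items by descending size, then pack by first-fit.
    Returns the number of bins used."""
    free = []                           # free space per open bin
    for item in sorted(sizes, reverse=True):
        for i, f in enumerate(free):
            if f >= item:               # first open bin that fits
                free[i] -= item
                break
        else:                           # no open bin fits: open a new one
            free.append(capacity - item)
    return len(free)
```

For example, ten items of size 1 with capacity 3 require 4 bins, which FFD finds.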
Fernandez de la Vega and Lueker^{[28]} presented a PTAS for bin packing. For every [math]\displaystyle{ \varepsilon\gt 0 }[/math], their algorithm finds a solution with size at most [math]\displaystyle{ (1+\varepsilon)\mathrm{OPT} + 1 }[/math] and runs in time [math]\displaystyle{ \mathcal{O}(n\log(1/\varepsilon)) + \mathcal{O}_{\varepsilon}(1) }[/math], where [math]\displaystyle{ \mathcal{O}_{\varepsilon}(1) }[/math] denotes a function only dependent on [math]\displaystyle{ 1/\varepsilon }[/math]. For this algorithm, they invented the method of adaptive input rounding: the input numbers are grouped and rounded up to the value of the maximum in each group. This yields an instance with a small number of different sizes, which can be solved exactly using the configuration linear program.^{[29]}
Additive approximation
The Karmarkar-Karp bin packing algorithm finds a solution with size at most [math]\displaystyle{ \mathrm{OPT} + \mathcal{O}(\log^2(OPT)) }[/math], and runs in time polynomial in [math]\displaystyle{ n }[/math] (the polynomial has a high degree, at least 8).
Rothvoss^{[30]} presented an algorithm that generates a solution with at most [math]\displaystyle{ \mathrm{OPT} + O(\log(\mathrm{OPT})\cdot \log\log(\mathrm{OPT})) }[/math] bins.
Hoberg and Rothvoss^{[31]} improved this algorithm to generate a solution with at most [math]\displaystyle{ \mathrm{OPT} + O(\log(\mathrm{OPT})) }[/math] bins. The algorithm is randomized, and its runningtime is polynomial in [math]\displaystyle{ n }[/math].
Comparison table
Algorithm  Approximation guarantee  Worst case instance 

First-fit-decreasing (FFD)  [math]\displaystyle{ FFD(I) \leq 11/9 \mathrm{OPT}(I) +6/9 }[/math]^{[22]}  [math]\displaystyle{ FFD(I) = 11/9 \mathrm{OPT}(I) +6/9 }[/math]^{[22]} 
Modified first-fit-decreasing (MFFD)  [math]\displaystyle{ MFFD(I) \leq (71/60)\mathrm{OPT}(I) + 1 }[/math]^{[27]}  [math]\displaystyle{ R_{MFFD}^\infty \geq 71/60 }[/math]^{[26]} 
Karmarkar and Karp^{[32]}  [math]\displaystyle{ KK(I) \leq OPT(I) + O(\log^2{OPT(I)}) }[/math]  
Rothvoss^{[30]}  [math]\displaystyle{ HB(I) \leq OPT(I) + O(\log{OPT(I)}\log\log{OPT(I)}) }[/math]  
Hoberg and Rothvoss^{[31]}  [math]\displaystyle{ HB(I) \leq OPT(I) + O(\log{OPT(I)}) }[/math] 
Exact algorithms
Martello and Toth^{[33]} developed an exact algorithm for the one-dimensional bin-packing problem, called MTP. A faster alternative is the Bin Completion algorithm proposed by Korf in 2002^{[34]} and later improved.^{[35]}
A further improvement was presented by Schreiber and Korf in 2013.^{[36]} The new Improved Bin Completion algorithm is shown to be up to five orders of magnitude faster than Bin Completion on non-trivial problems with 100 items, and outperforms the BCP (branch-and-cut-and-price) algorithm by Belov and Scheithauer on problems whose optimal solutions use fewer than 20 bins. Which algorithm performs best depends on problem properties like the number of items, the optimal number of bins, the unused space in the optimal solution, and value precision.
Small number of different sizes
A special case of bin packing is when there is a small number d of different item sizes. There can be many different items of each size. This case is also called high-multiplicity bin packing, and it admits more efficient algorithms than the general problem.
Cardinality constraints on the bins
There is a variant of bin packing in which there are cardinality constraints on the bins: each bin can contain at most k items, for some fixed integer k.
 Krause, Shen and Schwetman^{[37]} introduce this problem as a variant of optimal job scheduling: a computer has k processors. There are n jobs that take unit time (1), but have different memory requirements. Each time unit is considered a single bin. The goal is to use as few bins (= time units) as possible, while ensuring that in each bin, at most k jobs run. They present several heuristic algorithms that find a solution with at most [math]\displaystyle{ 2 OPT }[/math] bins.
 Kellerer and Pferschy^{[38]} present an algorithm with runtime [math]\displaystyle{ O(n^2 \log{n}) }[/math], that finds a solution with at most [math]\displaystyle{ \lceil\frac{3}{2}\mathrm{OPT}\rceil }[/math] bins. Their algorithm performs a binary search for OPT. For every searched value m, it tries to pack the items into 3m/2 bins.
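A simple way to see how the cardinality constraint enters a packing heuristic is to add an item counter per bin to first-fit. The sketch below is illustrative only; it is not the Kellerer-Pferschy algorithm:

```python
def first_fit_with_cardinality(sizes, capacity, k):
    """First-fit where each bin may hold at most k items (cardinality
    constraint).  A simple heuristic for illustration, not the
    Kellerer-Pferschy algorithm.  Returns the number of bins used."""
    free, count = [], []                # free space and item count per bin
    for item in sizes:
        for i, f in enumerate(free):
            if f >= item and count[i] < k:   # room and quota for one more
                free[i] -= item
                count[i] += 1
                break
        else:                                # open a new bin
            free.append(capacity - item)
            count.append(1)
    return len(free)
```

For six items of size 2 and capacity 10, the unconstrained packing needs 2 bins, but with k = 2 the same heuristic needs 3 bins.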
Non-additive functions
There are various ways to extend the bin-packing model to more general cost and load functions:
 Anily, Bramel and Simchi-Levi^{[39]} study a setting where the cost of a bin is a concave function of the number of items in the bin. The objective is to minimize the total cost rather than the number of bins. They show that next-fit-increasing bin packing attains an absolute worst-case approximation ratio of at most 7/4, and an asymptotic worst-case ratio of 1.691 for any concave and monotone cost function.
 Cohen, Keller, Mirrokni and Zadimoghaddam^{[40]} study a setting where the size of the items is not known in advance, but it is a random variable. This is particularly common in cloud computing environments. While there is an upper bound on the amount of resources a certain user needs, most users use much less than the capacity. Therefore, the cloud manager may gain a lot by slight overcommitment. This induces a variant of bin packing with chance constraints: the probability that the sum of sizes in each bin is at most B should be at least p, where p is a fixed constant (standard bin packing corresponds to p=1). They show that, under mild assumptions, this problem is equivalent to a submodular bin packing problem, in which the "load" in each bin is not equal to the sum of items, but to a certain submodular function of it.
Related problems
In the bin packing problem, the size of the bins is fixed and their number can be enlarged (but should be as small as possible).
In contrast, in the multiway number partitioning problem, the number of bins is fixed and their size can be enlarged. The objective is to find a partition in which the bin sizes are as nearly equal as possible (in the variant called multiprocessor scheduling problem or minimum makespan problem, the goal is specifically to minimize the size of the largest bin).
In the inverse bin packing problem,^{[41]} both the number of bins and their sizes are fixed, but the item sizes can be changed. The objective is to achieve the minimum perturbation to the item size vector so that all the items can be packed into the prescribed number of bins.
In the maximum resource bin packing problem,^{[42]} the goal is to maximize the number of bins used, such that, for some ordering of the bins, no item in a later bin fits in an earlier bin. In a dual problem, the number of bins is fixed, and the goal is to minimize the total number or the total size of items placed into the bins, such that no remaining item fits into an unfilled bin.
In the bin covering problem, the bin size is bounded from below: the goal is to maximize the number of bins used such that the total size in each bin is at least a given threshold.
In the fair indivisible chore allocation problem (a variant of fair item allocation), the items represent chores, and there are different people, each of whom attributes a different difficulty value to each chore. The goal is to allocate to each person a set of chores with an upper bound on its total difficulty value (thus, each person corresponds to a bin). Many techniques from bin packing are used in this problem too.^{[43]}
In the guillotine cutting problem, both the items and the "bins" are two-dimensional rectangles rather than one-dimensional numbers, and the items have to be cut from the bin using end-to-end cuts.
In the selfish bin packing problem, each item is a player who wants to minimize its cost.^{[44]}
There is also a variant of bin packing in which the cost that should be minimized is not the number of bins, but rather a certain concave function of the number of items in each bin.^{[39]}
Other variants are two-dimensional bin packing,^{[45]} three-dimensional bin packing,^{[46]} and bin packing with delivery.^{[47]}
Resources
 BPPLIB: a library of surveys, codes, benchmarks, generators, solvers, and bibliography.
Implementations
 Online: visualization of heuristics for 1D and 2D bin packing
 Python: The prtpy package contains code for various number-partitioning, bin-packing and bin-covering algorithms. The binpacking package contains greedy algorithms for solving two typical bin packing problems.^{[48]}
 C++: The binpacking package contains various greedy algorithms as well as test data. The OR-Tools package contains bin packing algorithms in C++, with wrappers in Python, C# and Java.
 C: Implementation of 7 classic approximate bin packing algorithms in C with results and images
 PHP: PHP Class to pack files without exceeding a given size limit
 Haskell: An implementation of several bin packing heuristics, including FFD and MFFD.
 C: Fpart: an open-source command-line tool to pack files (C, BSD-licensed)
 C#: Bin Packing and Cutting Stock Solver
 Java: caparf: Cutting And Packing Algorithms Research Framework, including a number of bin packing algorithms and test data.
 API: Optioryx: a fast online 3D bin packer that takes transport cost rates into account.
References
 ↑ Martello, Silvano; Toth, Paolo (1990), "Bin-packing problem", Knapsack Problems: Algorithms and Computer Implementations, Chichester, UK: John Wiley and Sons, ISBN 0471924202, http://www.or.deis.unibo.it/kp/Chapter8.pdf
 ↑ Korte, Bernhard; Vygen, Jens (2006). "Bin-Packing". Combinatorial Optimization: Theory and Algorithms. Algorithms and Combinatorics 21. Springer. pp. 426–441. doi:10.1007/3540292977_18. ISBN 9783540256847. https://books.google.com/books?id=UnYwgPltSjwC&q=BinPacking&pg=PA449.
 ↑ Barrington, David Mix (2006). "Bin Packing". https://people.cs.umass.edu/~barring/cs311/disc/11.html.
 ↑ ^{4.0} ^{4.1} ^{4.2} Coffman Jr., Edward G.; Csirik, János; Galambos, Gábor; Martello, Silvano; Vigo, Daniele (2013), Pardalos, Panos M.; Du, Ding-Zhu; Graham, Ronald L., eds., "Bin Packing Approximation Algorithms: Survey and Classification" (in en), Handbook of Combinatorial Optimization (New York, NY: Springer): pp. 455–531, doi:10.1007/9781441979971_35, ISBN 9781441979971, https://doi.org/10.1007/9781441979971_35, retrieved 2021-08-08
 ↑ Lewis, R. (2009), "A GeneralPurpose HillClimbing Method for Order Independent Minimum Grouping Problems: A Case Study in Graph Colouring and Bin Packing", Computers and Operations Research 36 (7): 2295–2310, doi:10.1016/j.cor.2008.09.004, http://orca.cf.ac.uk/11334/1/Lewis%2C_R_General_Purpose_Hill_Climbing.pdf
 ↑ Sindelar, Michael; Sitaraman, Ramesh; Shenoy, Prashant (2011), "SharingAware Algorithms for Virtual Machine Colocation", Proceedings of 23rd ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), San Jose, CA, June 2011: 367–378, https://storage.googleapis.com/pubtoolspublicpublicationdata/pdf/37147.pdf
 ↑ ^{7.0} ^{7.1} ^{7.2} Garey, M. R.; Johnson, D. S. (1979). Victor Klee. ed. Computers and Intractability: A Guide to the Theory of NPCompleteness. A Series of Books in the Mathematical Sciences. San Francisco, Calif.: W. H. Freeman and Co.. pp. x+338. ISBN 0716710455.
 ↑ Martello & Toth 1990, p. 221
 ↑ Vazirani, Vijay V. (14 March 2013). Approximation Algorithms. Springer Berlin Heidelberg. pp. 74. ISBN 978-3-662-04565-7.
 ↑ ^{10.00} ^{10.01} ^{10.02} ^{10.03} ^{10.04} ^{10.05} ^{10.06} ^{10.07} ^{10.08} ^{10.09} ^{10.10} ^{10.11} ^{10.12} ^{10.13} ^{10.14} ^{10.15} Johnson, David S (1973). "Near-optimal bin packing algorithms". Massachusetts Institute of Technology. https://dspace.mit.edu/bitstream/handle/1721.1/57819/17595570MIT.pdf?sequence=2.
 ↑ Gonzalez, Teofilo F. (23 May 2018). Handbook of approximation algorithms and metaheuristics. Volume 2: Contemporary and emerging applications. ISBN 978-1-4987-7015-6.
 ↑ ^{12.0} ^{12.1} ^{12.2} Dósa, György; Sgall, Jiri (2013). "First Fit bin packing: A tight analysis". 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013) (Schloss Dagstuhl–Leibniz-Zentrum für Informatik) 20: 538–549. doi:10.4230/LIPIcs.STACS.2013.538. http://drops.dagstuhl.de/opus/volltexte/2013/3963.
 ↑ ^{13.0} ^{13.1} ^{13.2} Dósa, György; Sgall, Jirí (2014). "Optimal Analysis of Best Fit Bin Packing". Automata, Languages, and Programming – 41st International Colloquium (ICALP). Lecture Notes in Computer Science 8572: 429–441. doi:10.1007/978-3-662-43948-7_36. ISBN 978-3-662-43947-0.
 ↑ ^{14.0} ^{14.1} ^{14.2} ^{14.3} ^{14.4} Yao, Andrew Chi-Chih (April 1980). "New Algorithms for Bin Packing". Journal of the ACM 27 (2): 207–227. doi:10.1145/322186.322187.
 ↑ ^{15.0} ^{15.1} ^{15.2} ^{15.3} ^{15.4} ^{15.5} ^{15.6} Lee, C. C.; Lee, D. T. (July 1985). "A simple on-line bin-packing algorithm". Journal of the ACM 32 (3): 562–572. doi:10.1145/3828.3833.
 ↑ Brown, Donna J. (1979). "A Lower Bound for On-Line One-Dimensional Bin Packing Algorithms". Technical Report. https://apps.dtic.mil/dtic/tr/fulltext/u2/a085315.pdf.
 ↑ Liang, Frank M. (1980). "A lower bound for on-line bin packing". Information Processing Letters 10 (2): 76–79. doi:10.1016/S0020-0190(80)90077-0.
 ↑ van Vliet, André (1992). "An improved lower bound for on-line bin packing algorithms". Information Processing Letters 43 (5): 277–284. doi:10.1016/0020-0190(92)90223-I.
 ↑ Balogh, János; Békési, József; Galambos, Gábor (July 2012). "New lower bounds for certain classes of bin packing algorithms". Theoretical Computer Science 440–441: 1–13. doi:10.1016/j.tcs.2012.04.017.
 ↑ ^{20.0} ^{20.1} Ramanan, Prakash; Brown, Donna J; Lee, C.C; Lee, D.T (September 1989). "On-line bin packing in linear time". Journal of Algorithms 10 (3): 305–326. doi:10.1016/0196-6774(89)90031-X.
 ↑ ^{21.0} ^{21.1} ^{21.2} Seiden, Steven S. (2002). "On the online bin packing problem". Journal of the ACM 49 (5): 640–671. doi:10.1145/585265.585269.
 ↑ ^{22.0} ^{22.1} ^{22.2} Dósa, György (2007). "The Tight Bound of First Fit Decreasing Bin-Packing Algorithm Is FFD(I) ≤ 11/9 OPT(I) + 6/9". Combinatorics, Algorithms, Probabilistic and Experimental Methodologies. ESCAPE. doi:10.1007/978-3-540-74450-4_1.
 ↑ Baker, B. S.; Coffman, Jr., E. G. (1981-06-01). "A Tight Asymptotic Bound for Next-Fit-Decreasing Bin-Packing". SIAM Journal on Algebraic and Discrete Methods 2 (2): 147–152. doi:10.1137/0602019. ISSN 0196-5212. https://epubs.siam.org/doi/abs/10.1137/0602019.
 ↑ Csirik, J.; Galambos, G.; Frenk, J.B.G.; Frieze, A.M.; Rinnooy Kan, A.H.G. (1986-11-01). "A probabilistic analysis of the next fit decreasing bin packing heuristic" (in en). Operations Research Letters 5 (5): 233–236. doi:10.1016/0167-6377(86)90013-1. ISSN 0167-6377. https://www.sciencedirect.com/science/article/abs/pii/0167637786900131.
 ↑ Fisher, David C. (1988-12-01). "Next-fit packs a list and its reverse into the same number of bins" (in en). Operations Research Letters 7 (6): 291–293. doi:10.1016/0167-6377(88)90060-0. ISSN 0167-6377. https://www.sciencedirect.com/science/article/abs/pii/0167637788900600.
 ↑ ^{26.0} ^{26.1} Johnson, David S; Garey, Michael R (October 1985). "A 71/60 theorem for bin packing". Journal of Complexity 1 (1): 65–106. doi:10.1016/0885-064X(85)90022-6.
 ↑ ^{27.0} ^{27.1} Yue, Minyi; Zhang, Lei (July 1995). "A simple proof of the inequality MFFD(L) ≤ 71/60 OPT(L) + 1, L for the MFFD bin-packing algorithm". Acta Mathematicae Applicatae Sinica 11 (3): 318–330. doi:10.1007/BF02011198.
 ↑ Fernandez de la Vega, W.; Lueker, G. S. (1981). "Bin packing can be solved within 1 + ε in linear time" (in en). Combinatorica 1 (4): 349–355. doi:10.1007/BF02579456. ISSN 1439-6912.
 ↑ Claire Mathieu. "Approximation Algorithms Part I, Week 3: bin packing". https://www.coursera.org/learn/approximation-algorithms-part-1/home/week/3.
 ↑ ^{30.0} ^{30.1} Rothvoß, T. (2013-10-01). "Approximating Bin Packing within O(log OPT * log log OPT) Bins". 2013 IEEE 54th Annual Symposium on Foundations of Computer Science: 20–29. doi:10.1109/FOCS.2013.11. ISBN 978-0-7695-5135-7. https://ieeexplore.ieee.org/document/6686137.
 ↑ ^{31.0} ^{31.1} Hoberg, Rebecca; Rothvoss, Thomas (2017-01-01), "A Logarithmic Additive Integrality Gap for Bin Packing", Proceedings of the 2017 Annual ACM-SIAM Symposium on Discrete Algorithms, Proceedings (Society for Industrial and Applied Mathematics): pp. 2616–2625, doi:10.1137/1.9781611974782.172, ISBN 978-1-61197-478-2, https://epubs.siam.org/doi/abs/10.1137/1.9781611974782.172, retrieved 2021-02-10
 ↑ Karmarkar, Narendra; Karp, Richard M. (November 1982). "An efficient approximation scheme for the one-dimensional bin-packing problem". 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982): 312–320. doi:10.1109/SFCS.1982.61. https://ieeexplore.ieee.org/document/4568405/references#references.
 ↑ Martello & Toth 1990, pp. 237–240.
 ↑ Korf, Richard E. (2002). "A new algorithm for optimal bin packing". AAAI-02. http://www.aaai.org/Papers/AAAI/2002/AAAI02110.pdf.
 ↑ R. E. Korf (2003), An improved algorithm for optimal bin packing. Proceedings of the International Joint Conference on Artificial Intelligence, (pp. 1252–1258)
 ↑ Schreiber, Ethan L.; Korf, Richard E. (2013), "Improved Bin Completion for Optimal Bin Packing and Number Partitioning", Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI '13, Beijing, China: AAAI Press, pp. 651–658, ISBN 978-1-57735-633-2, https://www.ijcai.org/Proceedings/13/Papers/103.pdf
 ↑ Krause, K. L.; Shen, V. Y.; Schwetman, H. D. (1975-10-01). "Analysis of Several Task-Scheduling Algorithms for a Model of Multiprogramming Computer Systems". Journal of the ACM 22 (4): 522–550. doi:10.1145/321906.321917. ISSN 0004-5411. https://doi.org/10.1145/321906.321917.
 ↑ Kellerer, H.; Pferschy, U. (1999-01-01). "Cardinality constrained bin-packing problems" (in en). Annals of Operations Research 92: 335–348. doi:10.1023/A:1018947117526. ISSN 1572-9338. https://doi.org/10.1023/A:1018947117526.
 ↑ ^{39.0} ^{39.1} Anily, Shoshana; Bramel, Julien; Simchi-Levi, David (1994-04-01). "Worst-Case Analysis of Heuristics for the Bin Packing Problem with General Cost Structures". Operations Research 42 (2): 287–298. doi:10.1287/opre.42.2.287. ISSN 0030-364X. https://pubsonline.informs.org/doi/abs/10.1287/opre.42.2.287.
 ↑ Cohen, Maxime C.; Keller, Philipp W.; Mirrokni, Vahab; Zadimoghaddam, Morteza (2019-07-01). "Overcommitment in Cloud Services: Bin Packing with Chance Constraints". Management Science 65 (7): 3255–3271. doi:10.1287/mnsc.2018.3091. ISSN 0025-1909. https://pubsonline.informs.org/doi/abs/10.1287/mnsc.2018.3091.
 ↑ Chung, Yerim; Park, Myoung-Ju (2015-01-01). "Notes on inverse bin-packing problems" (in en). Information Processing Letters 115 (1): 60–68. doi:10.1016/j.ipl.2014.09.005. ISSN 0020-0190. http://www.sciencedirect.com/science/article/pii/S002001901400180X.
 ↑ Boyar, Joan; Epstein, Leah; Favrholdt, Lene M.; Kohrt, Jens S.; Larsen, Kim S.; Pedersen, Morten M.; Wøhlk, Sanne (2006-10-11). "The maximum resource bin packing problem" (in en). Theoretical Computer Science 362 (1): 127–139. doi:10.1016/j.tcs.2006.06.001. ISSN 0304-3975. http://www.sciencedirect.com/science/article/pii/S0304397506003483.
 ↑ Huang, Xin; Lu, Pinyan (2020-11-10). "An Algorithmic Framework for Approximating Maximin Share Allocation of Chores". arXiv:1907.04505 [cs.GT].
 ↑ Ma, Ruixin; Dósa, György; Han, Xin; Ting, Hing-Fung; Ye, Deshi; Zhang, Yong (2013-08-01). "A note on a selfish bin packing problem". Journal of Global Optimization 56 (4): 1457–1462. doi:10.1007/s10898-012-9856-9. ISSN 0925-5001. https://doi.org/10.1007/s10898-012-9856-9.
 ↑ Lodi A., Martello S., Monaci M., Vigo D. (2010) "Two-Dimensional Bin Packing Problems". In V. Th. Paschos (Ed.), Paradigms of Combinatorial Optimization, Wiley/ISTE, pp. 107–129
 ↑ Optimizing Three-Dimensional Bin Packing Through Simulation
 ↑ Benkő A., Dósa G., Tuza Z. (2010) "Bin Packing/Covering with Delivery, Solved with the Evolution of Algorithms," Proceedings 2010 IEEE 5th International Conference on Bio-Inspired Computing: Theories and Applications, BIC-TA 2010, art. no. 5645312, pp. 298–302.
 ↑ Vaccaro, Alessio (2020-11-13). "🧱 4 Steps to Easily Allocate Resources with Python & Bin Packing" (in en). https://towardsdatascience.com/4stepstoeasilyallocateresourceswithpythonbinpacking5933fb8e53a9.
Original source: https://en.wikipedia.org/wiki/Bin_packing_problem