Design effect

In survey methodology, the design effect (generally denoted as [math]\displaystyle{ D_{\mathrm{eff}} }[/math] or [math]\displaystyle{ D_{\mathrm{eft}}^2 }[/math]) is a measure of the expected impact of a sampling design on the variance of an estimator for some parameter. It is calculated as the ratio of the variance of an estimator based on a sample from an (often) complex sampling design, to the variance of an alternative estimator based on a simple random sample (SRS) of the same number of elements.[1]:258 The [math]\displaystyle{ D_{\mathrm{eff}} }[/math] (be it estimated, or known a priori) can be used to adjust the variance of an estimator in cases where the sample is not drawn using simple random sampling. It may also be useful in sample size calculations and for quantifying the representativeness of a sample.

The design effect is a positive real number indicating an inflation ([math]\displaystyle{ D_{\mathrm{eff}}\gt 1 }[/math]) or deflation ([math]\displaystyle{ D_{\mathrm{eff}}\lt 1 }[/math]) in the variance of an estimator for some parameter due to the study not using SRS ([math]\displaystyle{ D_{\mathrm{eff}}=1 }[/math] when the variances are identical).[2]:53,54

Some complex sampling features that can produce a [math]\displaystyle{ D_{\mathrm{eff}} }[/math] different from 1 include: cluster sampling (such as when there is correlation between observations), stratified sampling, cluster randomized controlled trials, disproportional (unequal probability) sampling, non-coverage, non-response, statistical adjustments of the data, etc.

[math]\displaystyle{ D_{\mathrm{eff}} }[/math] can be used in sample size calculations, for quantifying the representativeness of a sample (relative to a target population), as well as for adjusting (often inflating) the variance of some estimator (in cases when we can calculate that estimator's variance assuming SRS).[3]

The term "Design effect" was coined by Leslie Kish in 1965.[1]:88,258 Ever since, many calculations (and estimators) have been proposed, in the literature, for describing the effect of known sampling design on the increase/decrease in the variance of estimators of interest. In general, the design effect varies between statistics of interests, such as the total or ratio mean; it also matters if the design (e.g.: selection probabilities) are correlated with the outcome of interest. And lastly, it is influenced by the distribution of the outcome itself. All of these should be considered when estimating and using design effect in practice.[4]:13

Definitions

[math]\displaystyle{ D_{\mathrm{eff}} }[/math]

The design effect, commonly denoted by [math]\displaystyle{ D_{\mathrm{eff}} }[/math] (at times with different subscripts), is the ratio of two theoretical variances for estimators of some parameter ([math]\displaystyle{ \theta }[/math]):[1][5]

  • In the numerator is the actual variance of an estimator of some parameter ([math]\displaystyle{ \hat \theta_w }[/math]) under a given sampling design [math]\displaystyle{ p }[/math];
  • In the denominator is the variance of the estimator we would use ([math]\displaystyle{ \hat \theta_{srswor} }[/math]) had a sample of the same size been obtained via simple random sampling without replacement.

So that:

[math]\displaystyle{ D_{\mathrm{eff},p}(\hat \theta) = \frac{\mathrm{var}(\hat \theta_w)}{\mathrm{var}(\hat \theta_{\mathrm{srswor}})} }[/math]

Put differently, [math]\displaystyle{ D_{\mathrm{eff}} }[/math] is how much the variance has increased (or, in some cases, decreased) because the sample was drawn and adjusted to a specific sampling design (e.g., using weights or other measures), relative to what it would be if the sample came from simple random sampling (without replacement). There are many ways of calculating [math]\displaystyle{ D_{\mathrm{eff}} }[/math], depending on the parameter of interest (e.g., population total, population mean, quantiles, ratio of quantities, etc.), the estimator used, and the sampling design (e.g., clustered sampling, stratified sampling, post-stratification, multi-stage sampling, etc.).

For estimating the population mean, the [math]\displaystyle{ D_{\mathrm{eff}} }[/math] (for some sampling design p) is:[4]:4[2]:54

[math]\displaystyle{ D_{\mathrm{eff},p} = \frac{\mathrm{var}_p(\bar y_p)}{(1-f)S^2_y / n} }[/math]

Where [math]\displaystyle{ n }[/math] is the sample size, f is the sampling fraction [math]\displaystyle{ (n/N) }[/math], [math]\displaystyle{ 1-f }[/math] is the finite population correction (FPC), and [math]\displaystyle{ S^2_y }[/math] is the unbiased variance of y in the population.

The unit variance (or element variance) is estimated by multiplying the element's variance by [math]\displaystyle{ D_{\mathrm{eff}} }[/math], so as to incorporate all the complexities of the sample design.[1]:259

Notice how the definition of [math]\displaystyle{ D_{\mathrm{eff}} }[/math] is based on parameters of the population that we often do not know (i.e.: the variances of estimators under two different sampling designs). The process of estimating [math]\displaystyle{ D_{\mathrm{eff}} }[/math] for specific designs will be described in the following section.[6]:98

A general formula for the (theoretical) design effect of estimating a total (not the mean), for some design, is given in Cochran 1977.[2]:54

[math]\displaystyle{ D_{\mathrm{eft}} }[/math]

A related quantity to [math]\displaystyle{ D_{\mathrm{eff}} }[/math], proposed by Kish in 1995, is called [math]\displaystyle{ D_{\mathrm{eft}} }[/math] (Design Effect Factor).[7]:56[4] It is defined as the square root of the variance ratio, and its denominator uses a simple random sample with replacement (srswr) instead of without replacement (srswor):

[math]\displaystyle{ D_{\mathrm{eft}} = \sqrt{\frac{\mathrm{var}(\hat \theta_w)}{\mathrm{var}(\hat \theta_{\mathrm{srswr}})}} }[/math]

In this later definition (proposed in 1995, vs 1965) it was argued that the effect of sampling "without replacement" (which reduces the variance) should be captured in the definition of the design effect, since it is part of the sampling design. Working on the square-root scale also relates more directly to inference, since confidence intervals are built from the standard error (i.e., estimate ±Z·DE·SE, not ±Z·DE·VAR). A further motivation is that the finite population correction (FPC) can be hard to compute in some situations. In many cases, when the population is very large, [math]\displaystyle{ D_{\mathrm{eft}} }[/math] is (almost) the square root of [math]\displaystyle{ D_{\mathrm{eff}} }[/math] ([math]\displaystyle{ D_{\mathrm{eft}} \approx \sqrt{D_{\mathrm{eff}}} }[/math]).

The original intention for [math]\displaystyle{ D_{\mathrm{eft}} }[/math] was to have it "express the effects of sample design beyond the elemental variability [math]\displaystyle{ \frac{S^2_m}{m} }[/math], removing both the unit of measurement and sample size as nuisance parameters", in order to make the design effect generalizable to (relevant for) many statistics and variables within the same survey (and even between surveys).[7]:55 However, follow-up work has shown that the calculation of the design effect, for parameters such as a population total or mean, depends on the variability of the outcome measure, which limits Kish's original aspiration for this measure. That said, this statement may loosely (i.e., under some conditions) hold for the weighted mean.[4]:5

Effective sample size

The effective sample size, also defined by Kish in 1965, is the original sample size divided by the design effect.[1]:162,259[8]:190,192 It reflects the sample size that would be needed to achieve the current variance of the estimator (for some parameter) if the sample (and its relevant parameter estimator) were based on simple random sampling.[9]

Namely:

[math]\displaystyle{ n_{\text{eff}} = \frac{n}{D_{eff}} }[/math]

Put differently, it says how many responses we are left with when using an estimator that correctly adjusts for the design effect of the sampling design. For example, using the weighted mean with inverse probability weighting, instead of the simple mean.

It is also possible to get the effective sample size ratio by taking the inverse of [math]\displaystyle{ D_{\mathrm{eff}} }[/math] (i.e.: [math]\displaystyle{ \frac{n_{eff}}{n} = \frac{1}{D_{eff}} }[/math]).

When using Kish's design effect for unequal weights, one may use the following simplified formula for "Kish's Effective Sample Size":[10][1]:162,259

[math]\displaystyle{ n_{\text{eff}} = \frac{n}{D_\text{eff}} = \frac{n}{\frac{\overline{w^2}}{\overline{w}^2}} = \frac{n}{\frac{\frac{1}{n} \sum_{i=1}^n w_i^2}{\left(\frac{1}{n} \sum_{i=1}^n w_i\right)^2}} = \frac{n}{\frac{n \sum_{i=1}^n w_i^2}{(\sum_{i=1}^n w_i)^2}} = \frac{(\sum_{i=1}^n w_i)^2}{\sum_{i=1}^n w_i^2} }[/math]
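
As a rough illustration, this computation can be written in a few lines of Python (a minimal sketch; the function name and example weights are ours, for illustration only):

```python
import numpy as np

def kish_effective_sample_size(weights):
    """Kish's effective sample size: (sum w)^2 / sum(w^2).

    Equals n / D_eff, and is invariant to rescaling the weights."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

print(kish_effective_sample_size([2, 2, 2, 2]))  # 4.0 - equal weights keep n
print(kish_effective_sample_size([1, 1, 1, 5]))  # 2.2857... - unequal weights shrink it
```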

Design effect for well-known sampling designs

Sampling design dictates how design effect should be calculated

Different sampling designs differ substantially in their impact on estimators (such as the mean) in terms of their bias and variance.

For example, in the cluster sampling case the units may have equal or unequal selection probabilities, irrespective of their intra-class correlation (and its negative effect of increasing the variance of our estimators). In the case of stratified sampling, the probabilities may be equal (EPSEM) or unequal. Either way, using prior information on the stratum sizes in the population during the sampling stage can improve the statistical efficiency of our estimators. For example: if we know that gender is correlated with our outcome of interest, and also know that the male-female ratio for some population is 50%-50%, then by sampling exactly half of each gender we reduce the variance of the estimators, because we have removed the variability caused by an unequal proportion of males and females in our sample. Lastly, when adjusting for non-coverage, non-response, or some stratum split of the population (unavailable during the sampling stage), we may use statistical procedures (e.g., post-stratification and others). The result of such procedures may lead to estimates of the sampling probabilities that are similar to, or very different from, the true sampling probabilities of the units. The quality of these estimators depends on the quality of the auxiliary information and the missing-at-random assumptions used in creating them. Even when these sampling probability estimators (propensity scores) manage to capture most of the phenomena that produced them, the impact of the variable selection probabilities on the estimators may be small or large, depending on the data (details in the next section).

Due to the large variety of sampling designs (with or without unequal selection probabilities), different formulas have been developed to capture the potential design effect, as well as to estimate the correct variance of estimators. Sometimes these different design effects can be compounded together (as in the case of unequal selection probability and cluster sampling; more details in the following sections). Whether to use these formulas or just assume SRS depends on the expected amount of bias reduced versus the increase in estimator variance (and on the overhead of methodological and technical complexity).[1]:426

Unequal selection probabilities

Sources for unequal selection probabilities

There are various ways to sample units so that each unit has exactly the same probability of selection. Such methods are called equal probability sampling (EPSEM) methods. Some of the more basic methods include simple random sampling (SRS, either with or without replacement) and systematic sampling for getting a fixed sample size. There is also Bernoulli sampling with a random sample size. More advanced techniques such as stratified sampling and cluster sampling can also be designed to be EPSEM. For example, in one-stage cluster sampling we can sample clusters with equal probability and then measure all the units inside each sampled cluster. A more complex method is two-stage sampling, by which we sample clusters at the first stage with probability proportional to their size, and then sample a fixed number of units from each selected cluster at the second stage (e.g., using SRS within the cluster); each element then ends up with the same overall selection probability.[11]:3–8

In their works, Kish and others highlight several known reasons that lead to unequal selection probabilities:[1]:425[8]:185[7]:69[12]:50,395[13]:306

  1. Disproportional sampling due to selection frame or procedure. This happens when a researcher purposefully designs their sample so as to over/under-sample specific sub-populations or clusters. There are many cases in which this might happen. For example:
    • In stratified sampling when units from some strata are known to have a larger variance than others. In such cases, the researcher may use this prior knowledge about the variance between strata in order to reduce the overall variance of an estimator of some population-level parameter of interest (e.g., the mean). This can be achieved by a strategy known as optimum allocation, in which a stratum [math]\displaystyle{ h }[/math] is over-sampled proportionally to a higher standard deviation and a lower sampling cost (i.e.: [math]\displaystyle{ f_h \propto \frac{S_h}{\sqrt{C_h}} }[/math], where [math]\displaystyle{ S_h }[/math] is the standard deviation of the outcome in [math]\displaystyle{ h }[/math], and [math]\displaystyle{ C_h }[/math] relates to the cost of recruiting one element from [math]\displaystyle{ h }[/math]). An example of optimum allocation is Neyman's optimal allocation, in which, when the cost of recruitment is the same for every stratum, the sample size per stratum is: [math]\displaystyle{ n_h = n\frac{W_h S_{Uh}}{\sum_h W_h S_{Uh}} }[/math]. Where the summation is over all strata; n is the total sample size; [math]\displaystyle{ n_h }[/math] is the sample size for stratum h; [math]\displaystyle{ W_h = \frac{N_h}{N} }[/math] is the relative size of stratum h compared to the entire population N; and [math]\displaystyle{ S_{Uh} }[/math] is the standard deviation of the outcome in stratum h (a worked sketch of this allocation appears after this list). A related concept to optimum design is optimal experimental design.
    • If there is interest in comparing two strata (e.g., people from two specific socio-demographic groups, or from two regions, etc.), the smaller group may be over-sampled. This way, the variance of the estimator that compares the two groups is reduced.
    • In cluster sampling there may be clusters of different sizes but the procedure samples from all clusters using SRS, and all elements in the cluster are measured (for example, if the cluster sizes are not known upfront at the stage of sampling).
    • When using two-stage sampling in which the clusters are sampled at the first stage proportionally to their size (a.k.a. PPS, Probability Proportional to Size), but then at the second stage only a specific fixed number of units (e.g., one or two) is selected from each cluster - this may happen due to convenience/budget considerations. A similar case is when the first stage attempts to sample using PPS, but the recorded number of elements in each unit is inaccurate (so that some smaller clusters have a higher chance than they should of being selected, and vice versa for larger clusters with too small a chance of being sampled). In such cases, the larger the errors in the sampling frame at the first stage, the larger the resulting unequal selection probabilities.[6]:109
    • When the frame used for sampling includes duplication of some of the items, thus leading some items to have a larger probability than others of being sampled (e.g., if the sampling frame was created by merging several lists, or if recruiting users from several ad channels - in which some of the users can be recruited through several of the channels, while others are available through only one channel). In each of these cases, different units have different sampling probabilities, so the procedure is not EPSEM.[11]:3–8[8]:186
    • When several different samples/frames are combined. For example, if running different ad campaigns for recruiting respondents. Or when combining results from several studies done by different researchers and/or at different times (i.e.: Meta-analysis).[8]:188
    When disproportional sampling happens due to sampling design decisions, the researcher may (sometimes) be able to trace back the decision and accurately calculate the exact inclusion probability. When these selection probabilities are hard to trace back, they may be estimated using some propensity score model combined with information from auxiliary variables (e.g., age, gender, etc.).
  2. Non-coverage.[1]:527,528 This happens, for example, if people are sampled based on some pre-defined list that doesn't include all the people in the population (E.g.: a phone book or using ads to recruit people to a survey). These missing units are missing due to some failure of creating the sampling frame, as opposed to deliberate exclusion of some people (E.g.: minors, people who cannot vote, etc.). The effect of non-coverage on sampling probability is considered difficult to measure (and adjust for) in various survey situations, unless strong assumptions are made.
  3. Non-response. This refers to the failure of obtaining measurements on sampled units that were intended to be measured. Reasons for non-response are varied and depend on the context. A person may be temporarily unavailable, for example not available to pick up the phone when the survey is done. A person may also refuse to answer the survey for a variety of reasons, e.g.: different tendencies of people from different ethnic/demographic/socio-economic groups to respond in general; insufficient incentive to spend the time or share data; the identity of the institution that is running the survey; inability to respond (e.g., due to illness, illiteracy, or a language barrier); the respondent is not found (e.g., they have moved apartments); or the response was lost/destroyed during encoding or transmission (i.e., measurement error). In the context of surveys, these reasons may relate to answering the entire survey or just specific questions.[1]:532[8]:186
  4. Statistical adjustments. These may include methods such as post-stratification, raking, or propensity score (estimation) models - used to perform an ad-hoc adjustment of the sample to some known (or estimated) stratum sizes. Such procedures are used to mitigate issues in the sampling, ranging from sampling error and under-coverage of the sampling frame to non-response.[14]:45[15] For example, if a simple random sample is used, post-stratification (using some auxiliary information) does not offer an estimator that is uniformly better than the unweighted estimator; however, it can be viewed as a more "robust" estimator.[16] Alternatively, these methods can be used to make the sample more similar to some target "controls" (i.e., population of interest), a process also known as "standardization".[8]:187 In such cases, these adjustments help provide unbiased estimators (often at the cost of increased variance, as seen in the following sections). If the original sample is a nonprobability sample, then post-stratification adjustments are similar to an ad-hoc quota sampling.[8]:188,189
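
As referenced in item 1 above, here is a minimal sketch of Neyman's optimal allocation (in Python; the function name and example numbers are ours, for illustration only):

```python
import numpy as np

def neyman_allocation(n, N_h, S_h):
    """n_h = n * (W_h * S_h) / sum_h(W_h * S_h), with W_h = N_h / N.

    Assumes the cost of recruiting an element is the same in every stratum."""
    N_h = np.asarray(N_h, dtype=float)
    S_h = np.asarray(S_h, dtype=float)
    W_h = N_h / N_h.sum()   # relative stratum sizes W_h
    share = W_h * S_h       # each stratum's share of the total sample
    return n * share / share.sum()

# Two equally sized strata; the stratum with 3x the standard deviation
# receives 3x the sample:
print(neyman_allocation(n=400, N_h=[5000, 5000], S_h=[1.0, 3.0]))  # [100. 300.]
```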

When the sampling design is fully known (leading to some [math]\displaystyle{ p_h }[/math] probability of selection for an element from stratum h), and the non-response is measurable (i.e., we know that only [math]\displaystyle{ r_h }[/math] of the observations answered in stratum h), then an exactly known inverse probability weight can be calculated for each element i from stratum h using: [math]\displaystyle{ w_i = \frac{1}{p_h r_h} }[/math].[8]:186 Sometimes a statistical adjustment, such as post-stratification or raking, is used for estimating the selection probability, e.g., when comparing the sample we have with some target population (also known as matching to controls). The estimation process may be focused only on adjusting the existing population to an alternative population (for example, if trying to extrapolate from a panel drawn from several regions to an entire country). In such a case, the adjustment might be focused on some calibration factor [math]\displaystyle{ c_i }[/math], and the weights would be calculated as [math]\displaystyle{ w_i = \frac{c_i}{p_h r_h} }[/math].[8]:187 However, in other cases both the under-coverage and the non-response are modeled in one go as part of the statistical adjustment, which leads to an estimate of the overall sampling probability (say, [math]\displaystyle{ p_i' }[/math]). In such a case, the weights are simply: [math]\displaystyle{ w_i = \frac{1}{p_i'} }[/math]. Notice that when statistical adjustments are used, [math]\displaystyle{ w_i }[/math] is often estimated based on some model. The formulation in the following sections assumes this [math]\displaystyle{ w_i }[/math] is known, which is not true for statistical adjustments (since we only have [math]\displaystyle{ \widehat w_i }[/math]). However, if it is assumed that the estimation error of [math]\displaystyle{ \widehat w_i }[/math] is very small, then the following sections can be used as if it were known. Whether this assumption holds depends on the size of the sample used for modeling, and is worth keeping in mind during analysis.
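
As a small illustration of these weight formulas (a Python sketch; the probabilities, response rates, and calibration factors are made-up numbers):

```python
import numpy as np

p_h = np.array([0.10, 0.05])  # known selection probabilities per stratum
r_h = np.array([0.80, 0.50])  # measured response rates per stratum

w_h = 1.0 / (p_h * r_h)       # w_i = 1 / (p_h * r_h) for each respondent
print(w_h)                    # [12.5, 40.0]

c_h = np.array([1.1, 0.9])    # hypothetical calibration factors
print(c_h / (p_h * r_h))      # w_i = c_h / (p_h * r_h): [13.75, 36.0]
```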

When the selection probabilities may be different, the sample size is random, and the pairwise selection probabilities are independent, we call this Poisson sampling.[17]

"Design based" vs "model based" for describing properties of estimators

When adjusting for unequal probability selection through "individual case weights" (e.g., inverse probability weighting), we get various types of estimators for quantities of interest. Estimators such as the Horvitz–Thompson estimator yield unbiased estimators (if the selection probabilities are indeed known, or approximately known) for the total and the mean of the population. Deville and Särndal (1992) coined the term "calibration estimator" for estimators using weights that satisfy some condition, such as having the sum of weights equal the population size - or, more generally, that the weighted sum of an auxiliary variable equals a known total: [math]\displaystyle{ \sum w_ix_i = X }[/math] (e.g., that the weighted count of respondents in each age bucket equals the known population size of that bucket).[18][15]:132[19]:1

The two primary ways to argue about the properties of calibration estimators are:[15]:133–134[20]

  1. randomization based (or, sampling design based) - in these cases, the weights ([math]\displaystyle{ w_i }[/math]) and the values of the outcome of interest [math]\displaystyle{ y_i }[/math] that are measured in the sample are all treated as known. In this framework, there is variability in the (known) values of the outcome (Y), but the only randomness comes from which of the elements in the population were picked into the sample (often denoted as [math]\displaystyle{ I_i }[/math], equal to 1 if element [math]\displaystyle{ i }[/math] is in the sample and 0 if it is not). For a simple random sample, each [math]\displaystyle{ I_i }[/math] is an i.i.d. Bernoulli random variable with some parameter [math]\displaystyle{ p }[/math]. For general EPSEM (equal probability sampling), [math]\displaystyle{ I_i }[/math] is still Bernoulli with some parameter [math]\displaystyle{ p }[/math], but the indicators are no longer independent random variables. For something like post-stratification, the number of elements in each stratum can be modeled as a multinomial distribution, with different [math]\displaystyle{ p_h }[/math] inclusion probabilities for each element belonging to some stratum [math]\displaystyle{ h }[/math]. In these cases the sample size itself can be a random variable.
  2. model based - in these cases the sample is fixed, the weights are fixed, but the outcome of interest is treated as a random variable. For example, in the case of post-stratification, the outcome can be modeled as some linear regression function where the independent variables are indicator variables mapping each observation to its relevant strata, and the variability comes with the error term.

As we will see later, some proofs in the literature rely on the randomization-based framework, while others focus on the model-based perspective. Moving from the mean to the weighted mean adds more complexity. For example, in the context of survey methodology the population size itself is often an unknown quantity that is estimated. So the weighted mean is in fact based on a ratio estimator, with an estimator of the total in the numerator and an estimator of the population size in the denominator (making the variance calculation more complex).[21]

Common types of weights

There are many types (and subtypes) of weights, with different ways to use and interpret them. With some weights the absolute value carries important meaning, while with others the important part is the values of the weights relative to each other. This section presents some of the more common types of weights so that they can be referenced in follow-up sections.

  • Frequency weights are a basic type of weighting, presented in introductory statistics courses. With these, each weight is an integer indicating the absolute frequency of an item in the sample. These are also sometimes termed repeat (or occurrence) weights. The specific value has an absolute meaning that is lost if the weights are transformed (e.g., scaled). For example: if we have the numbers 10 and 20 with frequency weights of 2 and 3, then "spreading" our data gives: 10, 10, 20, 20, 20 (with a weight of 1 for each of these items). Frequency weights encode the amount of information contained in a dataset, and thus allow things like unbiased weighted variance estimation using Bessel's correction. Notice that such weights are often random variables, since the specific number of items we will see from each value in the dataset is random.
  • Inverse-variance weighting is when each element is assigned a weight that is the inverse of its (known) variance.[22][8]:187 When all elements have the same expectation, the weighted average using such weights has the least variance among all weighted averages. In the common formulation, these weights are known and not random (this seems related to reliability weights[definition needed]).
  • Normalized (convex) weights is a set of weights that form a convex combination. I.e.: each weight is a number between 0 and 1, and the sum of all weights is equal to 1. Any set of (non negative) weights can be turned into normalized weights by dividing each weight with the sum of all weights, making these weights normalized to sum to 1.
A related form are weights normalized to sum to the sample size (n). These (non-negative) weights sum to the sample size (n), and their mean is 1. Any set of weights can be normalized to the sample size by dividing each weight by the average of all weights. These weights have a convenient relative interpretation: elements with weight larger than 1 are more "important" (in terms of their relative influence on, say, the weighted mean) than the average observation, while weights smaller than 1 are less "important" than the average observation.
  • Inverse probability weighting is when each element is given a weight that is (proportional to) the inverse of its probability of selection, e.g., [math]\displaystyle{ w_i = \frac{1}{p_i} }[/math].[8]:185 With inverse probability weights, we learn how many items each element "represents" in the target population. Hence, the sum of such weights returns the size of the target population of interest. Inverse probability weights can be normalized to sum to 1 or to the sample size (n), and many of the calculations in the following sections will yield the same results.
When a sample is EPSEM then all the probabilities are equal and the inverse of the selection probability yield weights that are all equal to one another (they are all equal to [math]\displaystyle{ \frac{N}{n}= \frac{1}{f} }[/math], where [math]\displaystyle{ n }[/math] is the sample size and [math]\displaystyle{ N }[/math] is the population size). Such a sample is called a self weighting sample.[8]:193
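
A short sketch of these normalizations (in Python; the weights and outcomes are arbitrary illustrations). Note that the weighted mean is unchanged by either rescaling:

```python
import numpy as np

w = np.array([12.5, 12.5, 40.0, 40.0])  # e.g., inverse probability weights
y = np.array([1.0, 2.0, 3.0, 4.0])

w_convex = w / w.sum()   # normalized (convex) weights: sum to 1
w_unit = w / w.mean()    # normalized to sample size: sum to n, mean 1

print(w_convex.sum(), w_unit.sum())      # 1.0 4.0
print(np.average(y, weights=w),
      np.average(y, weights=w_convex),
      np.average(y, weights=w_unit))     # all three are identical
```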

There are also indirect ways of applying "weighted" adjustments. For example, existing cases may be duplicated to impute missing observations (e.g., from non-response), with the variance estimated using methods such as multiple imputation. A complementary approach is to remove (give a weight of 0 to) some cases, for example when wanting to reduce the influence of over-sampled groups that are less essential for some analysis. Both cases are similar in nature to inverse probability weighting, but the application in practice gives more/fewer rows of data (making the input potentially simpler to use in some software implementations), instead of applying an extra column of weights. Nevertheless, the consequences of such implementations are similar to just using weights. So while in the case of removing observations the data can easily be handled by common software implementations, the case of adding rows requires special adjustments to the uncertainty estimations. Not doing so may lead to erroneous conclusions (i.e., there is no free lunch when using an alternative representation of the underlying issues).[8]:189,190

The term "Haphazard weights", coined by Kish, is used to refer to weights that correspond to unequal selection probabilities, but ones that are not related to the expectancy or variance of the selected elements.[8]:190,191

Haphazard weights with estimated ratio-mean ([math]\displaystyle{ \hat{\bar{Y}} }[/math]) - Kish's design effect

Formula

When taking an unrestricted sample of [math]\displaystyle{ n }[/math] elements, we can then randomly split these elements into [math]\displaystyle{ H }[/math] disjoint strata, each containing [math]\displaystyle{ n_h }[/math] elements, so that [math]\displaystyle{ \sum\limits_{h=1}^H n_h = n }[/math]. All elements in each stratum [math]\displaystyle{ h }[/math] have some (known) non-negative weight assigned to them ([math]\displaystyle{ w_h }[/math]). The weight [math]\displaystyle{ w_h }[/math] can be produced by the inverse of some unequal selection probability for elements in each stratum [math]\displaystyle{ h }[/math] (i.e., inverse probability weighting following something like post-stratification). In this setting, Kish's design effect, for the increase in the variance of the sample weighted mean due to this design (reflected in the weights), versus SRS of some outcome variable y (when there is no correlation between the weights and the outcome, i.e., haphazard weights), is:[1]:427[8]:191(4.2)

[math]\displaystyle{ D_{eff} = \frac{ n \sum\limits_{h=1}^H (n_h w_h^2) } { (\sum\limits_{h=1}^H n_h w_h)^2 } }[/math]

By treating each item as coming from its own stratum ([math]\displaystyle{ \forall h: n_h=1 }[/math]), Kish (in 1992) simplified the above formula to the following (well-known) version:[8]:191(4.3)[23]:318[4]:8

[math]\displaystyle{ D_{eff} = \frac{n \sum_{i=1}^n w_i^2}{(\sum_{i=1}^n w_i)^2} = \frac{\frac{1}{n} \sum_{i=1}^n w_i^2}{\left(\frac{1}{n} \sum_{i=1}^n w_i\right)^2} = \frac{\overline{w^2}}{\overline{w}^2} }[/math]

This version of the formula is valid when one stratum has several observations taken from it (i.e., each having the same weight), or when there are simply many strata, each with one observation, but several of them having the same probability of selection. While the interpretation is slightly different, the calculation of the two scenarios comes out the same.
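
The two formulas can be checked against each other numerically. A minimal sketch (in Python; function names and example values are ours) showing that the stratified form and the simplified per-element form agree once the stratum weights are expanded into one weight per observation:

```python
import numpy as np

def deff_kish_strat(n_h, w_h):
    """Kish's Deff from stratum sizes n_h and per-stratum weights w_h."""
    n_h, w_h = np.asarray(n_h, dtype=float), np.asarray(w_h, dtype=float)
    n = n_h.sum()
    return n * (n_h * w_h**2).sum() / (n_h * w_h).sum() ** 2

def deff_kish(w):
    """Simplified form: one weight per observation."""
    w = np.asarray(w, dtype=float)
    return len(w) * (w**2).sum() / w.sum() ** 2

# Two strata, with 2 and 3 observations and weights 1 and 3:
print(deff_kish_strat([2, 3], [1.0, 3.0]))       # 1.1983...
print(deff_kish(np.repeat([1.0, 3.0], [2, 3])))  # same value
```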

Notice that Kish's definition of the design effect is closely tied to the squared coefficient of variation (also termed the relative variance, relvariance, or relvar for short) of the weights (when using the uncorrected (population-level) standard deviation for estimation). This has several notations in the literature:[8]:191[12]:396

[math]\displaystyle{ D_{eff} = 1 + L = 1 + {C_V}^2 = 1 + relvar(w) = 1 + \frac{V(w)}{{\bar w}^2} }[/math].

Where [math]\displaystyle{ V(w) = \frac{\sum(w_i - \bar w)^2}{n} }[/math] is the population variance of [math]\displaystyle{ w }[/math], and [math]\displaystyle{ \bar w = \frac{\sum w_i}{n} }[/math] is the mean. When the weights are normalized to sample size (so that their sum is equal to n and their mean is equal to 1), then [math]\displaystyle{ {C_V}^2 = V(w) }[/math] and the formula reduces to [math]\displaystyle{ D_{eff} = 1 + V(w) }[/math]. While it is true we assume the weights are fixed, we can think of their variance as the variance of an empirical distribution defined by sampling (with equal probability) one weight from our set of weights (similar to how we would think about the correlation of x and y in a simple linear regression).

[Proof]

[math]\displaystyle{ {C_V}^2 = \left({\frac{s_w}{\bar w}}\right)^2 = \frac{ \frac{\sum_{i=1}^n (w_i - \bar w)^2}{n} } {\bar w ^2} = \frac{ \frac{\sum_{i=1}^n {w_i}^2 - n \bar w ^2}{n} } {\bar w ^2} = \frac{ \overline{w^2} - \bar w ^2 } {\bar w ^2} = \frac{ \overline{w^2} } {\bar w ^2} - 1 = D_{eff} - 1 \implies D_{eff} = 1 + {C_V}^2 }[/math]

Assumptions and proofs

The above formula gives the increase in the variance of the weighted mean based on "haphazard" weights; here y are observations selected using unequal selection probabilities (with no within-cluster correlation, and no relationship to the expectation or variance of the outcome measurement),[8]:190,191 and y' are the observations we would have had if we had obtained them from a simple random sample. Then:

[math]\displaystyle{ D_{eff (kish)} =\frac{var\left(\bar{y}_w\right)}{var\left(\bar{y}'\right)} = \frac{var\left(\frac{ \sum\limits_{i=1}^n w_i y_i}{\sum\limits_{i=1}^n w_i} \right)}{ var\left( \frac{\sum\limits_{i=1}^n y_i'}{n} \right)} }[/math]

From a model-based perspective,[24] this formula holds when all n observations ([math]\displaystyle{ y_1, ..., y_n }[/math]) are (at least approximately) uncorrelated ([math]\displaystyle{ \forall (i \neq j): cor(y_i, y_j) = 0 }[/math]) and have the same variance ([math]\displaystyle{ \sigma^2 }[/math]) in the response variable of interest (y). It also assumes the weights themselves are not random variables but rather known constants (e.g., the inverse of the probability of selection, for some pre-determined and known sampling design).

[Proof]

The following is a simplified proof for when there are no clusters (i.e., no intraclass correlation between elements of the sample) and each stratum includes only one observation:[24]

[math]\displaystyle{ \begin{align} var\left(\bar{y}_w\right) & \stackrel{1}{=} var\left(\frac{ \sum\limits_{i=1}^n w_i y_i}{\sum\limits_{i=1}^n w_i} \right) \stackrel{2}{=} var\left( \sum\limits_{i=1}^n w_i' y_i \right) \stackrel{3}{=} \sum\limits_{i=1}^n var\left( w_i' y_i \right) \\ & \stackrel{4}{=} \sum\limits_{i=1}^n w_i'^2 var\left( y_i \right) \stackrel{5}{=} \sum\limits_{i=1}^n w_i'^2 \sigma^2 \stackrel{6}{=} \sigma^2 \sum\limits_{i=1}^n w_i'^2 \stackrel{7}{=} \sigma^2 \frac{\sum\limits_{i=1}^n w_i^2}{\left( \sum\limits_{i=1}^n w_i\right) ^2} \\ & \stackrel{8}{=} \sigma^2 \frac{\sum\limits_{i=1}^n w_i^2}{\left( \sum\limits_{i=1}^n w_i \frac{n}{n} \right) ^2 } \stackrel{9}{=} \sigma^2 \frac{\sum\limits_{i=1}^n w_i^2}{\left( \frac{\sum\limits_{i=1}^n w_i}{n} \right) ^2 n^2} \stackrel{10}{=} \frac{\sigma^2}{n} \frac{\frac{\sum\limits_{i=1}^n w_i^2}{n}}{ \left( \frac{\sum\limits_{i=1}^n w_i}{n} \right) ^2 } \stackrel{11}{=} \frac{\sigma^2}{n} \frac{\overline{w^2}}{ \bar{w}^2 } \stackrel{12}{=} var\left(\bar{y}'\right) D_{eff} \\ & \implies D_{eff (kish)} =\frac{var\left(\bar{y}_w\right)}{var\left(\bar{y}'\right)} \\ \end{align} }[/math]

Transitions:

  1. from definition of the weighted mean.
  2. using normalized (convex) weights definition (weights that sum to 1): [math]\displaystyle{ w_i' = \frac{w_i}{\sum\limits_{i=1}^n w_i} }[/math].
  3. sum of uncorrelated random variables.
  4. If the weights are constants (from the basic properties of the variance). Another way to say it is that the weights are known upfront for each observation i. Namely that we are actually calculating [math]\displaystyle{ var\left(\bar{y}_w | w \right) }[/math]
  5. when all observations have the same variance ([math]\displaystyle{ \sigma^2 }[/math]).

The conditions on y trivially hold if the y observations are i.i.d. with the same expectation and variance. In such a case we have [math]\displaystyle{ y=y' }[/math], and we can estimate [math]\displaystyle{ var\left(\bar{y}_w\right) }[/math] by using [math]\displaystyle{ \overline{var\left(\bar{y}_w\right)} = \overline{var\left(\bar{y}\right)} \times D_{eff} }[/math].[8][25] If the y's do not all have the same expectation, then we cannot use the estimated variance for the calculation, since that estimation assumes that all [math]\displaystyle{ y_i }[/math]s have the same expectation. Specifically, if there is a correlation between the weights and the outcome variable y, then the expectation of y is not the same for all observations (but rather depends on the specific weight value of each observation). In such a case, while the design effect formula might still be correct (if the other conditions are met), it would require a different estimator for the variance of the weighted mean. For example, it might be better to use a weighted variance estimator.

If different [math]\displaystyle{ y_i }[/math]s have different variances, then while the weighted variance could capture the correct population-level variance, Kish's formula for the design effect may no longer be true.

A similar issue happens if there is some correlation structure in the samples (such as when using cluster sampling).
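
Under the model-based assumptions above (fixed weights; uncorrelated outcomes with equal expectation and variance), the claimed ratio of variances can be verified by simulation. A minimal sketch (in Python; all numbers are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, reps = 50, 2.0, 100_000
w = rng.uniform(0.5, 3.0, size=n)            # fixed "haphazard" weights
deff_kish = n * (w**2).sum() / w.sum() ** 2  # Kish's Deff for these weights

y = rng.normal(0.0, sigma, size=(reps, n))   # i.i.d. outcomes, reps samples
weighted_means = y @ w / w.sum()             # one weighted mean per sample
simple_means = y.mean(axis=1)                # one simple mean per sample

# The empirical variance ratio approximates Kish's design effect:
print(weighted_means.var() / simple_means.var(), deff_kish)
```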

Alternative definitions in the literature

It is worth noting that some sources in the literature give the following alternative definition to Kish's design effect, stating it is: "the ratio of the variance of the weighted survey mean under disproportionate stratified sampling to the variance under proportionate stratified sampling when all stratum unit variances are equal".[23]:318[12]:396

This definition can be slightly misleading, since it might be interpreted to mean that "proportionate stratified sampling" was achieved via stratified sampling, in which a pre-determined number of units is selected from each stratum. Such a selection will yield a reduced variance (as compared with a simple random sample), since it removes some of the uncertainty in the specific number of elements per stratum. This is different from Kish's original definition, which compared the variance of the design to a simple random sample (which would yield approximately, but not exactly, allocation proportional to stratum size - due to the variance in sample sizes in each stratum). Park and Lee (2006) reflect on this by stating that "The rationale behind the above derivation is that the loss in precision of [the weighted mean] due to haphazard unequal weighting can be approximated by the ratio of the variance under disproportionate stratified sampling to that under the proportionate stratified sampling".[4]:8 How far these two definitions differ from each other is not mentioned in the literature.[citation needed] In his book from 1977, Cochran provides a formula for the proportional increase in variance due to deviation from optimum allocation (what, in Kish's formulas, would be called L).[2]:116 However, the connection between that formula and Kish's L is not apparent.[citation needed]

Alternative naming conventions

Earlier papers would use the term [math]\displaystyle{ Deff }[/math].[8]:192 As more definitions of design effect appeared, Kish's design effect for unequal selection probabilities was denoted [math]\displaystyle{ Deff_{kish} }[/math] (or [math]\displaystyle{ Deft_{kish}^2 }[/math]) or simply [math]\displaystyle{ deff_{K} }[/math] for short.[4]:8[12]:396[23]:318 Kish's design effect is also known as the "Unequal Weighting Effect" (or just UWE), termed by Liu et al. in 2002.[26]:2124

When the outcome correlates with the selection probabilities

Spencer's Deff for estimated total ([math]\displaystyle{ \hat Y }[/math])

The estimator for the total is the "p-expanded with replacement" estimator (a.k.a. the pwr-estimator, due to Hansen and Hurwitz). It is based on a simple random sample with replacement (denoted SIR) of m items ([math]\displaystyle{ y_k }[/math]) from a population of size N. Each item has probability [math]\displaystyle{ p_k }[/math] (k from 1 to N) of being drawn in a single draw ([math]\displaystyle{ \sum_U p_k = 1 }[/math], i.e., a multinomial distribution). The "p-expanded with replacement" value for a draw that selected item k is [math]\displaystyle{ Z_i = \frac{y_k}{p_k} }[/math], with expectation [math]\displaystyle{ E[Z_i] = \sum_{k=1}^N p_k \frac{y_k}{p_k} = \sum_{k=1}^N y_k = Y }[/math]. Hence [math]\displaystyle{ \hat Y_{pwr} = \frac{1}{m} \sum_i^m Z_i }[/math], the pwr-estimator, is an unbiased estimator of the total of y.[2]:51

In 2000, Bruce D. Spencer proposed a formula for estimating the design effect for the variance of estimating the total (not the mean) of some quantity ([math]\displaystyle{ \hat Y }[/math]), when there is correlation between the selection probabilities of the elements and the outcome variable of interest.[27]

In this setup, a sample of size n is drawn (with replacement) from a population of size N. Each item is drawn with probability [math]\displaystyle{ P_i }[/math] (where [math]\displaystyle{ \sum_{i=1}^N P_i = 1 }[/math], i.e., a multinomial distribution). The selection probabilities are used to define the weights: [math]\displaystyle{ w_i = \frac{1}{nP_i} }[/math]. Notice that for a random set of n items, the sum of the weights equals its expectation only on average, with some variability of the sum around it (i.e., the sum of elements from a Poisson binomial distribution). The relationship between [math]\displaystyle{ y_i }[/math] and [math]\displaystyle{ P_i }[/math] is defined by the following (population) simple linear regression:

[math]\displaystyle{ y_i = \alpha + \beta P_i + \epsilon_i }[/math]

Where [math]\displaystyle{ y_i }[/math] is the outcome of element i, which linearly depends on [math]\displaystyle{ P_i }[/math] with the intercept [math]\displaystyle{ \alpha }[/math] and slope [math]\displaystyle{ \beta }[/math]. The residual from the fitted line is [math]\displaystyle{ \epsilon_i = y_i - (\alpha + \beta P_i) }[/math]. We can also define the population variances of the outcome and the residuals as [math]\displaystyle{ \sigma^2_y }[/math] and [math]\displaystyle{ \sigma^2_\epsilon }[/math]. The correlation between [math]\displaystyle{ P_i }[/math] and [math]\displaystyle{ y_i }[/math] is [math]\displaystyle{ \rho_{y,P} }[/math].

Spencer's (approximate) design effect, for estimating the total of y, is:[27]:138[28]:4[12]:401

[math]\displaystyle{ Deff_{Spencer} = (1- \hat \rho^2_{y,P})(1 + L) + \left(\frac{\hat \alpha}{\hat \sigma_y}\right)^2 L }[/math]

Where:

  • [math]\displaystyle{ \hat \rho^2_{y,P} }[/math] estimates [math]\displaystyle{ \rho^2_{y,P} }[/math]
  • [math]\displaystyle{ \hat \alpha }[/math] estimates the intercept [math]\displaystyle{ \alpha }[/math]
  • [math]\displaystyle{ \hat \sigma_y }[/math] estimates the population standard deviation [math]\displaystyle{ \sigma_y }[/math], and
  • L is the relative variance of the weights, as defined in Kish's formula: [math]\displaystyle{ L = cv_w^2 = relvar(w) = \frac{V(w)}{{\bar w}^2} }[/math].

This assumes that the regression model fits well, so that the probability of selection and the residuals are independent; this leads the residuals, and the squared residuals, to be uncorrelated with the weights, i.e.: [math]\displaystyle{ \rho_{\epsilon,W} = 0 }[/math] and also [math]\displaystyle{ \rho_{\epsilon^2,W} = 0 }[/math].[27]:138
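
The ingredients of this formula can be estimated directly from the sampled (y, P) pairs. A minimal sketch (in Python; the function name and the use of population-style moment estimates are our choices, not Spencer's):

```python
import numpy as np

def deff_spencer(y, P, n):
    """Spencer's approximate Deff for the estimated total.

    y: sampled outcomes; P: their single-draw selection probabilities
    (summing to 1 over the population); n: the sample size, which sets
    the weights w_i = 1 / (n * P_i)."""
    y, P = np.asarray(y, dtype=float), np.asarray(P, dtype=float)
    w = 1.0 / (n * P)
    L = w.var() / w.mean() ** 2                     # relvariance of weights
    rho = np.corrcoef(y, P)[0, 1]                   # corr(y, P)
    beta = np.cov(y, P, bias=True)[0, 1] / P.var()  # regression slope
    alpha = y.mean() - beta * P.mean()              # regression intercept
    return (1 - rho**2) * (1 + L) + (alpha / y.std()) ** 2 * L
```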

When the population size (N) is very large, the formula can be written as:[23]:319

[math]\displaystyle{ Deff_{Spencer} = (1 - \hat \rho^2_{y,P})(1 + cv_w^2) + \frac{cv_w^2}{cv_Y^2} }[/math]

(since [math]\displaystyle{ \alpha = \bar Y - \beta \times \bar P = \bar Y - \beta \times \frac{1}{N} \approx \bar Y }[/math], where [math]\displaystyle{ cv_Y^2 = \frac{\sigma^2_Y}{\bar Y^2} }[/math] is the squared coefficient of variation of y)

This approximation assumes that the linear relationship between P and y holds, and also that the correlations of the weights with the errors, and with the squared errors, are both zero, i.e.: [math]\displaystyle{ \rho_{w,e} = 0 }[/math] and [math]\displaystyle{ \rho_{w,e^2} = 0 }[/math].[28]:4

We notice that if [math]\displaystyle{ \hat \rho_{y,P} \approx 0 }[/math], then [math]\displaystyle{ \hat \alpha \approx \bar y }[/math] (i.e.: the average of y). In such a case, the formula reduces to

[math]\displaystyle{ Deff_{Spencer} = (1 + L) + \frac{L}{relvar(y)} }[/math]

Only if the variance of y is much larger than its squared mean will the right-most term be close to 0 (i.e., [math]\displaystyle{ \frac{1}{relvar(y)} \approx 0 }[/math], with [math]\displaystyle{ relvar(y) = \frac{\sigma^2_y}{\bar Y^2} }[/math]), which reduces Spencer's design effect (for the estimated total) to Kish's design effect (for the ratio mean):[28]:5 [math]\displaystyle{ Deff_{Spencer} \approx (1 + L) = Deff_{Kish} }[/math]. Otherwise, the two formulas will yield different results, which demonstrates the difference between the design effect of the total and that of the mean.

Park and Lee's Deff for estimated ratio-mean ([math]\displaystyle{ \hat{\bar{Y}} }[/math])

In 2001, Park and Lee extended Spencer's formula to the case of the ratio-mean (i.e., estimating the mean by dividing the estimator of the total by the estimator of the population size). It is:[28]:4

[math]\displaystyle{ Deff_{Park\&Lee} = (1 - \hat \rho^2_{y,P})(1 + cv_w^2) + \frac{\hat \rho_{y,P}^2}{cv_P^2} cv_w^2 }[/math]

Where:

  • [math]\displaystyle{ cv_P^2 }[/math] is the (estimated) coefficient of variation of the probabilities of selection.

Park and Lee's formula is exactly equal to Kish's formula when [math]\displaystyle{ \hat \rho_{y,P}^2 = 0 }[/math]. Both formulas relate to the design effect of the mean of y (while Spencer's Deff relates to the estimation of the total).

In general, the Deff for the total ([math]\displaystyle{ \hat{Y} }[/math]) tends to be less efficient than the Deff for the ratio mean ([math]\displaystyle{ \hat{\bar{Y}} }[/math]) when [math]\displaystyle{ \rho_{y,P} }[/math] is small. And in general, [math]\displaystyle{ \rho_{y,P} }[/math] impacts the efficiency of both design effects.[4]:8
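
A minimal sketch of this formula (in Python; the function name and example values are ours). With zero correlation it recovers Kish's 1 + L, and a non-zero correlation moves it away from that value:

```python
def deff_park_lee(rho2_yP, cv2_w, cv2_P):
    """Park and Lee's Deff for the ratio-mean, from the squared correlation
    of y with the selection probabilities and the two squared CVs."""
    return (1 - rho2_yP) * (1 + cv2_w) + (rho2_yP / cv2_P) * cv2_w

print(deff_park_lee(rho2_yP=0.00, cv2_w=0.2, cv2_P=0.5))  # 1.2 (= 1 + L)
print(deff_park_lee(rho2_yP=0.25, cv2_w=0.2, cv2_P=0.5))  # 1.0
```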

Cluster sampling

For data collected using cluster sampling we assume the following structure:

  • [math]\displaystyle{ n_k }[/math] observations in each cluster and K clusters, and with a total of [math]\displaystyle{ n = \sum n_k }[/math] observations.
  • The observations have a block correlation matrix in which every pair of observations from the same cluster is correlated with an intra-class correlation of [math]\displaystyle{ \rho }[/math], while every pair from different clusters is uncorrelated.[29] I.e., for every pair of observations [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math], if they belong to the same cluster [math]\displaystyle{ k }[/math], we get [math]\displaystyle{ cov(y_i, y_j) = \rho \sigma^2 }[/math]; and two items from two different clusters are not correlated, i.e.: [math]\displaystyle{ cov(y_i, y_j) = 0 }[/math].
  • An element from any cluster is assumed to have the same variance: [math]\displaystyle{ var(y_i) = \sigma^2 }[/math].

When clusters are all of the same size [math]\displaystyle{ n^* }[/math], the design effect Deff, proposed by Kish in 1965 (and later re-visited by others), is given by:[1]:162[12]:399[4]:9[30][31][13]:241

[math]\displaystyle{ D_\text{eff} = 1 + (n^* - 1) \rho . }[/math]

It is sometimes also denoted as [math]\displaystyle{ Deff_C }[/math].[26]:2124

In various papers, when cluster sizes are not equal, the above formula is also used with [math]\displaystyle{ n^* }[/math] as the average cluster size (also sometimes denoted [math]\displaystyle{ \bar b }[/math]).[32][24]:105 In such cases, Kish's formula (using the average cluster size) serves as a conservative estimate (an upper bound) of the exact design effect.[24]:106

Alternative formulas exist for unequal cluster sizes.[1]:193 Follow-up work has discussed the sensitivity of using the average cluster size under various assumptions.[33]

Unequal selection probabilities [math]\displaystyle{ \times }[/math] Cluster sampling

In his paper from 1987, Kish proposed a combined design effect that incorporates both the effects due to weighting that accounts for unequal selection probabilities as well as cluster sampling:[32][24]:105[34]:4[28]:2

[math]\displaystyle{ Deff_{Kish} = \frac{ n \sum\limits_{h=1}^H (n_h w_h^2) } { (\sum\limits_{h=1}^H n_h w_h)^2 } \left( 1 + (n^* - 1) \rho \right) = deff_k \times deff_C }[/math]

With notations similar to above.

This formula received a model based justification, proposed in 1999 by Gabler et al.[24]
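
A minimal sketch of the combined effect (in Python; the function name and the example's weights, cluster size, and correlation are ours):

```python
import numpy as np

def deff_combined(w, n_star, rho):
    """Kish's combined Deff: the unequal-weighting factor deff_k times
    the clustering factor deff_C = 1 + (n* - 1) * rho."""
    w = np.asarray(w, dtype=float)
    deff_k = len(w) * (w**2).sum() / w.sum() ** 2
    deff_c = 1 + (n_star - 1) * rho
    return deff_k * deff_c

w = np.repeat([1.0, 3.0], [20, 30])  # haphazard weights (deff_k ~ 1.198)
print(deff_combined(w, n_star=10, rho=0.05))  # ~ 1.198 * 1.45 ~ 1.737
```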

Stratified sampling [math]\displaystyle{ \times }[/math] unequal selection probabilities [math]\displaystyle{ \times }[/math] Cluster sampling

In 2000, Liu and Aragon proposed a decomposition of the unequal selection probabilities design effect for different strata in stratified sampling.[35] In 2002, Liu et al. extended that work to account for stratified samples where each stratum has its own set of unequal selection probability weights. The cluster sampling is either global or per stratum.[26] Similar work was done by Park et al. in 2003.[36]

Uses

Deff is primarily used for several purposes:[13]:85

  • When developing the design - to evaluate its efficiency. I.e.: if there is potentially "too much" increase in variance due to some decision, or if the new design is more efficient (e.g.: as in stratified sampling).
  • As a way for guiding sample size (overall, per stratum, per cluster, etc.), and also
  • When evaluating potential problems with a post-hoc weighting analysis (e.g., from non-response adjustments).[6] There is no universal rule-of-thumb for which design effect value is "too high", but the literature indicates that [math]\displaystyle{ Deff \gt 1.5 }[/math] is likely to warrant attention.[12]:396

In his 1995 paper, Kish proposed the following categorization of when Deff is, and is not, useful:[7]:57–62

  • Design effect is unnecessary when: the source population is close to i.i.d., or when the sample was drawn as a simple random sample. It is also less useful when the sample size is relatively small (at least partially, for practical reasons), and if only descriptive statistics are of interest (i.e., point estimation). It is also suggested that if standard errors are needed for only a handful of statistics, it may be acceptable to ignore Deff.
  • Design effect is necessary when: averaging sampling errors for different variables measured on the same survey; when averaging the same measured quantity from several surveys over a period of time; or when extrapolating from the error of simple statistics (e.g., the mean) to more complex ones (e.g., regression coefficients). It is also needed when designing a future survey (but with proper caution), and as an aiding statistic to identify glaring issues with the data or its analysis (ranging from mistakes to the presence of outliers).[8]:191

When planning the sample size, work has been done to correct the design effect so as to separate the interviewer effect (measurement error) from the effects of the sampling design on the sampling variance.[37]

While Kish originally hoped the design effect could be as agnostic as possible to the underlying distribution of the data, the sampling probabilities, their correlations, and the statistics of interest, follow-up research has shown that these do influence the design effect. Hence, careful attention should be given to these properties when deciding which Deff calculation to use, and how to use it.[4]:13[28]:6

Software implementations

Kish's design effect is implemented in various statistical software packages. For example, the R package survey can report design effects for its estimates (e.g., via its deff argument), Stata reports DEFF and DEFT through estat effects after svy estimation, and the Python package balance computes Kish's design effect for weighted data.[38]

History

The term "Design effect" was introduced by Leslie Kish in 1965 in his book "Survey Sampling".[1]:88,258 In his paper from 1995,[7]:73 Kish mentions that a similar concept, termed "Lexis ratio", was described at the end of the 19th century. The closely related Intraclass correlation was described by Fisher in 1950, while computations of ratios of variances were already published by Kish and others from the late 40s to the 50s. One of the precursors for Kish's definition was the work done by Cornfield in 1951.[39][4]

In his original book from 1965, Kish proposed the general definition of the design effect (the ratio of the variances of two estimators, one from a sample with some design and the other from a simple random sample). There, Kish also proposed the formula for the design effect of cluster sampling (with intraclass correlation),[1]:162 as well as the famous design effect formula for unequal probability sampling.[1]:427 These are often known as "Kish's design effect", and were later merged into a single formula.

References

  1. Kish, Leslie (1965). Survey Sampling. New York: John Wiley & Sons, Inc. ISBN 0-471-10949-5.
  2. Carl-Erik Sarndal; Bengt Swensson; Jan Wretman (1992). Model Assisted Survey Sampling. Springer. ISBN 978-0-387-97528-3.
  3. Heo, Moonseong; Kim, Yongman; Xue, Xiaonan; Kim, Mimi Y. (2010). "Sample size requirement to detect an intervention effect at the end of follow-up in a longitudinal cluster randomized trial". Statistics in Medicine 29 (3): 382–390. doi:10.1002/sim.3806. PMID 20014353. http://www3.interscience.wiley.com/journal/123212319/abstract.
  4. Park, Inho, and Hyunshik Lee. "Design effects for the weighted mean and total estimators under complex survey sampling." Quality control and applied statistics 51.4 (2006): 381–384 (based on google scholar). Vol. 30, No. 2, pp. 183-193. Statistics Canada, Catalogue No. 12-001. Survey Methodology December 2004 (based on the PDF) (pdf)
  5. Everitt, B.S. (2002) The Cambridge Dictionary of Statistics, 2nd Edition. CUP. ISBN:0-521-81099-X
  6. Kalton, G., J. M. Brick, and T. Le. "Estimating components of design effects for use in sample design. In household sample surveys in developing and transition countries, (Sales No. E. 05. XVII. 6). Department of Economic and Social Affairs." Statistics Division, United Nations, New York (2005). (pdf)
  7. Kish, Leslie. "Methods for design effects." Journal of Official Statistics 11.1 (1995): 55 (pdf)
  8. Kish, Leslie. "Weighting for unequal Pi." Journal of Official Statistics 8 (1992): 183–200. (pdf link)
  9. Tom Leinster (18 December 2014). "Effective Sample Size". https://golem.ph.utexas.edu/category/2014/12/effective_sample_size.html.
  10. "Design Effects and Effective Sample Size". http://docs.displayr.com/wiki/Design_Effects_and_Effective_Sample_Size.
  11. Frerichs, R.R. Rapid Surveys (unpublished), © 2004. Chapter 4 - Equal Probability of Selection (pdf)
  12. Valliant, Richard, Jill A. Dever, and Frauke Kreuter. Practical tools for designing and weighting survey samples. New York: Springer, 2013.
  13. Cochran, W. G. (1977). Sampling Techniques (3rd ed.). Nashville, TN: John Wiley & Sons. ISBN:978-0-471-16240-7
  14. Dever, Jill A., and Richard Valliant. "A comparison of variance estimators for post-stratification to estimated control totals." Survey Methodology 36.1 (2010): 45-56. (pdf)
  15. Kott, Phillip S. "Using calibration weighting to adjust for nonresponse and coverage errors." Survey Methodology 32.2 (2006): 133. (pdf)
  16. Holt, David, and T. M. Fred Smith. "Post stratification." Journal of the Royal Statistical Society, Series A (General) 142.1 (1979): 33-46. (pdf)
  17. Ghosh, Dhiren, and Andrew Vogt. "Sampling methods related to Bernoulli and Poisson Sampling." Proceedings of the Joint Statistical Meetings. American Statistical Association Alexandria, VA, 2002. (pdf)
  18. Deville, Jean-Claude, and Carl-Erik Särndal. "Calibration estimators in survey sampling." Journal of the American Statistical Association 87.418 (1992): 376-382.
  19. Brick, J. Michael, Jill Montaquila, and Shelley Roth. "Identifying problems with raking estimators." Annual meeting of the American Statistical Association, San Francisco, CA. 2003. (pdf)
  20. Keiding, Niels, and David Clayton. "Standardization and control for confounding in observational studies: a historical perspective." Statistical Science (2014): 529-558. (pdf)
  21. Thomas Lumley (https://stats.stackexchange.com/users/249135/thomas-lumley), How to estimate the (approximate) variance of the weighted mean?, URL (version: 2021-05-25): link
  22. Kalton, Graham. "Standardization: A technique to control for extraneous variables." Journal of the Royal Statistical Society, Series C (Applied Statistics) 17.2 (1968): 118-136.
  23. Henry, Kimberly A., and Richard Valliant. "A design effect measure for calibration weighting in single-stage samples." Survey Methodology 41.2 (2015): 315-331. (pdf)
  24. Gabler, Siegfried, Sabine Häder, and Partha Lahiri. "A model based justification of Kish's formula for design effects for weighting and clustering." Survey Methodology 25 (1999): 105–106. (pdf)
  25. Little, Roderick J., and Sonya Vartivarian. "Does weighting for nonresponse increase the variance of survey means?" Survey Methodology 31.2 (2005): 161. (pdf link)
  26. Liu, Jun, Vince Iannacchione, and Margie Byron. "Decomposing design effects for stratified sampling." Proceedings of the Survey Research Methods Section, American Statistical Association. 2002. (pdf)
  27. Spencer, Bruce D. "An approximate design effect for unequal weighting when measurements may correlate with selection probabilities." Survey Methodology 26 (2000): 137-138. (pdf)
  28. Park, Inho, and Hyunshik Lee. "The design effect: do we know all about it." Proceedings of the Annual Meeting of the American Statistical Association. 2001. (pdf)
  29. Alexander K. Rowe; Marcel Lama; Faustin Onikpo; Michael S. Deming (2002). "Design effects and intraclass correlation coefficients from a health facility cluster survey in Benin". International Journal for Quality in Health Care 14 (6): 521–523. doi:10.1093/intqhc/14.6.521. PMID 12515339.
  30. Bland, M (2005), "Cluster randomised trials in the medical literature", Notes for talks, York Univ
  31. Methods in Sample Surveys (pages 5–6)
  32. Kish, L. (1987). Weighting in [math]\displaystyle{ Deft^2 }[/math]. The Survey Statistician, June 1987. (this paper doesn't seem to be available online, but it is referenced in several places as the original source of this formula)
  33. Lynn, Peter, and Siegfried Gabler. Approximations to b* in the prediction of design effects due to clustering. No. 2004-07. ISER Working Paper Series, 2004. (pdf)
  34. Gabler, Siegfried, Sabine Hader, and Peter Lynn. Design effects for multiple design samples. No. 2005-12. ISER Working Paper Series, 2005. (pdf)
  35. Liu, J., and E. Aragon. "Subsampling strategies in longitudinal surveys." Proceedings of the Survey Research Methods Section, American Statistical Association. 2000. (pdf)
  36. "Design effects and survey planning". 2003. http://www.asasrms.org/Proceedings/y2003/Files/JSM2003-000820.pdf.
  37. Zins, Stefan, and Jan Pablo Burgard. "Considering interviewer and design effects when planning sample sizes." Survey Methodology 46.1 (2020): 93-119. (paper - html)
  38. Sarig T, Galili T, Eilat R. balance--a Python package for balancing biased data samples. arXiv preprint arXiv:2307.06024. 2023 Jul 12.
  39. Cochran, William G. "Modern methods in the sampling of human populations." American Journal of Public Health and the Nation's Health 41.6 (1951): 647–668.
