Proportional hazards model

From HandWiki

Proportional hazards models are a class of survival models in statistics. Survival models relate the time that passes before some event occurs to one or more covariates that may be associated with that quantity of time. In a proportional hazards model, the unique effect of a unit increase in a covariate is multiplicative with respect to the hazard rate. For example, taking a drug may halve one's hazard rate for a stroke occurring, or changing the material from which a manufactured component is constructed may double its hazard rate for failure. Other types of survival models, such as accelerated failure time models, do not exhibit proportional hazards. The accelerated failure time model describes a situation where the biological or mechanical life history of an event is accelerated (or decelerated).


Survival models can be viewed as consisting of two parts: the underlying baseline hazard function, often denoted [math]\displaystyle{ \lambda_0(t) }[/math], describing how the risk of event per time unit changes over time at baseline levels of covariates; and the effect parameters, describing how the hazard varies in response to explanatory covariates. A typical medical example would include covariates such as treatment assignment, as well as patient characteristics such as age at start of study, gender, and the presence of other diseases at start of study, in order to reduce variability and/or control for confounding.

The proportional hazards condition[1] states that covariates are multiplicatively related to the hazard. In the simplest case of stationary coefficients, for example, a treatment with a drug may, say, halve a subject's hazard at any given time [math]\displaystyle{ t }[/math], while the baseline hazard may vary. Note, however, that this does not double the lifetime of the subject; the precise effect of the covariates on the lifetime depends on the type of [math]\displaystyle{ \lambda_0(t) }[/math]. The covariate is not restricted to binary predictors; in the case of a continuous covariate [math]\displaystyle{ x }[/math], it is typically assumed that the hazard responds exponentially; each unit increase in [math]\displaystyle{ x }[/math] results in proportional scaling of the hazard.

The Cox model

Sir David Cox observed that if the proportional hazards assumption holds (or is assumed to hold), then it is possible to estimate the effect parameter(s) without any consideration of the hazard function. This approach to survival data is called application of the Cox proportional hazards model,[2] sometimes abbreviated to Cox model or to proportional hazards model. However, Cox also noted that biological interpretation of the proportional hazards assumption can be quite tricky.[3][4]

The Cox partial likelihood, shown below, is obtained by using Breslow's estimate of the baseline hazard function and plugging it into the full likelihood; the result is a product of two factors. The first factor is the partial likelihood, in which the baseline hazard has "canceled out". The second factor is free of the regression coefficients and depends on the data only through the censoring pattern. The effect of covariates estimated by any proportional hazards model can thus be reported as hazard ratios.

Let Xi = (Xi1, … , Xip) be the realized values of the covariates for subject i. The hazard function for the Cox proportional hazards model has the form

[math]\displaystyle{ \lambda(t|X_i) = \lambda_0(t)\exp(\beta_1X_{i1} + \cdots + \beta_pX_{ip}) = \lambda_0(t)\exp(X_i \cdot \beta). }[/math]

This expression gives the hazard function at time t for subject i with covariate vector (explanatory variables) Xi.
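As a small numerical sketch of this formula (the constant baseline hazard and the coefficient values below are made up purely for illustration):

```python
import math

def cox_hazard(t, x, beta, baseline_hazard):
    """Hazard at time t for covariate vector x under the Cox model:
    lambda(t | x) = lambda_0(t) * exp(x . beta)."""
    linear_predictor = sum(b * xi for b, xi in zip(beta, x))
    return baseline_hazard(t) * math.exp(linear_predictor)

# Illustrative choices only: a constant baseline hazard and two covariates.
baseline = lambda t: 0.1
beta = [math.log(0.5), 0.2]   # one unit of the first covariate halves the hazard

untreated = cox_hazard(1.0, [0.0, 0.0], beta, baseline)
treated = cox_hazard(1.0, [1.0, 0.0], beta, baseline)
print(untreated, treated)     # 0.1 and 0.05: the hazard ratio is exp(log 0.5) = 0.5
```

The ratio of the two hazards is the same at every [math]\displaystyle{ t }[/math], whatever [math]\displaystyle{ \lambda_0(t) }[/math] is; that is precisely the proportional hazards property.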

The likelihood that the event observed for subject i occurs at time Yi can be written as:

[math]\displaystyle{ L_i(\beta) =\frac{\lambda(Y_i\mid X_i)}{\sum_{j:Y_j\ge Y_i}\lambda(Y_i\mid X_j)} =\frac{\lambda_0(Y_i)\theta_i}{\sum_{j:Y_j\ge Y_i}\lambda_0(Y_i)\theta_j} =\frac{\theta_i}{\sum_{j:Y_j\ge Y_i}\theta_j}, }[/math]

where θj = exp(Xjβ) and the summation is over the set of subjects j where the event has not occurred before time Yi (including subject i itself). Obviously 0 < Li(β) ≤ 1. This is a partial likelihood: the effect of the covariates can be estimated without the need to model the change of the hazard over time.
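A minimal sketch of this term (toy data with one covariate and no censoring; all numbers are made up):

```python
import math

def partial_likelihood_term(i, times, X, beta):
    """L_i(beta) = theta_i / sum over the risk set {j : Y_j >= Y_i},
    where theta_j = exp(X_j . beta); the baseline hazard never appears."""
    theta = [math.exp(sum(b * xj for b, xj in zip(beta, x))) for x in X]
    risk_set = [j for j in range(len(times)) if times[j] >= times[i]]
    return theta[i] / sum(theta[j] for j in risk_set)

times = [2.0, 3.0, 5.0]    # event times Y_i
X = [[1.0], [0.0], [1.0]]  # one covariate per subject
beta = [0.0]               # with beta = 0, every theta_j equals 1
print(partial_likelihood_term(0, times, X, beta))  # 1/3: all three subjects at risk at t = 2
```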

Treating the subjects as if they were statistically independent of each other, the joint probability of all realized events[5] is the following partial likelihood, where the occurrence of the event is indicated by Ci = 1:

[math]\displaystyle{ L(\beta) = \prod_{i:C_i=1} L_i(\beta) . }[/math]

The corresponding log partial likelihood is

[math]\displaystyle{ \ell(\beta) = \sum_{i:C_i=1} \left(X_i \cdot \beta - \log \sum_{j:Y_j\ge Y_i}\theta_j\right). }[/math]

This function can be maximized over β to produce maximum partial likelihood estimates of the model parameters.

The partial score function is

[math]\displaystyle{ \ell^\prime(\beta) = \sum_{i:C_i=1} \left(X_i - \frac{\sum_{j:Y_j\ge Y_i}\theta_jX_j}{\sum_{j:Y_j\ge Y_i}\theta_j}\right), }[/math]

and the Hessian matrix of the partial log likelihood is

[math]\displaystyle{ \ell^{\prime\prime}(\beta) = -\sum_{i:C_i=1} \left(\frac{\sum_{j:Y_j\ge Y_i}\theta_jX_jX_j^\prime}{\sum_{j:Y_j\ge Y_i}\theta_j} - \frac{\left[\sum_{j:Y_j\ge Y_i}\theta_jX_j\right] \left[\sum_{j:Y_j\ge Y_i}\theta_jX_j^\prime\right]}{\left[\sum_{j:Y_j\ge Y_i}\theta_j\right]^2}\right). }[/math]

Using this score function and Hessian matrix, the partial likelihood can be maximized using the Newton-Raphson algorithm. The inverse of the negative Hessian matrix, evaluated at the estimate of β, can be used as an approximate variance-covariance matrix for the estimate, and used to produce approximate standard errors for the regression coefficients.
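The iteration can be sketched as follows (the data set below is invented for illustration; ties are handled the Breslow way and no step-size control is used):

```python
import numpy as np

def cox_newton(times, events, X, iters=25):
    """Maximize the Cox log partial likelihood by Newton-Raphson,
    using the score and Hessian expressions given above."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        theta = np.exp(X @ beta)
        score = np.zeros(p)
        hessian = np.zeros((p, p))
        for i in range(n):
            if events[i] != 1:
                continue                     # only events contribute
            risk = times >= times[i]         # risk set {j : Y_j >= Y_i}
            s0 = theta[risk].sum()
            s1 = (theta[risk][:, None] * X[risk]).sum(axis=0)
            s2 = (theta[risk][:, None, None] * X[risk][:, :, None] * X[risk][:, None, :]).sum(axis=0)
            score += X[i] - s1 / s0
            hessian -= s2 / s0 - np.outer(s1, s1) / s0 ** 2
        beta = beta - np.linalg.solve(hessian, score)   # Newton step
    var = np.linalg.inv(-hessian)   # inverse observed information
    return beta, var

times = np.array([5., 6., 8., 9., 12., 15.])
events = np.array([1, 1, 1, 1, 0, 1])       # 0 = censored
X = np.array([[1.], [1.], [0.], [1.], [0.], [0.]])
beta_hat, var = cox_newton(times, events, X)
print(beta_hat, np.sqrt(np.diag(var)))      # coefficient and its standard error
```

A production implementation would add step-halving and a convergence test; this sketch relies on the concavity of the log partial likelihood.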

Tied times

Several approaches have been proposed to handle situations in which there are ties in the time data. Breslow's method describes the approach in which the procedure described above is used unmodified, even when ties are present. An alternative approach that is considered to give better results is Efron's method.[6] Let tj denote the unique times, let Hj denote the set of indices i such that Yi = tj and Ci = 1, and let mj = |Hj|. Efron's approach maximizes the following partial likelihood.

[math]\displaystyle{ L(\beta) = \prod_j \frac{\prod_{i\in H_j}\theta_i}{\prod_{\ell=0}^{m_j-1} \left[\sum_{i:Y_i\ge t_j}\theta_i - \frac{\ell}{m_j} \sum_{i\in H_j} \theta_i\right] }. }[/math]

The corresponding log partial likelihood is

[math]\displaystyle{ \ell(\beta) = \sum_j \left(\sum_{i\in H_j} X_i \cdot \beta -\sum_{\ell=0}^{m_j-1}\log\left(\sum_{i:Y_i\ge t_j}\theta_i - \frac{\ell}{m_j} \sum_{i\in H_j}\theta_i\right)\right), }[/math]

the score function is

[math]\displaystyle{ \ell^\prime(\beta) = \sum_j \left(\sum_{i\in H_j} X_i -\sum_{\ell=0}^{m_j-1}\frac{\sum_{i:Y_i\ge t_j}\theta_iX_i - \frac{\ell}{m_j}\sum_{i\in H_j}\theta_iX_i}{\sum_{i:Y_i\ge t_j}\theta_i - \frac{\ell}{m_j}\sum_{i\in H_j}\theta_i}\right), }[/math]

and the Hessian matrix is

[math]\displaystyle{ \ell^{\prime\prime}(\beta) = -\sum_j \sum_{\ell=0}^{m_j-1} \left(\frac{\sum_{i:Y_i\ge t_j}\theta_iX_iX_i^\prime - \frac{\ell}{m_j}\sum_{i\in H_j}\theta_iX_iX_i^\prime}{\phi_{j,\ell,m_j}} - \frac{Z_{j,\ell,m_j} Z_{j,\ell,m_j}^\prime}{\phi_{j,\ell,m_j}^2}\right), }[/math]

where

[math]\displaystyle{ \phi_{j,\ell,m_j} = \sum_{i:Y_i\ge t_j}\theta_i - \frac{\ell}{m_j}\sum_{i\in H_j}\theta_i }[/math]
[math]\displaystyle{ Z_{j,\ell,m_j} = \sum_{i:Y_i\ge t_j}\theta_iX_i - \frac{\ell}{m_j}\sum_{i\in H_j}\theta_iX_i. }[/math]

Note that when Hj is empty (all observations with time tj are censored), the summands in these expressions are treated as zero.
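Efron's log partial likelihood can be sketched directly (toy data with a tie at t = 3; all values invented):

```python
import math

def efron_log_partial_likelihood(times, events, X, beta):
    """Log partial likelihood with Efron's correction for tied event times."""
    n = len(times)
    theta = [math.exp(sum(b * xj for b, xj in zip(beta, x))) for x in X]
    ll = 0.0
    for t in sorted({times[i] for i in range(n) if events[i] == 1}):
        H = [i for i in range(n) if times[i] == t and events[i] == 1]
        risk_sum = sum(theta[i] for i in range(n) if times[i] >= t)
        tied_sum = sum(theta[i] for i in H)
        ll += sum(sum(b * xj for b, xj in zip(beta, X[i])) for i in H)
        for l in range(len(H)):              # l = 0, ..., m_j - 1
            ll -= math.log(risk_sum - l / len(H) * tied_sum)
    return ll

times = [3.0, 3.0, 4.0, 6.0]   # two tied events at t = 3
events = [1, 1, 1, 0]
X = [[1.0], [0.0], [1.0], [0.0]]
print(efron_log_partial_likelihood(times, events, X, [0.0]))  # -log(4*3*2)
```

When no times are tied (every mj = 1), the inner loop runs once with ℓ = 0 and the expression reduces to the ordinary log partial likelihood.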

Time-varying predictors and coefficients

Extensions to time-dependent variables, time-dependent strata, and multiple events per subject can be incorporated by the counting process formulation of Andersen and Gill.[7]

In this context, it could also be mentioned that it is theoretically possible to specify the effect of covariates by using additive hazards,[8] i.e. specifying

[math]\displaystyle{ \lambda(t|X_i) = \lambda_0(t) + \beta_1X_{i1} + \cdots + \beta_pX_{ip} = \lambda_0(t) + X_i \cdot \beta. }[/math]

If such additive hazards models are used in situations where (log-)likelihood maximization is the objective, care must be taken to restrict [math]\displaystyle{ \lambda(t\mid X_i) }[/math] to non-negative values. Perhaps as a result of this complication, such models are seldom seen. If the objective is instead least squares, the non-negativity restriction is not strictly required.

Specifying the baseline hazard function

The Cox model may be specialized if a reason exists to assume that the baseline hazard follows a particular form. In this case, the baseline hazard [math]\displaystyle{ \lambda_0(t) }[/math] is replaced by a given function. For example, assuming the hazard function to be the Weibull hazard function gives the Weibull proportional hazards model.

Incidentally, using the Weibull baseline hazard is the only circumstance under which the model satisfies both the proportional hazards and the accelerated failure time assumptions.
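Survival times from a Weibull proportional hazards model can be generated by inverting the survivor function, along the lines of Bender et al.;[9] the parameter values below are purely illustrative:

```python
import math
import random

def weibull_ph_time(x, beta, lam=0.05, gamma=1.5):
    """Draw a survival time with hazard
    lambda(t | x) = lam * gamma * t**(gamma - 1) * exp(x . beta)
    via inverse transform: T = (-log U / (lam * exp(x . beta)))**(1/gamma)."""
    u = random.random()
    linpred = sum(b * xi for b, xi in zip(beta, x))
    return (-math.log(u) / (lam * math.exp(linpred))) ** (1.0 / gamma)

random.seed(0)
beta = [math.log(2.0)]   # one covariate unit doubles the hazard
baseline_times = [weibull_ph_time([0.0], beta) for _ in range(20000)]
exposed_times = [weibull_ph_time([1.0], beta) for _ in range(20000)]
ratio = (sum(baseline_times) / len(baseline_times)) / (sum(exposed_times) / len(exposed_times))
print(ratio)   # near 2**(1/1.5), about 1.59, not 2
```

Doubling the hazard shortens the mean lifetime by the factor 2^(1/γ) rather than 2, illustrating the earlier remark that a covariate's effect on the hazard does not translate directly into its effect on the lifetime.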

The generic term parametric proportional hazards models can be used to describe proportional hazards models in which the hazard function is specified. The Cox proportional hazards model is sometimes called a semiparametric model by contrast.

Some authors use the term Cox proportional hazards model even when specifying the underlying hazard function,[9] to acknowledge the debt of the entire field to David Cox.

The term Cox regression model (omitting proportional hazards) is sometimes used to describe the extension of the Cox model to include time-dependent factors. However, this usage is potentially ambiguous since the Cox proportional hazards model can itself be described as a regression model.

Relationship to Poisson models

There is a relationship between proportional hazards models and Poisson regression models which is sometimes used to fit approximate proportional hazards models in software for Poisson regression. The usual reason for doing this is that calculation is much quicker. This was more important in the days of slower computers but can still be useful for particularly large data sets or complex problems. Laird and Olivier (1981)[10] provide the mathematical details. They note, "we do not assume [the Poisson model] is true, but simply use it as a device for deriving the likelihood." McCullagh and Nelder's[11] book on generalized linear models has a chapter on converting proportional hazards models to generalized linear models.
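The simplest instance of this relationship, with a constant baseline hazard (an exponential model), can be checked directly: the exponential proportional hazards log likelihood and the Poisson log likelihood with response di and offset log ti differ only by a term that does not involve the parameters, so both are maximized by the same coefficients. The data below are invented:

```python
import math

def exponential_ph_loglik(params, times, events, X):
    """Exponential PH model: hazard_i = exp(alpha + x_i . beta);
    each subject contributes d_i * log(h_i) - t_i * h_i."""
    alpha, *beta = params
    ll = 0.0
    for t, d, x in zip(times, events, X):
        h = math.exp(alpha + sum(b * xi for b, xi in zip(beta, x)))
        ll += d * math.log(h) - t * h
    return ll

def poisson_loglik_with_offset(params, times, events, X):
    """Poisson model: response d_i, mean mu_i = t_i * exp(alpha + x_i . beta)."""
    alpha, *beta = params
    ll = 0.0
    for t, d, x in zip(times, events, X):
        mu = t * math.exp(alpha + sum(b * xi for b, xi in zip(beta, x)))
        ll += d * math.log(mu) - mu - math.lgamma(d + 1)
    return ll

times = [2.0, 5.0, 1.0, 7.0]
events = [1, 0, 1, 1]
X = [[0.0], [1.0], [1.0], [0.0]]
diff1 = exponential_ph_loglik([-1.0, 0.5], times, events, X) - poisson_loglik_with_offset([-1.0, 0.5], times, events, X)
diff2 = exponential_ph_loglik([-2.0, 1.5], times, events, X) - poisson_loglik_with_offset([-2.0, 1.5], times, events, X)
print(diff1, diff2)   # identical: the gap does not depend on the parameters
```

Piecewise-constant baseline hazards extend the same trick: splitting follow-up time into intervals yields one Poisson observation per subject-interval.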

Under high-dimensional setup

In the high-dimensional setting, where the number of covariates p is large compared to the sample size n, the LASSO method is one of the classical model-selection strategies. Tibshirani (1997) proposed a Lasso procedure for the proportional hazards regression parameter.[12] The Lasso estimator of the regression parameter β is defined as the minimizer of the negative of the Cox partial log-likelihood under an L1-norm type penalty:

[math]\displaystyle{ \hat\beta = \underset{\beta}{\operatorname{argmin}} \left( -\sum_j \left(\sum_{i\in H_j} X_i \cdot \beta -\sum_{\ell=0}^{m_j-1}\log\left(\sum_{i:Y_i\ge t_j}\theta_i - \frac{\ell}{m_j}\sum_{i\in H_j}\theta_i\right)\right) + \lambda \|\beta\|_1 \right). }[/math]
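A rudimentary way to compute such an estimator is proximal gradient descent (ISTA): a gradient step on the negative log partial likelihood followed by soft-thresholding for the ℓ1 penalty. The sketch below handles ties the Breslow way; the data, step size, and penalty levels are made up:

```python
import numpy as np

def neg_partial_loglik_grad(beta, times, events, X):
    """Gradient of the negative Cox log partial likelihood (Breslow ties)."""
    theta = np.exp(X @ beta)
    grad = np.zeros_like(beta)
    for i in np.where(events == 1)[0]:
        risk = times >= times[i]          # risk set {j : Y_j >= Y_i}
        s0 = theta[risk].sum()
        s1 = (theta[risk][:, None] * X[risk]).sum(axis=0)
        grad -= X[i] - s1 / s0
    return grad

def cox_lasso(times, events, X, lam, step=0.1, iters=2000):
    """Proximal gradient: gradient step, then soft-threshold at step * lam."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        z = beta - step * neg_partial_loglik_grad(beta, times, events, X)
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return beta

times = np.array([5., 6., 8., 9., 12., 15.])
events = np.array([1, 1, 1, 1, 0, 1])
X = np.array([[1., 0.3], [1., -0.2], [0., 0.1], [1., 0.4], [0., -0.5], [0., 0.2]])
b_unpen = cox_lasso(times, events, X, lam=0.0)   # plain maximum partial likelihood
b_heavy = cox_lasso(times, events, X, lam=5.0)   # strong penalty shrinks everything to zero
print(b_unpen, b_heavy)
```

Dedicated software uses faster solvers (coordinate descent, FISTA) and selects λ by cross-validation, but the penalized objective is the same.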

There has been theoretical progress on this topic recently.[13][14][15][16]

References

  1. Breslow, N. E. (1975). "Analysis of Survival Data under the Proportional Hazards Model". International Statistical Review / Revue Internationale de Statistique 43 (1): 45–57. doi:10.2307/1402659. 
  2. Cox, David R (1972). "Regression Models and Life-Tables". Journal of the Royal Statistical Society, Series B 34 (2): 187–220. 
  3. Reid, N. (1994). "A Conversation with Sir David Cox". Statistical Science 9 (3): 439–455. doi:10.1214/ss/1177010394. 
  4. "Some remarks on the analysis of survival data". the First Seattle Symposium of Biostatistics: Survival Analysis. 1997. 
  5. "Each failure contributes to the likelihood function", Cox (1972), page 191.
  6. Efron, Bradley (1977). "The Efficiency of Cox's Likelihood Function for Censored Data". Journal of the American Statistical Association 72 (359): 557–565. doi:10.1080/01621459.1977.10480613. 
  7. "timereg: Flexible Regression Models for Survival Data". CRAN. 
  8. "Some remarks on the analysis of survival data". the First Seattle Symposium of Biostatistics: Survival Analysis. 1997. 
  9. Bender, R.; Augustin, T.; Blettner, M. (2006). "Generating survival times to simulate Cox proportional hazards models". Statistics in Medicine 24 (11): 1713–1723. doi:10.1002/sim.2369. PMID 16680804. 
  10. Nan Laird and Donald Olivier (1981). "Covariance Analysis of Censored Survival Data Using Log-Linear Analysis Techniques". Journal of the American Statistical Association 76 (374): 231–240. doi:10.2307/2287816. 
  11. P. McCullagh and J. A. Nelder (2000). "Chapter 13: Models for Survival Data". Generalized Linear Models (Second ed.). Boca Raton, Florida: Chapman & Hall/CRC. ISBN 978-0-412-31760-6.  (Second edition 1989; first CRC reprint 1999.)
  12. Tibshirani, R. (1997). "The Lasso method for variable selection in the Cox model". Statistics in Medicine 16 (4): 385–395. doi:10.1002/(SICI)1097-0258(19970228)16:4<385::AID-SIM380>3.0.CO;2-3. 
  13. Bradić, J.; Fan, J.; Jiang, J. (2011). "Regularization for Cox's proportional hazards model with NP-dimensionality". Annals of Statistics 39 (6): 3092–3120. doi:10.1214/11-AOS911. PMID 23066171. 
  14. Bradić, J.; Song, R. (2015). "Structured Estimation in Nonparametric Cox Model". Electronic Journal of Statistics 9 (1): 492–534. doi:10.1214/15-EJS1004. 
  15. Kong, S.; Nan, B. (2014). "Non-asymptotic oracle inequalities for the high-dimensional Cox regression via Lasso". Statistica Sinica 24 (1): 25–42. doi:10.5705/ss.2012.240. PMID 24516328. 
  16. Huang, J.; Sun, T.; Ying, Z.; Yu, Y.; Zhang, C. H. (2013). "Oracle inequalities for the lasso in the Cox model". The Annals of Statistics 41 (3): 1142–1165. doi:10.1214/13-AOS1098. PMID 24086091.