824 results for Nonparametric Estimation


Relevance: 100.00%

Abstract:

The receiver operating characteristic (ROC) curve is the most widely used measure for evaluating the performance of a diagnostic biomarker when predicting a binary disease outcome. The ROC curve displays the true positive rate (or sensitivity) against the false positive rate (or 1-specificity) for the different cut-off values used to classify an individual as healthy or diseased. In time-to-event studies, however, the disease status of an individual (e.g. dead or alive) is not a fixed characteristic; it varies over the course of the study. In such cases, evaluating the performance of the biomarker must take several issues into account: first, the time-dependent nature of the disease status; and second, the presence of incomplete data (e.g. the censored observations typically present in survival studies). Accordingly, to assess the discriminatory power of continuous biomarkers for time-dependent disease outcomes, time-dependent extensions of the true positive rate, the false positive rate, and the ROC curve have recently been proposed. In this work, we present new nonparametric estimators of the cumulative/dynamic time-dependent ROC curve that account for the possible modifying effect of current or past covariate measures on the discriminatory power of the biomarker. The proposed estimators can accommodate right-censored data as well as covariate-dependent censoring. The behavior of the proposed estimators is explored through simulations and illustrated using data from a cohort of patients who suffered from acute coronary syndrome.
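
To fix ideas, a minimal Python sketch of a plain inverse-probability-of-censoring-weighted (IPCW) estimator of the cumulative/dynamic ROC curve (cases: T <= t, controls: T > t) is given below; it omits the covariate adjustment that the estimators above provide, and the function names and the simple Kaplan-Meier censoring estimator are illustrative assumptions.

```python
import numpy as np

def km_censoring_survival(time, event):
    """Kaplan-Meier estimate of the censoring survival function G(t) = P(C > t)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    cens_times = np.unique(time[event == 0])
    surv, s = [], 1.0
    for u in cens_times:
        at_risk = np.sum(time >= u)
        s *= 1.0 - np.sum((time == u) & (event == 0)) / at_risk
        surv.append(s)
    surv = np.array(surv)

    def G(x):
        if surv.size == 0:                       # no censoring at all
            return np.ones_like(np.asarray(x, float))
        idx = np.searchsorted(cens_times, x, side="right") - 1
        return np.where(idx < 0, 1.0, surv[np.clip(idx, 0, surv.size - 1)])

    return G

def cumulative_dynamic_roc(marker, time, event, t, cutoffs):
    """IPCW estimates of TPR(c, t) and FPR(c, t) on a grid of marker cut-offs c."""
    marker, time, event = map(np.asarray, (marker, time, event))
    G = km_censoring_survival(time, event)
    case = (time <= t) & (event == 1)                         # cumulative cases at t
    w = np.where(case, 1.0 / np.maximum(G(time), 1e-12), 0.0) # IPC weights
    ctrl = time > t                                           # dynamic controls at t
    tpr = np.array([np.sum(w * (marker > c)) / np.sum(w) for c in cutoffs])
    fpr = np.array([np.mean(marker[ctrl] > c) for c in cutoffs])
    return tpr, fpr
```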

Relevance: 100.00%

Abstract:

In longitudinal studies of disease, patients may experience several events over the follow-up period. In such studies the sequentially ordered events are often of interest, leading to problems that have recently received much attention. Issues of interest include the estimation of the bivariate survival function, the marginal distributions, and the conditional distribution of the gap times. In this work we consider the estimation of the survival function conditional on a previous event. Different nonparametric approaches are considered for estimating these quantities, all based on the Kaplan-Meier estimator of the survival function. We explore the finite-sample behavior of the estimators through simulations. The methods proposed in this article are applied to a data set from a German Breast Cancer Study and are used to obtain predictors of the conditional survival probabilities as well as to study the influence of recurrence on overall survival.
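
As a simple point of comparison (not the estimators proposed above), one naive Kaplan-Meier-based sketch of P(T2 > t | T1 <= x) restricts attention to subjects whose first event was observed by time x; it ignores the dependent censoring between gap times, assumes the lifelines package, and uses illustrative variable names.

```python
import numpy as np
from lifelines import KaplanMeierFitter

def naive_conditional_survival(t1, d1, t2, d2, x, times):
    """t1, t2: first and second gap times; d1, d2: event indicators (1 = observed)."""
    keep = (d1 == 1) & (t1 <= x)               # first event observed by time x
    kmf = KaplanMeierFitter()
    kmf.fit(t2[keep], event_observed=d2[keep])
    return kmf.survival_function_at_times(times)  # pandas Series indexed by `times`

# e.g. naive_conditional_survival(t1, d1, t2, d2, x=365, times=np.arange(0, 1825, 30))
```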

Relevance: 100.00%

Abstract:

We present a method for estimating an extreme quantile that requires no distributional assumptions. The approach is based on transformed kernel estimation of the cumulative distribution function (cdf) and consists of a double-transformation kernel estimator. We derive optimal bandwidth selection methods that yield a direct expression for the smoothing parameter, and the bandwidth adapts to the given quantile level. The procedure is useful for large data sets and improves quantile estimation for heavy-tailed distributions compared with other methods. Implementation is straightforward and R programs are available.
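
A minimal single-transformation sketch of the idea, assuming a log/standardising transform and a rule-of-thumb bandwidth (the estimator above uses a double transformation and an explicit optimal bandwidth, so this is only illustrative):

```python
import numpy as np
from scipy import stats

def transformed_kernel_quantile(x, p, grid_size=4001):
    """Estimate the p-quantile of a heavy-tailed positive sample x by kernel
    estimation of the cdf on a transformed (approximately Gaussian) scale."""
    x = np.asarray(x, float)
    mu, sigma = np.mean(np.log(x)), np.std(np.log(x))
    y = (np.log(x) - mu) / sigma                     # transformed data, roughly N(0, 1)
    h = 1.06 * np.std(y) * len(y) ** (-0.2)          # rule-of-thumb bandwidth
    grid = np.linspace(y.min() - 5 * h, y.max() + 5 * h, grid_size)
    F_hat = stats.norm.cdf((grid[:, None] - y[None, :]) / h).mean(axis=1)  # kernel cdf
    y_p = grid[np.searchsorted(F_hat, p)]            # invert the monotone cdf on the grid
    return np.exp(mu + sigma * y_p)                  # back-transform to the original scale

# e.g. transformed_kernel_quantile(claims, 0.995) for a 99.5% loss quantile.
```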

Relevance: 100.00%

Abstract:

We consider two new approaches to nonparametric estimation of the leverage effect. The first approach uses stock prices alone. The second uses the data on stock prices together with a volatility instrument, such as the CBOE volatility index (VIX) or the Black-Scholes implied volatility. The theoretical justification for the instrument-based estimator relies on a certain invariance property, which can be exploited when high-frequency data are available. The price-only estimator is more robust, since it is valid under weaker assumptions; however, in the presence of a valid volatility instrument, it is inefficient, as the instrument-based estimator has a faster rate of convergence. We consider two empirical applications, in which we study the relationship between the leverage effect and the debt-to-equity ratio, credit risk, and illiquidity.
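
The crude sample-correlation proxies below only illustrate which data each configuration uses (block lengths and sampling grids are arbitrary assumptions); they are not the price-only or instrument-based estimators studied above.

```python
import numpy as np

def price_only_leverage_proxy(log_prices, block=78):
    """Correlation between block returns and the change in realized variance from the
    preceding to the following block (block = e.g. one day of 5-minute returns)."""
    r = np.diff(np.asarray(log_prices, float))
    n_blocks = len(r) // block
    r = r[: n_blocks * block].reshape(n_blocks, block)
    ret = r.sum(axis=1)                    # block returns
    rv = (r ** 2).sum(axis=1)              # block realized variances
    return np.corrcoef(ret[1:-1], rv[2:] - rv[:-2])[0, 1]

def instrument_based_leverage_proxy(returns, vix):
    """Correlation between returns and contemporaneous changes in the squared
    volatility instrument (e.g. VIX), both sampled on the same time grid."""
    dv = np.diff(np.asarray(vix, float) ** 2)
    r = np.asarray(returns, float)[1:]
    m = min(len(r), len(dv))
    return np.corrcoef(r[:m], dv[:m])[0, 1]
```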

Relevance: 100.00%

Abstract:

The average availability of a repairable system is the expected proportion of time that the system is operating in the interval [0, t]. This article discusses the nonparametric estimation of the average availability when (i) data on n complete cycles of system operation are available, (ii) the data are subject to right censoring, and (iii) the process is observed up to a specified time T. In each case, a nonparametric confidence interval for the average availability is also constructed. Simulations are conducted to assess the performance of the estimators.
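
For case (i), a minimal sketch, assuming n complete operating/repair cycles (U_i, D_i): estimate the limiting average availability by the ratio of sample means and attach a bootstrap percentile interval. The finite-horizon [0, t] and censored-data cases discussed above require more refined estimators than this.

```python
import numpy as np

rng = np.random.default_rng(0)

def average_availability(up, down, n_boot=2000, alpha=0.05):
    """Ratio-of-means estimate of availability with a bootstrap percentile interval."""
    up, down = np.asarray(up, float), np.asarray(down, float)
    a_hat = up.mean() / (up.mean() + down.mean())
    n, boot = len(up), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample whole cycles
        u, d = up[idx], down[idx]
        boot.append(u.mean() / (u.mean() + d.mean()))
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return a_hat, (lo, hi)
```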

Relevance: 100.00%

Abstract:

So far, in the bivariate set-up, the analysis of lifetime (failure time) data with multiple causes of failure has been carried out by treating each cause of failure separately, with failures from other causes considered as independent censoring. This approach is unrealistic in many situations. For example, in the analysis of mortality data on married couples, one would be interested in comparing the hazards for the same cause of death as well as in checking whether death due to one cause is more important for the partner's risk of death from other causes. In reliability analysis, one often has systems with more than one component, and many systems, subsystems and components have more than one cause of failure. The design of high-reliability systems generally requires that the individual components have extremely high reliability, even after long periods of time. Knowledge of the failure behaviour of a component can lead to savings in its cost of production and maintenance and, in some cases, to the preservation of human life. To improve reliability, it is necessary to identify the cause of failure down to the component level. By treating each cause of failure separately, with failures from other causes considered as independent censoring, the analysis of lifetime data would be incomplete. Motivated by this, we introduce a new approach to the analysis of bivariate competing risk data using the bivariate vector hazard rate of Johnson and Kotz (1975).
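
For reference, the vector hazard rate of Johnson and Kotz (1975) referred to above is usually written as follows (notation ours):

```latex
% Joint survival function: \bar F(t_1, t_2) = P(T_1 > t_1, T_2 > t_2)
\[
  \mathbf{h}(t_1, t_2) = \bigl(h_1(t_1, t_2),\, h_2(t_1, t_2)\bigr),
  \qquad
  h_i(t_1, t_2) = -\frac{\partial}{\partial t_i}\, \log \bar F(t_1, t_2), \quad i = 1, 2.
\]
```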

Relevance: 100.00%

Abstract:

This paper considers statistical models in which two different types of events, such as the diagnosis of a disease and the remission of the disease, occur alternately over time and are observed subject to right censoring. We propose nonparametric estimators for the joint distribution of the bivariate recurrence times and for the marginal distribution of the first recurrence time. In general, the marginal distribution of the second recurrence time cannot be estimated due to an identifiability problem, but a conditional distribution of the second recurrence time can be estimated nonparametrically. In the literature, statistical methods have been developed to estimate the joint distribution of bivariate recurrence times based on data from the first pair of censored bivariate recurrence times. These methods are efficient in the current model because recurrence times of higher orders are not used. Asymptotic properties of the estimators are established, and numerical studies demonstrate that the estimators perform well with practical sample sizes. We apply the proposed method to a Danish psychiatric case register data set to illustrate the methods and theory.

Relevance: 100.00%

Abstract:

Spatial characterization of non-Gaussian attributes in the earth sciences and engineering commonly requires the estimation of their conditional distribution. The indicator and probability kriging approaches of current nonparametric geostatistics provide approximations for estimating conditional distributions. They do not, however, provide results similar to those of the cumbersome implementation of simultaneous cokriging of indicators. This paper presents a new formulation, termed successive cokriging of indicators, that avoids the classic simultaneous solution and the related computational problems while obtaining results equivalent to those of the impractical simultaneous cokriging of indicators. A successive minimization of the estimation variance of the probability estimates is performed as additional data are successively included in the estimation process. In addition, the approach leads to an efficient nonparametric simulation algorithm for non-Gaussian random functions based on residual probabilities.
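
For orientation, here is a sketch of plain simple indicator kriging at a single threshold, with an assumed isotropic exponential covariance model; the successive cokriging formulation described above goes well beyond this baseline.

```python
import numpy as np

def exp_cov(h, sill=1.0, practical_range=10.0):
    """Isotropic exponential covariance with the usual practical-range convention."""
    return sill * np.exp(-3.0 * h / practical_range)

def simple_indicator_kriging(coords, values, threshold, target, practical_range=10.0):
    """Estimate P(Z(target) <= threshold) from data at locations `coords`."""
    ind = (values <= threshold).astype(float)
    p = ind.mean()                                   # global indicator mean
    sill = ind.var()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    d0 = np.linalg.norm(coords - target, axis=-1)
    C = exp_cov(d, sill, practical_range) + 1e-9 * np.eye(len(ind))  # jitter for stability
    c0 = exp_cov(d0, sill, practical_range)
    lam = np.linalg.solve(C, c0)                     # simple kriging weights
    est = p + lam @ (ind - p)
    return float(np.clip(est, 0.0, 1.0))             # order-relation correction by clipping
```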

Relevance: 100.00%

Abstract:

This paper proposes a constrained nonparametric method for estimating an input distance function. A regression function is estimated via kernel methods without functional-form assumptions. To guarantee that the estimated input distance function satisfies its theoretical properties, monotonicity constraints are imposed on the regression surface via the constraint weighted bootstrapping method borrowed from the statistics literature. The first, second, and cross partial analytical derivatives of the estimated input distance function are derived, so the elasticities measuring input substitutability can be computed from them. The method is then applied to a cross-section of 3,249 Norwegian timber producers.
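
A simpler stand-in for the idea of "kernel fit plus monotonicity constraint" is sketched below in one dimension: a Nadaraya-Watson estimate followed by an isotonic (pool-adjacent-violators) projection. The paper itself imposes the constraints by constraint weighted bootstrapping, i.e. by reweighting the kernel estimator rather than projecting it.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def monotone_kernel_fit(x, y, grid, bandwidth):
    """Nadaraya-Watson estimate on `grid`, projected onto nondecreasing functions."""
    x, y, grid = map(np.asarray, (x, y, grid))
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bandwidth) ** 2)  # Gaussian kernel
    m_hat = (w * y).sum(axis=1) / w.sum(axis=1)                          # kernel regression
    return IsotonicRegression(increasing=True).fit_transform(grid, m_hat)
```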

Relevance: 100.00%

Abstract:

2000 Mathematics Subject Classification: 60J80, 62M05

Relevance: 70.00%

Abstract:

Asymmetric discrete triangular distributions are introduced to extend the symmetric ones that serve as discrete associated kernels in the nonparametric estimation of discrete functions. The extension from one to two orders around the mode provides a large family of discrete distributions with finite support. By establishing a bridge between the Dirac and discrete uniform distributions, different shapes are obtained and their properties are investigated; in particular, the mean and variance are derived. Applications to discrete kernel estimators are given, together with a solution to a boundary bias problem.
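
Below is a sketch of a discrete associated-kernel estimator of a probability mass function using a symmetric discrete triangular kernel (one common parameterisation, normalised numerically); the asymmetric extension above modifies the two arms around the mode, and the boundary correction is not shown.

```python
import numpy as np

def discrete_triangular_weights(x, h=2, a=1.0):
    """Symmetric discrete triangular kernel with mode at integer x, arm h, order a:
    unnormalised weight (h+1)^a - |u - x|^a on u in {x-h, ..., x+h}."""
    support = np.arange(x - h, x + h + 1)
    w = (h + 1) ** a - np.abs(support - x) ** a
    return support, w / w.sum()

def discrete_kernel_pmf(sample, points, h=2, a=1.0):
    """Estimate the pmf at each integer in `points` from an integer-valued sample."""
    sample = np.asarray(sample)
    est = []
    for x in points:
        support, w = discrete_triangular_weights(x, h, a)
        mass = np.array([w[support == s].sum() for s in sample])  # kernel mass at each obs
        est.append(mass.mean())
    return np.array(est)
```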

Relevance: 70.00%

Abstract:

The aim of this article is to assess the effects of several territorial characteristics, specifically agglomeration economies, on industrial location processes in the Spanish region of Catalonia. In theory, agglomeration generates economies that favour the location of new establishments, but an excessive level of agglomeration may cause diseconomies as congestion effects arise. The empirical evidence on this matter is inconclusive, probably because the models used so far are not flexible enough. We use a more flexible semiparametric specification, which allows us to study the nonlinear relationship between the different types of agglomeration and location processes. Our main statistical source is the REIC (Catalan Manufacturing Establishments Register), which contains plant-level microdata on the location of new industrial establishments. Keywords: agglomeration economies, industrial location, Generalized Additive Models, nonparametric estimation, count data models.
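
A hedged sketch of the modelling idea with synthetic data: a Poisson count model for new establishments per area with a flexible spline term in the agglomeration level, approximating the semiparametric specification described above. Column names, spline degrees of freedom and the toy data-generating process are all illustrative assumptions, not the REIC data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy stand-in data: location counts rise with agglomeration, then congestion flattens them.
rng = np.random.default_rng(1)
agg = rng.gamma(2.0, 2.0, 200)
lam = np.exp(0.2 + 0.8 * np.log1p(agg) - 0.05 * agg)
df = pd.DataFrame({
    "new_plants": rng.poisson(lam),
    "agglomeration": agg,
    "sector": rng.choice(["food", "metal", "textile"], 200),
})

# Poisson regression with a B-spline in agglomeration (a GAM-like flexible term).
model = smf.glm("new_plants ~ bs(agglomeration, df=5) + C(sector)",
                data=df, family=sm.families.Poisson())
print(model.fit().summary())
```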

Relevance: 70.00%

Abstract:

The focus of the paper is the nonparametric estimation of an instrumental regression function P defined by conditional moment restrictions stemming from a structural econometric model: E[Y − P(Z) | W] = 0, involving endogenous variables Y and Z and instruments W. The function P is the solution of an ill-posed inverse problem, and we propose an estimation procedure based on Tikhonov regularization. The paper analyses identification and overidentification of this model and presents asymptotic properties of the estimated nonparametric instrumental regression function.
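
A minimal discretised Tikhonov sketch for the moment restriction E[Y − P(Z) | W] = 0: with row-normalised Gaussian kernel smoothers A (over W) and B (over Z), the regularised solution at the sample points is P_hat = (alpha I + B A)^{-1} B A y. Bandwidths and the regularisation parameter alpha are fixed here for illustration; their data-driven choice is the delicate part.

```python
import numpy as np

def gaussian_smoother(x, bandwidth):
    """Row-normalised Nadaraya-Watson weight matrix for a 1-D variable x."""
    x = np.asarray(x, float)
    k = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return k / k.sum(axis=1, keepdims=True)

def tikhonov_iv(y, z, w, h_z=0.3, h_w=0.3, alpha=0.05):
    """Tikhonov-regularised nonparametric IV fit, returning P_hat at the sample Z_i."""
    y = np.asarray(y, float)
    a = gaussian_smoother(w, h_w)          # approximates E[ . | W ]
    b = gaussian_smoother(z, h_z)          # approximates E[ . | Z ]
    n = len(y)
    return np.linalg.solve(alpha * np.eye(n) + b @ a, b @ (a @ y))
```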

Relevance: 70.00%

Abstract:

This paper presents calculations of semiparametric efficiency bounds for quantile treatment effect parameters when selection into treatment is based on observable characteristics. The paper also presents three estimation procedures for these parameters, all of which have two steps: a nonparametric estimation and a computation of the difference between the solutions of two distinct minimization problems. Root-N consistency, asymptotic normality, and the achievement of the semiparametric efficiency bound are shown for one of the three estimators. In the final part of the paper, an empirical application to a job training program reveals the importance of heterogeneous treatment effects, showing that for this program the effects are concentrated in the upper quantiles of the earnings distribution.
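
A sketch of the two-step structure under selection on observables: first estimate the propensity score (a logistic fit stands in here for a nonparametric first step), then take the difference of two weighted tau-th quantiles, each of which solves a weighted check-function minimisation. Variable roles (outcome y, treatment d, covariate matrix x) are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_quantile(y, w, tau):
    """tau-th quantile of y under weights w (a minimiser of the weighted check loss)."""
    order = np.argsort(y)
    y, w = y[order], w[order]
    cw = np.cumsum(w) / np.sum(w)
    return y[np.searchsorted(cw, tau)]

def quantile_treatment_effect(y, d, x, tau=0.5):
    """IPW-style quantile treatment effect at level tau; x is a 2-D covariate array."""
    y, d = np.asarray(y, float), np.asarray(d, float)
    ps = LogisticRegression(max_iter=1000).fit(x, d).predict_proba(x)[:, 1]
    w1 = d / ps                      # weights for the treated-outcome quantile
    w0 = (1 - d) / (1 - ps)          # weights for the control-outcome quantile
    return weighted_quantile(y, w1, tau) - weighted_quantile(y, w0, tau)
```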