11 results for Interval censoring
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Survival analysis is used in several fields to analyze the time elapsed between two events. What distinguishes survival analysis from other areas of statistics is that the data are usually censored. Interval censoring arises when the final event of interest is not directly observable and the failure time is only known to lie within a given interval. An even more complex censoring scheme arises when both the initial and the final time are interval-censored; this situation is known as double censoring. In this paper we give a formal description of a parametric Bayesian method for the analysis of interval-censored and doubly censored data, together with clear guidelines for its practical use. The proposed methodology is illustrated with data from a cohort of haemophiliac patients who were infected with HIV in the early 1980s.
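The abstract does not spell out the model; as a minimal sketch of the interval-censored likelihood that any parametric method of this kind is built on (the Weibull model, the synthetic intervals and the maximum-likelihood fit below are illustrative assumptions, not the paper's Bayesian specification):

```python
# Minimal sketch: parametric likelihood for interval-censored data.
# The Weibull model and the synthetic intervals are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Each observation is an interval (L, R] known to contain the failure time.
intervals = np.array([(1.0, 3.0), (2.5, 6.0), (0.5, 2.0), (4.0, 9.0)])

def neg_log_lik(params):
    shape, scale = np.exp(params)  # log-parametrization keeps both positive
    cdf = lambda t: weibull_min.cdf(t, shape, scale=scale)
    # Interval-censored contribution: P(L < T <= R) = F(R) - F(L)
    probs = cdf(intervals[:, 1]) - cdf(intervals[:, 0])
    return -np.sum(np.log(np.clip(probs, 1e-300, None)))

fit = minimize(neg_log_lik, x0=np.log([1.0, 3.0]), method="Nelder-Mead")
print("MLE (shape, scale):", np.exp(fit.x))
```

A Bayesian treatment, as in the paper, would combine this same likelihood with a prior on the parameters and summarize the posterior instead of maximizing.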
Abstract:
In survival analysis, interval-censored data are usually handled via maximum likelihood estimation. In order to use a simplified expression of the likelihood function, standard methods assume that the conditions producing the censoring do not affect the failure time. In this paper we formalize the conditions that ensure the validity of this simplified likelihood. Specifically, we state different noninformative censoring conditions and define a constant-sum condition analogous to the one derived in the right-censoring context. We also prove that inferences obtained from the simplified likelihood are correct when these conditions hold. Finally, we address the identifiability of the failure-time distribution function from the observed information and study the possibility of testing whether the constant-sum condition holds.
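For reference, the simplified likelihood in question treats the observed intervals as if they were fixed; in conventional notation (our rendering, not a formula quoted from the paper), with each failure time $T_i$ only known to lie in $(L_i, R_i]$ and $F$ the failure-time distribution function,

$$\mathcal{L}(F) = \prod_{i=1}^{n} \bigl( F(R_i) - F(L_i) \bigr),$$

and the noninformative and constant-sum conditions discussed above are what justify ignoring the mechanism that generated the intervals $(L_i, R_i]$.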
Abstract:
Uncertainties that are not considered in the analytical model of the plant dramatically degrade the performance of fault detection in practice. To cope with this prevalent problem, in this paper we develop a methodology based on Modal Interval Analysis that takes those uncertainties in the plant model into account. A fault detection method is developed on top of this model; it is robust to uncertainty and produces no false alarms. As soon as a fault is detected, an ANFIS model is trained online to capture the main behavior of the fault that has occurred, which can then be used for fault accommodation. The simulation results clearly demonstrate the capability of the proposed method to accomplish both tasks appropriately.
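As a minimal sketch of the consistency-check idea (plain interval arithmetic on an assumed first-order plant model; Modal Interval Analysis proper additionally distinguishes proper and improper intervals, and the model and numbers below are illustrative):

```python
# Minimal sketch of interval-based fault detection: propagate interval
# parameter uncertainty through an assumed first-order plant model and
# flag a fault when the measurement leaves the predicted envelope.

def step_envelope(x_lo, x_hi, u, a=(0.78, 0.82), b=(0.18, 0.22)):
    """One step of x[k+1] = a*x[k] + b*u[k] with interval a, b and u >= 0."""
    products = [a[0] * x_lo, a[0] * x_hi, a[1] * x_lo, a[1] * x_hi]
    return min(products) + b[0] * u, max(products) + b[1] * u

x_lo = x_hi = 0.0
for k, (u, y_meas) in enumerate([(1.0, 0.20), (1.0, 0.36), (1.0, 1.30)]):
    x_lo, x_hi = step_envelope(x_lo, x_hi, u)
    status = "OK" if x_lo <= y_meas <= x_hi else "FAULT"
    print(f"k={k}: envelope=[{x_lo:.3f}, {x_hi:.3f}], y={y_meas} -> {status}")
```

Because the envelope covers every behavior consistent with the parameter uncertainty, a measurement outside it cannot be explained by uncertainty alone, which is what makes the test free of false alarms.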
Abstract:
A model-based approach for fault diagnosis is proposed, in which fault detection is based on checking the consistency of the Analytical Redundancy Relations (ARRs) using an interval tool. The tool accounts for uncertainty in the parameters and the measurements by means of intervals. Faults are explicitly included in the model, which allows additional information to be exploited. This information is obtained from partial derivatives computed from the ARRs. The signs of the residuals are used to prune the candidate space when performing the fault diagnosis task. The method is illustrated on a two-tank example, in which these aspects are shown to have an impact on diagnosis and fault discrimination, since the proposed method goes beyond structural methods.
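A minimal sketch of the sign-based pruning step, with a made-up fault signature table standing in for the actual ARRs of the two-tank model:

```python
# Minimal sketch of sign-based candidate pruning in fault diagnosis.
# Each fault maps to the expected sign of every ARR residual
# (+1, -1, or 0 for "not affected"), as would be derived from the
# partial derivatives of the ARRs; the table itself is invented.
EXPECTED_SIGNS = {
    "leak_tank1":   {"ARR1": -1, "ARR2":  0},
    "leak_tank2":   {"ARR1":  0, "ARR2": -1},
    "clogged_pipe": {"ARR1": +1, "ARR2": -1},
}

def diagnose(observed_signs):
    """Keep only faults whose expected residual signs match the observed ones."""
    return [fault for fault, signs in EXPECTED_SIGNS.items()
            if all(signs[r] == s for r, s in observed_signs.items())]

print(diagnose({"ARR1": +1, "ARR2": -1}))   # -> ['clogged_pipe']
```

Using the signs, and not merely the binary consistent/inconsistent verdict of each ARR, is what discriminates between faults that share the same structural signature.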
Abstract:
Scoring rules that elicit an entire belief distribution through the elicitation of point beliefs are time-consuming and demand considerable cognitive effort. Moreover, the results are valid only when agents are risk-neutral or when one uses probabilistic rules. We investigate a class of rules in which the agent has to choose an interval and is rewarded (deterministically) on the basis of the chosen interval and the realization of the random variable. We formulate an efficiency criterion for such rules and present a specific interval scoring rule. For single-peaked beliefs, our rule gives information about both the location and the dispersion of the belief distribution. These results hold for all concave utility functions.
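As a minimal illustration of this class of rules (the particular payoff below, a hit bonus minus a width penalty, is our own assumption, not the specific rule proposed in the paper):

```python
# Minimal sketch of a deterministic interval scoring rule: the agent
# picks an interval [a, b] and the reward depends only on the interval
# and the realization x. The payoff function is illustrative.
def interval_score(a: float, b: float, x: float, width_penalty: float = 0.5) -> float:
    hit = 1.0 if a <= x <= b else 0.0
    return hit - width_penalty * (b - a)

print(interval_score(0.2, 0.8, 0.5))   # 0.7: wide interval, realization covered
print(interval_score(0.4, 0.6, 0.5))   # 0.9: tighter interval scores better
print(interval_score(0.4, 0.6, 0.9))   # -0.1: the interval missed
```

The tension the rule exploits is visible even in this toy version: the location of the chosen interval reveals where the agent's beliefs are centred, while its width reveals how dispersed they are.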
Abstract:
This paper analyses the demand for private health care by Spanish households using a micro budget survey. The methodology takes care of the three-part decision process involved in this type of behaviour, namely the decision to use private health care, how often to do so and how much to spend each time, as well as the effects of unobserved heterogeneity. Since the theoretical framework corresponds to the Grossman model of health investment, the results also provide a test of the theory when these issues are considered. Finally, the evidence obtained also suggests that the current system of tax deductions for private health care expenditures is regressive.
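Schematically, a three-part specification of this kind can be written as follows (our rendering in conventional hurdle-model notation; the paper's exact equations are not given in the abstract):

$$\Pr(d_i = 1) = \Phi(x_i'\beta_1), \qquad n_i \mid d_i = 1 \sim \text{count}(x_i'\beta_2), \qquad \ln y_{ij} = x_i'\beta_3 + u_i + \varepsilon_{ij},$$

where $d_i$ indicates whether household $i$ uses private health care at all, $n_i$ is how often it does so, $y_{ij}$ is the expenditure on occasion $j$, and $u_i$ captures the unobserved heterogeneity.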
Abstract:
We represent interval-ordered homothetic preferences with a quantitative homothetic utility function and a multiplicative bias. When preferences are weakly ordered (i.e. when indifference is transitive), such a bias equals 1. When indifference is intransitive, the biasing factor is a positive function smaller than 1 and measures a threshold of indifference. We show that the bias is constant if and only if preferences are semiordered, and we identify conditions ensuring a linear utility function. We illustrate our approach with indifference sets on a two-dimensional commodity space.
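One natural way to write such a representation is as follows (our schematic rendering of the abstract's description; the paper's exact form may differ):

$$x \succ y \iff \sigma(x)\,u(x) > u(y), \qquad 0 < \sigma(\cdot) \le 1,$$

so that $\sigma \equiv 1$ recovers the usual weak-order representation, while a constant $\sigma < 1$ acts as a multiplicative analogue of a Scott-Suppes threshold and yields a semiorder.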
Abstract:
The approximants of regular continued fractions constitute `best approximations' to the numbers they converge to, in two senses known as the first and the second kind. This property of continued fractions provides a solution to Gosper's problem of the batting average: if the batting average of a baseball player is 0.334, what is the minimum number of times he has been at bat? In this paper, we tackle, in a sense, the inverse question: given a rational number P/Q, what is the set of all numbers for which P/Q is a `best approximation' of one or the other kind? We prove that in both cases these `Optimality Sets' are intervals, and we give a precise description of their endpoints.
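A worked instance of the batting-average problem (the Stern-Brocot search below is a standard way to find the smallest-denominator fraction in an interval; it is our illustration, not code from the paper): an average of 0.334, read as rounded to three decimal places, confines hits/at-bats to [0.3335, 0.3345).

```python
# Gosper's batting-average problem: find the fraction with the smallest
# denominator in [0.3335, 0.3345) by walking the Stern-Brocot tree.
from fractions import Fraction

def simplest_in(lo: Fraction, hi: Fraction) -> Fraction:
    """Smallest-denominator fraction f with lo <= f < hi."""
    a, b, c, d = 0, 1, 1, 0              # bounds 0/1 and 1/0
    while True:
        m = Fraction(a + c, b + d)       # mediant of the current bounds
        if m < lo:
            a, b = m.numerator, m.denominator   # mediant too small
        elif m >= hi:
            c, d = m.numerator, m.denominator   # mediant too large
        else:
            return m

avg = simplest_in(Fraction(3335, 10000), Fraction(3345, 10000))
print(avg, "->", avg.denominator, "at-bats")     # 96/287 -> 287 at-bats
```

So a reported average of 0.334 requires at least 287 at-bats (96 hits).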
Abstract:
Background: In longitudinal studies where subjects experience recurrent incidents over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required that take the within-subject correlation into account. Methods: For recurrent event data with censored failure times, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox's proportional hazards model. In this paper, we review the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the methods' performance on a real dataset from a cohort study with bronchial obstruction. Results: We find substantial differences between the methods, and no single method is optimal. AG and PWP seem preferable to WLW for low correlation levels, but the situation reverses for high correlations. Conclusions: All methods are robust to censoring, worsen with increasing recurrence levels and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, even though they are theoretically well developed.
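In conventional notation (our summary of the standard definitions, not formulas from the paper), the three models specify the intensity for subject $i$ and event $k$ as:

AG: $\lambda_i(t) = Y_i(t)\,\lambda_0(t)\exp\{\beta^{\top} Z_i(t)\}$, one common baseline hazard with counting-process (start, stop] risk intervals;

WLW: $\lambda_{ik}(t) = Y_{ik}(t)\,\lambda_{0k}(t)\exp\{\beta_k^{\top} Z_i(t)\}$, event-specific baselines with every subject at risk for event $k$ from time zero;

PWP: the same event-specific form, but with $Y_{ik}(t) = 1$ only after event $k-1$ has occurred, so the risk sets are conditional on the event history.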