366 results for weibull simulaatio
Abstract:
This article proposes computing sensitivities of upper tail probabilities of random sums by the saddlepoint approximation. The sensitivity considered is the derivative of the upper tail probability with respect to the parameter of the summation index distribution. Random sums with Poisson- or Geometric-distributed summation indices and Gamma- or Weibull-distributed summands are considered. The score method with importance sampling is considered as an alternative approximation. Numerical studies show that both the saddlepoint approximation and the score method with importance sampling are very accurate, but the saddlepoint approximation is substantially faster. The suggested saddlepoint approximation can therefore be conveniently used in various scientific problems.
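As an illustration of the technique, the following is a minimal sketch of a Lugannani-Rice saddlepoint approximation for the upper tail of a compound Poisson sum with Gamma summands; the parameter values are arbitrary, and the sensitivity studied in the paper (the derivative with respect to the Poisson parameter) could be approximated, for example, by finite differences on this quantity.

```python
# Sketch: Lugannani-Rice saddlepoint approximation of P(S > x) for a random sum
# S = X_1 + ... + X_N with N ~ Poisson(lam) and X_i ~ Gamma(shape=a, scale=b).
# Generic illustration only, not the paper's code.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

lam, a, b = 5.0, 2.0, 1.0      # illustrative parameters
x = 20.0                       # tail threshold, above E[S] = lam*a*b = 10

def K(s):   return lam * ((1.0 - b * s) ** (-a) - 1.0)                      # CGF of S
def K1(s):  return lam * a * b * (1.0 - b * s) ** (-a - 1.0)                # K'(s)
def K2(s):  return lam * a * (a + 1) * b**2 * (1.0 - b * s) ** (-a - 2.0)   # K''(s)

# Saddlepoint: solve K'(s_hat) = x on (0, 1/b); x above the mean gives s_hat > 0.
s_hat = brentq(lambda s: K1(s) - x, 1e-10, 1.0 / b - 1e-10)

w = np.sign(s_hat) * np.sqrt(2.0 * (s_hat * x - K(s_hat)))
u = s_hat * np.sqrt(K2(s_hat))
tail_sp = norm.sf(w) + norm.pdf(w) * (1.0 / u - 1.0 / w)    # Lugannani-Rice tail

# Crude Monte Carlo check of the same tail probability.
rng = np.random.default_rng(0)
N = rng.poisson(lam, size=100_000)
S = np.array([rng.gamma(a, b, n).sum() for n in N])
print(tail_sp, np.mean(S > x))
```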
Abstract:
This work presents a characterization of the surface wind climatology over the Iberian Peninsula (IP). For this objective, an unprecedented observational database has been developed. The database covers a period of 6 years (2002–2007) and consists of hourly wind speed and wind direction data recorded at 514 automatic weather stations. The original observations underwent a quality control process to remove gross errors from the data set. In the first step, the annual and seasonal mean behaviour of the wind field is presented. This analysis shows the high spatial variability of the wind as a result of its interaction with the main orographic features of the IP. In order to simplify the characterization of the wind, a clustering procedure was applied to group the observational sites with similar temporal wind variability. A total of 20 regions are identified. These regions are strongly related to the main landforms of the IP. The wind behaviour of each region, characterized by the wind rose (WR), annual cycle (AC) and wind speed histogram, is explained as the response of each region to the main circulation types (CTs) affecting the IP. Results indicate that seasonal variability at the synoptic scale is related to the intra-annual variability of the WRs and is modulated by local features. The wind speed distribution does not always fit a unimodal Weibull distribution, a consequence of interactions at different atmospheric scales. This work contributes to a deeper understanding of the temporal and spatial variability of surface winds. Taken together, the wind database created, the methodology used and the conclusions drawn provide a benchmark for future work on wind behaviour.
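As a rough illustration of the distributional point above, the following sketch (on synthetic data) fits a unimodal Weibull to hourly wind speeds and uses a Kolmogorov-Smirnov test to flag a poor fit; it is not the paper's processing chain.

```python
# Sketch (synthetic data): fit a two-parameter Weibull to hourly wind speeds and
# check the fit, as one simple way to flag stations whose speed histogram departs
# from a unimodal Weibull.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# A bimodal synthetic "station": mixture of two Weibull wind regimes.
speeds = np.concatenate([
    stats.weibull_min.rvs(2.0, scale=3.0, size=3000, random_state=rng),
    stats.weibull_min.rvs(3.5, scale=9.0, size=2000, random_state=rng),
])

shape, loc, scale = stats.weibull_min.fit(speeds, floc=0.0)   # force location 0
ks = stats.kstest(speeds, "weibull_min", args=(shape, loc, scale))
print(f"shape={shape:.2f}  scale={scale:.2f}  KS p-value={ks.pvalue:.3g}")
# A very small p-value suggests a single Weibull is inadequate for this station.
```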
Abstract:
This paper is concerned with the analysis of zero-inflated count data when time of exposure varies. It proposes a modified zero-inflated count data model where the probability of an extra zero is derived from an underlying duration model with Weibull hazard rate. The new model is compared to the standard Poisson model with logit zero inflation in an application to the effect of treatment with thiotepa on the number of new bladder tumors.
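The following is a hedged sketch of one plausible formulation of such a model, in which the zero-inflation probability equals the probability that a latent Weibull duration exceeds the exposure time; the paper's exact parameterisation may differ.

```python
# Sketch of a zero-inflated Poisson likelihood with exposure-dependent zero inflation:
# an "extra zero" occurs if a latent Weibull duration exceeds the exposure time t,
# i.e. with probability S(t) = exp(-(t/lam)^k). Illustrative formulation only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def negloglik(params, y, t):
    log_mu, log_k, log_lam = params                 # unconstrained parameterisation
    mu, k, lam = np.exp([log_mu, log_k, log_lam])
    S = np.exp(-(t / lam) ** k)                     # P(extra zero | exposure t)
    rate = mu * t                                   # Poisson mean grows with exposure
    logpois = y * np.log(rate) - rate - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(S + (1 - S) * np.exp(-rate)),
                  np.log1p(-S) + logpois)
    return -ll.sum()

# Hypothetical simulated data: exposures and counts.
rng = np.random.default_rng(2)
t = rng.uniform(0.5, 3.0, 500)
extra_zero = rng.random(500) < np.exp(-(t / 1.5) ** 2.0)
y = np.where(extra_zero, 0, rng.poisson(1.2 * t))

fit = minimize(negloglik, x0=np.zeros(3), args=(y, t), method="Nelder-Mead")
print(np.exp(fit.x))   # estimates of (mu, k, lam)
```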
Abstract:
BACKGROUND: To date, an estimated 10% of children eligible for antiretroviral treatment (ART) receive it, and the frequency of retention in programs is unknown. We evaluated the 2-year risks of death and loss to follow-up (LTFU) of children after ART initiation in a multicenter study in sub-Saharan Africa. METHODS: Pooled analysis of routine individual data from 16 participating clinics produced overall Kaplan-Meier estimates of the probabilities of death or LTFU after ART initiation. Risk factor analysis used Weibull regression, accounting for between-cohort heterogeneity. RESULTS: The median age of 2405 children at ART initiation was 4.9 years (12% younger than 12 months), 52% were male, 70% had severe immunodeficiency, and 59% started ART with a nonnucleoside reverse transcriptase inhibitor. The 2-year risk of death after ART initiation was 6.9% (95% confidence interval [CI]: 5.9 to 8.1), independently associated with baseline severe anemia (adjusted hazard ratio [aHR]: 4.10 [CI: 2.36 to 7.13]), immunodeficiency (aHR: 2.95 [CI: 1.49 to 5.82]), and severe clinical status (aHR: 3.64 [CI: 1.95 to 6.81]); the 2-year risk of LTFU was 10.3% (CI: 8.9 to 11.9) and was higher in children with severe clinical status. CONCLUSIONS: Once on treatment, the 2-year risk of death is low, but the LTFU risk is substantial. ART is still mainly initiated at an advanced disease stage in African children, reinforcing the need for early HIV diagnosis, early initiation of ART, and procedures to increase program retention.
Abstract:
BACKGROUND Living at higher altitude was dose-dependently associated with a lower risk of ischaemic heart disease (IHD). Higher altitudes have different climatic, topographic and built environment properties than lowland regions. It is unclear whether these environmental factors mediate or confound the association between altitude and IHD. We examined how much of the altitude-IHD association is explained by variations in exposure at the place of residence to sunshine, temperature, precipitation, aspect, slope and distance to the main road. METHODS We included 4.2 million individuals aged 40-84 at baseline living in Switzerland at altitudes of 195-2971 m above sea level (ie, the full range of residence), providing 77 127 IHD deaths. Mortality data for 2000-2008, sociodemographic/economic information and coordinates of residence were obtained from the Swiss National Cohort, a longitudinal, census-based record linkage study. Environment information was modelled to residence level using Weibull regression models. RESULTS In the model not adjusted for other environmental factors, IHD mortality decreased linearly with increasing altitude, resulting in a lower risk (HR, 95% CI 0.67, 0.60 to 0.74) for those living >1500 m (vs <600 m). This association remained after adjustment for all other environmental factors (HR 0.74, 0.66 to 0.82). CONCLUSIONS The benefit of living at higher altitude was only partially confounded by variations in climate, topography and built environment. Rather, physical environment factors appear to have an independent effect and may affect cardiovascular health in a cumulative way. Inclusion of additional modifiable factors, as well as individual information on traditional IHD risk factors, in our combined environmental model could help to identify strategies for the reduction of inequalities in IHD mortality.
Abstract:
A multivariate frailty hazard model is developed for joint modeling of three correlated time-to-event outcomes: (1) local recurrence, (2) distant recurrence, and (3) overall survival. The term frailty is introduced to model population heterogeneity. The dependence is modeled by conditioning on a shared frailty that is included in the three hazard functions. Independent variables can be included in the model as covariates. Markov chain Monte Carlo methods are used to estimate the posterior distributions of the model parameters. The algorithm used in the present application is the hybrid Metropolis-Hastings algorithm, which updates all parameters simultaneously using evaluations of the gradient of the log posterior density. The performance of this approach is examined in simulation studies using exponential and Weibull distributions. We apply the proposed methods to a study of patients with soft tissue sarcoma, which motivated this research. Our results indicate that patients who received chemotherapy had better overall survival (hazard ratio 0.242, 95% CI: 0.094 - 0.564) and a lower risk of distant recurrence (hazard ratio 0.636, 95% CI: 0.487 - 0.860), but no significant improvement in local recurrence (hazard ratio 0.799, 95% CI: 0.575 - 1.054). The advantages and limitations of the proposed models, and future research directions, are discussed.
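The dependence mechanism can be illustrated with a short simulation: three Weibull event times sharing a gamma frailty, which is the structure described above. The Bayesian hybrid Metropolis-Hastings estimation itself is not reproduced here, and all parameter values are illustrative.

```python
# Sketch: simulating three correlated time-to-event outcomes that share a common
# frailty, conditional on a covariate (e.g. chemotherapy yes/no). Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
theta = 0.5                                   # frailty variance; larger => stronger dependence
w = rng.gamma(1.0 / theta, theta, size=n)     # shared frailty, mean 1, variance theta

x = rng.binomial(1, 0.5, size=n)              # hypothetical treatment indicator
betas = [-0.22, -0.45, -1.4]                  # one log-hazard-ratio per outcome (illustrative)
shapes = [1.2, 1.0, 0.8]                      # Weibull shapes: local, distant, overall survival
scales = [8.0, 6.0, 10.0]

times = []
for beta, k, lam in zip(betas, shapes, scales):
    # Conditional on w, the hazard is w * exp(beta*x) * h0(t) with Weibull baseline h0.
    # Invert the conditional survivor function with a uniform draw.
    u = rng.random(n)
    t = lam * (-np.log(u) / (w * np.exp(beta * x))) ** (1.0 / k)
    times.append(t)

times = np.column_stack(times)                # columns: local, distant, death
print(np.corrcoef(np.log(times), rowvar=False))   # shared frailty induces correlation
```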
Abstract:
Two cohorts of amyotrophic lateral sclerosis (ALS) patients were identified: an incidence-based cohort from Harris County, Texas, with 97 cases, and a clinic referral series from an ALS clinic in Houston, Texas, with 439 cases. Both were followed up to evaluate the prognosis of ALS. The overall Kaplan-Meier 3-year survival after diagnosis was similar, 0.287 for the incidence cohort and 0.313 for the referral cohort. However, the 5-year survival was much lower for the incidence cohort than for the referral cohort (0.037 vs. 0.206). The large difference in 5-year survival was thought to be the result of a stronger unfavorable effect of the prognostic factors in the incidence cohort than in the referral cohort. Cohort-specific Weibull regression models were derived to evaluate the cohort-specific prognostic factors and the survival probability with adjustment for certain prognostic factors. The major prognostic factors in both cohorts were age at diagnosis, bulbar onset, black ethnicity, and a positive family history of ALS. Female gender and simultaneous upper- and lower-extremity onset were unfavorable factors specific to the incidence cohort. In the incidence cohort the prognosis was relatively favorable for cases with a duration from onset to diagnosis longer than 4 months; in the referral cohort, however, a relatively favorable prognosis only occurred in cases with a duration from onset to diagnosis of 1 year or longer and was strongest in cases with a duration of 5 years or longer. Age at diagnosis modified the effect of bulbar onset in the incidence cohort but not in the referral cohort. The estimated survival in the presence of an unfavorable prognostic factor identified in the incidence cohort was higher for the referral cohort than for the incidence cohort. Future studies are indicated to investigate the disease heterogeneity of ALS based on its survival distribution.
Abstract:
Sizes and powers of selected two-sample tests of the equality of survival distributions are compared by simulation for small samples from unequally, randomly censored exponential distributions. The tests investigated include parametric tests (F, Score, Likelihood, Asymptotic), logrank tests (Mantel, Peto-Peto), and Wilcoxon-type tests (Gehan, Prentice). Equal-sized samples, n = 8, 16, 32, with 1000 (size) and 500 (power) simulation trials, are compared for 16 combinations of the censoring proportions 0%, 20%, 40%, and 60%. For n = 8 and 16, the Asymptotic, Peto-Peto, and Wilcoxon tests perform at the nominal 5% size, but the F, Score and Mantel tests exceeded the 5% size confidence limits for one-third of the censoring combinations. For n = 32, all tests showed proper size, with the Peto-Peto test the most conservative in the presence of unequal censoring. Powers of all tests are compared for exponential hazard ratios of 1.4 and 2.0. There is little difference in the power characteristics of the tests within the classes of tests considered. The Mantel test showed 90% to 95% power efficiency relative to the parametric tests. Wilcoxon-type tests have the lowest relative power but are robust to differential censoring patterns. A modified Peto-Peto test shows power comparable to the Mantel test. For n = 32, a specific Weibull-exponential comparison of crossing survival curves suggests that the relative powers of logrank and Wilcoxon-type tests depend on the scale parameter of the Weibull distribution. Wilcoxon-type tests appear more powerful than logrank tests in the case of late-crossing survival curves and less powerful for early-crossing ones. Guidelines for the appropriate selection of two-sample tests are given.
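A minimal sketch of this type of simulation follows: unequally, randomly censored exponential samples and a hand-coded log-rank (Mantel) statistic used to estimate the size of the test. Sample size, censoring rates and trial counts are illustrative rather than the study's settings.

```python
# Sketch: size of the log-rank (Mantel) test under unequal random censoring of two
# exponential samples with equal hazards.
import numpy as np
from scipy.stats import chi2

def logrank_chi2(time, event, group):
    order = np.argsort(time)
    time, event, group = time[order], event[order], group[order]
    O1 = E1 = V = 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        O1 += d1
        E1 += d * n1 / n
        if n > 1:
            V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (O1 - E1) ** 2 / V

def simulate_size(n=16, cens_rate=(0.25, 1.0), trials=1000, alpha=0.05, seed=4):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(trials):
        t1, c1 = rng.exponential(1.0, n), rng.exponential(1.0 / cens_rate[0], n)
        t2, c2 = rng.exponential(1.0, n), rng.exponential(1.0 / cens_rate[1], n)
        time = np.concatenate([np.minimum(t1, c1), np.minimum(t2, c2)])
        event = np.concatenate([(t1 <= c1), (t2 <= c2)]).astype(int)
        group = np.repeat([0, 1], n)
        if logrank_chi2(time, event, group) > chi2.ppf(1 - alpha, df=1):
            rejections += 1
    return rejections / trials

print(simulate_size())   # should be near the nominal 0.05 if the test holds its size
```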
Abstract:
The determination of the size as well as the power of a test is a vital part of clinical trial design. This research focuses on the simulation of clinical trial data with time-to-event as the primary outcome. It investigates the impact of different recruitment patterns and time-dependent hazard structures on the size and power of the log-rank test. A non-homogeneous Poisson process is used to simulate entry times according to the different accrual patterns. A Weibull distribution is employed to simulate survival times according to the different hazard structures. The current study uses simulation methods to evaluate the effect of different recruitment patterns on size and power estimates of the log-rank test. The size of the log-rank test is estimated by simulating survival times with identical hazard rates in the treatment and control arms of the study, resulting in a hazard ratio of one. Powers of the log-rank test at specific values of the hazard ratio (≠1) are estimated by simulating survival times with different, but proportional, hazard rates for the two arms of the study. Different shapes (constant, decreasing, or increasing) of the hazard function of the Weibull distribution are also considered to assess the effect of the hazard structure on the size and power of the log-rank test.
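The following sketch shows how one such trial could be generated: entry times from a non-homogeneous Poisson process simulated by thinning, and Weibull survival times with a prescribed hazard ratio between arms. The accrual shape, follow-up and parameter values are illustrative only.

```python
# Sketch: one simulated trial with staggered entry (non-homogeneous Poisson accrual
# via thinning) and proportional-hazards Weibull survival times.
import numpy as np

rng = np.random.default_rng(5)

def nhpp_entry_times(rate_fn, rate_max, horizon):
    """Thinning: accept candidate arrivals of a rate_max homogeneous process."""
    t, out = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > horizon:
            return np.array(out)
        if rng.random() < rate_fn(t) / rate_max:
            out.append(t)

accrual_years = 2.0
entry = nhpp_entry_times(lambda t: 50.0 * t / accrual_years, 50.0, accrual_years)  # ramp-up accrual
n = entry.size
arm = rng.binomial(1, 0.5, n)                        # 0 = control, 1 = treatment

shape, scale, hr = 1.5, 3.0, 0.7                     # Weibull hazard; HR < 1 favours treatment
scale_arm = np.where(arm == 1, scale * hr ** (-1.0 / shape), scale)
surv = scale_arm * rng.weibull(shape, n)             # proportional-hazards Weibull times

followup_end = 4.0                                   # administrative censoring (years)
obs_time = np.minimum(surv, followup_end - entry)
event = (surv <= followup_end - entry).astype(int)
# Size/power would then be estimated by applying the log-rank test to many such trials.
print(n, event.mean())
```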
Abstract:
Conservative procedures in low-dose risk assessment are used to set safety standards for known or suspected carcinogens. However, the assumptions upon which the methods are based and the effects of these methods are not well understood. To minimize the number of false negatives and to reduce the cost of bioassays, animals are given very high doses of potential carcinogens. Results must then be extrapolated to much smaller doses to set safety standards for risks such as one per million. There are a number of competing methods that add a conservative safety factor into these calculations. A method of quantifying the conservatism of these methods was described and tested on eight procedures used in setting low-dose safety standards. The results of these procedures were compared by computer simulation and by the use of data from a large-scale animal study. The method consisted of determining a "true safe dose" (tsd) according to an assumed underlying model. If one assumes that Y = the probability of cancer = P(d), a known mathematical function of the dose, then by setting Y to some predetermined acceptable risk one can solve for d, the model's "true safe dose". Simulations were generated, assuming a binomial distribution, for an artificial bioassay. The eight procedures were then used to determine a "virtual safe dose" (vsd) that estimates the tsd, assuming a risk of one per million. A ratio R = (tsd - vsd)/vsd was calculated for each "experiment" (simulation). The mean R of 500 simulations and the probability that R < 0 were used to measure the over- and under-conservatism of each procedure. The eight procedures included Weil's method, Hoel's method, the Mantel-Bryan method, the improved Mantel-Bryan, Gross's method, fitting a one-hit model, Crump's procedure, and applying Rai and Van Ryzin's method to a Weibull model. None of the procedures performed uniformly well for all types of dose-response curves. When the data were linear, the one-hit model, Hoel's method, or the Gross-Mantel method worked reasonably well. However, when the data were non-linear, these same methods were overly conservative. Crump's procedure and the Weibull model performed better in these situations.
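The tsd/vsd comparison can be illustrated with a bare-bones sketch that assumes a one-hit model as the truth, simulates a binomial bioassay, and refits the same model; the procedures evaluated in the study add confidence limits and safety factors that are omitted here.

```python
# Sketch of the comparison metric: derive the true safe dose (tsd) from a "true"
# one-hit model at risk 1e-6, simulate a high-dose bioassay, refit the model,
# compute the virtual safe dose (vsd), and report R = (tsd - vsd)/vsd.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
risk = 1e-6

# "True" model: one-hit, P(d) = 1 - exp(-beta*d).
beta_true = 0.02
tsd = -np.log(1.0 - risk) / beta_true

# Simulated bioassay at high doses, 50 animals per dose group.
doses = np.array([0.0, 25.0, 50.0, 100.0])
n_per = 50
tumours = rng.binomial(n_per, 1.0 - np.exp(-beta_true * doses))

def negloglik(beta):
    pd = np.clip(1.0 - np.exp(-beta * doses), 1e-12, 1 - 1e-12)
    return -(tumours * np.log(pd) + (n_per - tumours) * np.log(1 - pd)).sum()

beta_hat = minimize_scalar(negloglik, bounds=(1e-8, 1.0), method="bounded").x
vsd = -np.log(1.0 - risk) / beta_hat
print(tsd, vsd, (tsd - vsd) / vsd)   # R > 0 means the fitted safe dose was conservative here
```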
Abstract:
The study of the reliability of components and systems is of great importance in several fields of engineering, and particularly in computer science. When analysing the lifetimes of the sampled items, one must take into account the items that do not fail during the experiment, as well as those that fail from causes other than the one under study. New sampling schemes arise to cover these cases; the most general of them, censored sampling, is the one considered in this work. In this scheme both the time until the component fails and the censoring time are random variables. Under the hypothesis that both times are exponentially distributed, Professor Hurt studied the asymptotic behaviour of the maximum likelihood estimator of the reliability function. Bayesian methods are attractive in reliability studies because they incorporate the prior information that is normally available in real problems. We therefore consider two Bayes estimators of the reliability of an exponential distribution: the mean and the mode of the posterior distribution. We calculate the asymptotic expansion of the mean, variance and mean square error of both estimators when the censoring distribution is exponential, and we also obtain the asymptotic distribution of the estimators for the more general case in which the censoring distribution is Weibull. Two types of large-sample confidence intervals are proposed for each estimator. The results are compared with those of the maximum likelihood estimator and with those of two non-parametric estimators, the product-limit and a Bayesian one, and one of our estimators shows the best behaviour. Finally, we show by simulation that our estimators are robust against the assumed censoring distribution and that one of the proposed confidence intervals performs well in small-sample situations; this study also confirms the better behaviour of one of our estimators.
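A simplified numerical sketch of this setting follows: exponential lifetimes, exponential censoring, a Gamma prior on the failure rate, and the posterior mean and mode of the reliability at a fixed time. Prior and data values are illustrative, and the derivation is not the one developed in the thesis.

```python
# Sketch: exponential lifetimes with rate theta, independent exponential censoring,
# Gamma(a0, b0) prior on theta. With d observed failures and total time on test T,
# the posterior of theta is Gamma(a0 + d, rate = b0 + T). The two Bayes estimators
# of R(t0) = exp(-theta*t0) are the mean and the mode of the induced posterior of R.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
theta_true, cens_rate, n, t0 = 0.5, 0.3, 40, 1.0
life = rng.exponential(1.0 / theta_true, n)
cens = rng.exponential(1.0 / cens_rate, n)
obs, delta = np.minimum(life, cens), (life <= cens)

a0, b0 = 1.0, 1.0                      # Gamma prior on theta (rate parameterisation)
a_post, b_post = a0 + delta.sum(), b0 + obs.sum()

# Posterior mean of R(t0) = E[exp(-theta*t0)]: the posterior Gamma MGF evaluated at -t0.
mean_R = (b_post / (b_post + t0)) ** a_post

# Posterior mode of R(t0): maximise the transformed density of R = exp(-theta*t0).
def neg_log_dens(r):
    u = -np.log(r)                     # u = theta*t0 > 0
    return -((a_post - 1) * np.log(u) - (b_post / t0) * u + u)
mode_R = minimize_scalar(neg_log_dens, bounds=(1e-9, 1 - 1e-9), method="bounded").x

print(mean_R, mode_R, np.exp(-theta_true * t0))   # compare with the true reliability
```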
Abstract:
Quasi-monocrystalline silicon wafers have appeared as a critical innovation in the PV industry, combining the most favourable characteristics of the conventional substrates: the higher solar cell efficiencies of monocrystalline Czochralski-Si (Cz-Si) wafers and the lower cost and full square shape of the multicrystalline ones. However, the quasi-mono ingot growth can lead to a different defect structure than the typical Cz-Si process. Thus, the mechanical properties of the new quasi-mono wafers have been studied for the first time, comparing their strength with that of both Cz-Si mono and typical multicrystalline materials. The study has been carried out employing the four-line bending test and simulating the tests by means of FE models. For the analysis, failure stresses were fitted to a three-parameter Weibull distribution. High mechanical strength was found in all cases. Interestingly, even the low-quality quasi-mono wafers did not exhibit strength values critical for the PV industry, despite their noticeable density of extended defects.
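A minimal sketch of the distribution-fitting step, with synthetic stress values standing in for the failure stresses obtained from the bending tests and FE models:

```python
# Sketch: fitting failure stresses to a three-parameter Weibull (shape, threshold/location,
# scale) with scipy. The stress values here are synthetic and purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
stress_mpa = stats.weibull_min.rvs(4.0, loc=80.0, scale=120.0, size=60, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(stress_mpa)   # 3-parameter MLE fit
print(f"shape (Weibull modulus) = {shape:.2f}, threshold = {loc:.1f} MPa, scale = {scale:.1f} MPa")

# Failure probability at a given stress, under the fitted distribution:
print(stats.weibull_min.cdf(200.0, shape, loc, scale))
```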
Abstract:
In the chapter devoted to site quality, an exhaustive literature review is carried out as a first stage, from which it is confirmed that the most precise methods for determining site quality are, in general terms, those based on the height-age relationship. On the basis of the available data and as a result of this review, three methods are tested: one that studies the evolution of height as a function of diameter (MEYER) and two that do so as a function of age (BAILEY and CLUTTER, and GARCÍA). To test these methods, data from production plots distributed across the three main mountain systems of Spain (Iberian, Central and Pyrenean) are used. MEYER's model requires a prior classification of the plots according to site class, and the fits are performed for each class, yielding four polymorphic curves. The equation describing this model is H = 1.3 + s(1 - e^(-bD)). The BAILEY and CLUTTER model also generates a system of polymorphic curves and is built from the relationship between the logarithm of height and the inverse of age: log H = a + b_i(1/AC). This method requires a grouping or stratification of the plots into site classes. Finally, GARCÍA's model, based on the stochastic differential equation dH^c(t) = b(a^c - H^c(t))dt + σ(t)dw(t), is tested with the following parametric structure: a, global; b, local; c, global; σ0 = 0.00; σ, global; t0 = 0.00; H0 = 0.0. Subsequently, a residual analysis is performed in which the Durbin-Watson test is applied to detect serial correlation. The residuals show an approximately "comet" shape, which is common in time series when the magnitude analysed (in our case, dominant height) accumulates over time. This test is not conclusive, in the sense that it does not clarify which model is the most appropriate. Finally, the models are validated using data from the thinning plots, based on sample trees subjected to stem analysis. This validation leads to the conclusion that GARCÍA's model is the one that best explains the evolution of height growth.

In the chapter devoted to diameter distributions, the aim is to model these distributions through state variables. As a first stage, the following functions are tested on 45 plots and three inventories: Normal, Gamma, two-parameter lognormal, three-parameter lognormal, Beta and Weibull. Using chi-square as an estimator of goodness of fit, the Weibull function is found to be the one that best describes the diameter distribution of our plots. The goodness of this fit is then checked by the Kolmogorov-Smirnov test, which shows that the fit is acceptable in 99% of the cases. The next step is the recovery of the parameters of the Weibull function: a, b and c. At this stage the information is stratified, and analyses of variance (ANOVA) and TUKEY mean tests are performed; as a result, the work continues by treatment and by inventory, with the differences between sites absorbed by the dominant height. For the parameter recovery, the state variables that define our plots are used: age, density, dominant height, quadratic mean diameter and basal area.

The method followed is to obtain regression equations, with the state variables as independent variables, to predict the values of the aforementioned parameters. The equations are obtained through different procedures (STEPWISE, ENTER and RSQUARE), which provides abundant material for their selection. The selection of the best equations is based on two well-known statistics: the adjusted coefficient of determination (adjusted R²) and MALLOWS' Cp. These criteria are considered jointly, also requiring that the number of parameters be in accordance with the number of data. The selection process yields 36 regression equations, which are subsequently validated in two ways. On the one hand, the value of each parameter is calculated with the corresponding equation and compared with the value of the parameters obtained in the Weibull fit. From this stage it can be deduced that the distortion suffered by the parameters when using regression equations is relatively low, the differences being due to errors arising in the construction of the models. On the other hand, with the available plots, the model is validated by carrying out the reverse process: from their easily measured state variables, the values of the parameters defining the Weibull function are obtained, and the diameter distribution of each plot is thus reconstructed. The results indicate that the fits are acceptable at the 1% level in the case of three plots.
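A brief sketch of the diameter-distribution step on synthetic data: a three-parameter Weibull is fitted to one plot and the fit is checked with a Kolmogorov-Smirnov test; the regression-based parameter recovery is only indicated in a comment.

```python
# Sketch: fit a three-parameter Weibull to one plot's diameters and check the fit.
# In the thesis, the fitted parameters (a, b, c) are then regressed on stand variables
# (age, density, dominant height, quadratic mean diameter, basal area). Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
diameters_cm = stats.weibull_min.rvs(2.5, loc=7.0, scale=14.0, size=120, random_state=rng)

c, a, b = stats.weibull_min.fit(diameters_cm)          # c = shape, a = location, b = scale
ks = stats.kstest(diameters_cm, "weibull_min", args=(c, a, b))
print(f"a={a:.2f}  b={b:.2f}  c={c:.2f}  KS p-value={ks.pvalue:.3f}")
# Parameter recovery would then regress a, b, c on the stand state variables across all
# plots, so that each plot's diameter distribution can be rebuilt from easily measured data.
```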
Abstract:
Natural regeneration in managed stone pine (Pinus pinea L.) forests in the Spanish Northern Plateau is not achieved successfully under current silviculture practices, which is a main concern for forest managers. We modelled spatio-temporal features of primary dispersal to test whether (a) present low stand densities constrain natural regeneration success and (b) seed release is a climate-controlled process. The present study is based on data collected from a 6-year seed-trap experiment considering different regeneration felling intensities. From a spatial perspective, we fitted alternative established kernels under different data distribution assumptions to obtain a spatial model able to predict P. pinea seed rain. Because of the umbrella-like crown of P. pinea, the models were adapted to account for the crown effect through a correction of the distances between potential seed arrival locations and seed sources. In addition, individual tree fecundity was assessed independently of existing models, improving the stability of parameter estimation. Seed rain simulation enabled the calculation of seed dispersal indices for diverse silvicultural regeneration treatments. The best-fitting spatial model (Weibull kernel, Poisson assumption) predicted a highly clumped dispersal pattern that resulted in a proportion of gaps where no seed arrival is expected (dispersal limitation) between 0.25 and 0.30 for intermediate-intensity regeneration fellings and over 0.50 for intense fellings. To describe the temporal pattern, the proportion of seeds released during monthly intervals was modelled as a function of climate variables (rainfall events) through a linear model that accounted for temporal autocorrelation, whereas cone opening took place above a temperature threshold. Our findings suggest the application of less intensive regeneration fellings, to be carried out after years of successful seedling establishment and, seasonally, subsequent to the main rainfall period (late fall). This schedule would avoid dispersal limitation and would allow for a complete seed release. These modifications to present silviculture practices would produce a more efficient seed shadow in managed stands.
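A sketch of the general structure of such a dispersal model (a Weibull distance kernel spread over the circle at each distance, with Poisson trap counts); the crown-distance correction and the fitted parameter values of the paper are not reproduced.

```python
# Sketch: expected seed density at distance r from a tree equals its fecundity times a
# Weibull distance kernel spread over the circle of radius r; trap counts are Poisson
# with that mean. All parameter values and the trap layout are illustrative.
import numpy as np
from scipy.stats import weibull_min, poisson

def expected_seeds(r, fecundity, trap_area, b, c):
    """Mean seeds in a trap of area trap_area at distance r (2-D Weibull kernel)."""
    radial_pdf = weibull_min.pdf(r, c, scale=b)       # density of dispersal distance
    return fecundity * trap_area * radial_pdf / (2.0 * np.pi * r)

rng = np.random.default_rng(10)
r = np.array([2.0, 5.0, 10.0, 20.0, 40.0])            # trap distances from one tree (m)
mu = expected_seeds(r, fecundity=5000.0, trap_area=0.25, b=12.0, c=1.3)
counts = rng.poisson(mu)                              # simulated trap counts
print(np.c_[r, mu.round(2), counts])
# A fit of (b, c) would maximise the summed poisson.logpmf(observed_counts, mu) over traps.
```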
Abstract:
Production of back-contact solar cells requires the generation of holes in the wafers so that both the positive and negative contacts can be placed on the back side of the cell. This drilling process weakens the wafer mechanically, due to the presence of the holes and the damage, such as microcracks, introduced during the process. In this study, several chemical processes have been applied to drilled wafers in order to eliminate or reduce the damage generated during this fabrication step. The treatments analyzed are the following: alkaline etching for 1, 3 and 5 minutes, acid etching for 2 and 4 minutes, and texturisation. To determine the mechanical strength of the samples, a mechanical study has been carried out by testing the samples with the Ring-on-Ring bending test and obtaining the stress state at the moment of failure by FE simulation. Finally, the results obtained for each treatment were fitted to a three-parameter Weibull distribution.