985 results for WEIBULL-DISTRIBUTION
Abstract:
AIM: The purpose of this randomized split-mouth clinical trial was to determine the active tactile sensibility between single-tooth implants and opposing natural teeth and to compare it with the tactile sensibility of pairs of natural teeth on the contralateral side in the same mouth (intraindividual comparison). MATERIAL AND METHODS: The hypothesis was that the active tactile sensibilities of the implant side and control side are equivalent. Sixty-two subjects (n=36 from Bonn, n=26 from Bern) with single-tooth implants (22 anterior and 40 posterior dental implants) were asked to bite on narrow copper foil strips varying in thickness (5-200 µm) and to decide whether or not they were able to identify a foreign body between their teeth. Active tactile sensibility was defined as the 50% threshold of correct answers estimated by means of the Weibull distribution. RESULTS: The results obtained for the interocclusal perception sensibility differed between subjects far more than they differed between natural teeth and implants in the same individual [implant/natural tooth: 16.7 ± 11.3 µm (0.6-53.1 µm); natural tooth/natural tooth: 14.3 ± 10.6 µm (0.5-68.2 µm)]. The intraindividual differences only amounted to a mean value of 2.4 ± 9.4 µm (-15.1 to 27.5 µm). Our statistical calculations showed that the active tactile sensibility of single-tooth implants, both in the anterior and posterior region of the mouth, in combination with a natural opposing tooth is similar to that of pairs of opposing natural teeth (double t-test, equivalence margin: ±8 µm, P<0.001, power >80%). Hence, the implants could be integrated in the stomatognathic control circuit.
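As a minimal sketch of the threshold definition used above, the 50% point of a Weibull psychometric function has a closed form. The shape and scale values below are purely illustrative, not the study's fitted parameters:

```python
import math

def detection_prob(thickness_um, shape, scale):
    """Weibull psychometric function: probability of detecting a foil
    of the given thickness (µm), for hypothetical shape/scale values."""
    return 1.0 - math.exp(-((thickness_um / scale) ** shape))

def threshold_50(shape, scale):
    """Thickness at which the detection probability is exactly 50%:
    scale * ln(2)**(1/shape)."""
    return scale * math.log(2.0) ** (1.0 / shape)

# Illustrative parameters only.
t50 = threshold_50(1.5, 20.0)
```

By construction, plugging the 50% threshold back into the psychometric function recovers a detection probability of one half.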
Abstract:
The acoustic emission (AE) technique, a non-intrusive and nondestructive evaluation technique, acquires and analyzes the signals emitted by deformation or fracture of materials and structures under service loading. The AE technique has been successfully applied to damage detection in various materials such as metals, alloys, concrete, polymers and other composite materials. In this study, the AE technique was used to detect crack behavior within concrete specimens under mechanical and environmental frost loadings. The instrumentation of the AE system used in this study includes a low-frequency AE sensor, a computer-based data acquisition device and a preamplifier linking the AE sensor to the data acquisition device. The AE system was purchased from Mistras Group. The AE technique was applied to detect damage in the following laboratory tests: the pencil lead test, the mechanical three-point single-edge notched beam bending (SEB) test, and the freeze-thaw damage test. First, the pencil lead test was conducted to verify the attenuation of AE signals through concrete materials, and the value of the attenuation was quantified; the obtained signals also indicated that the AE system was properly set up to detect damage in concrete. Second, the SEB test on a lab-prepared concrete beam was conducted employing a Mechanical Testing System (MTS) together with the AE system. The cumulative AE events and the measured loading curves, both using the crack-tip opening displacement (CTOD) as the horizontal coordinate, were plotted. The detected AE events were found to be qualitatively correlated with the global force-displacement behavior of the specimen. A Weibull distribution was proposed to quantitatively describe the rupture probability density function.
Linear regression analysis was conducted to calibrate the Weibull distribution parameters against the detected AE signals and to predict the rupture probability as a function of CTOD for the specimen. Finally, controlled concrete freeze-thaw cyclic tests were designed, and the AE technique was planned as the means to investigate the internal frost damage process of concrete specimens.
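Calibrating Weibull parameters by linear regression typically works on the linearized scale ln(-ln(1-F)) = k·ln(x) - k·ln(c). A self-contained sketch of this standard construction (plotting positions and synthetic data are assumptions, not the thesis's actual calibration data):

```python
import math
import random

def fit_weibull_lr(samples):
    """Least-squares fit of a two-parameter Weibull on the linearized
    scale ln(-ln(1 - F)) = k*ln(x) - k*ln(c), using median-rank
    plotting positions for the empirical CDF F."""
    xs = sorted(samples)
    n = len(xs)
    X, Y = [], []
    for i, x in enumerate(xs, start=1):
        F = (i - 0.3) / (n + 0.4)          # median-rank estimate of F(x)
        X.append(math.log(x))
        Y.append(math.log(-math.log(1.0 - F)))
    mx, my = sum(X) / n, sum(Y) / n
    sxx = sum((a - mx) ** 2 for a in X)
    sxy = sum((a - mx) * (b - my) for a, b in zip(X, Y))
    shape = sxy / sxx                       # slope = k
    scale = math.exp(mx - my / shape)       # intercept = -k*ln(c)
    return shape, scale

# Synthetic check with known parameters (illustrative only).
random.seed(1)
data = [random.weibullvariate(10.0, 2.0) for _ in range(1000)]
k_hat, c_hat = fit_weibull_lr(data)
```

On synthetic data with shape 2 and scale 10, the regression recovers both parameters closely.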
Abstract:
Truncated distributions of the exponential family play an important role in simulation models. This paper discusses the truncated Weibull distribution specifically. The parameters of the truncated distribution are estimated by the Maximum Likelihood Estimation method, alone or combined with the expressions for the expectation and variance. After the distribution is fitted, goodness-of-fit tests (the Chi-Square test and the Kolmogorov-Smirnov test) are executed to rule out rejected hypotheses. Finally, the distributions are integrated into various simulation models, e.g. a shipment consolidation model, to compare the influence of the truncated and original versions of the Weibull distribution on the model.
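To feed a truncated Weibull into a simulation model, inverse-transform sampling on the truncated CDF is a common device. A hedged sketch (the truncation point and parameters are illustrative, not taken from the paper):

```python
import math
import random

def rtrunc_weibull(shape, scale, upper, rng=random):
    """Draw from a Weibull(shape, scale) right-truncated at `upper`,
    by inverse-transform sampling: restrict the uniform draw to
    [0, F(upper)) and invert the untruncated CDF."""
    F_upper = 1.0 - math.exp(-((upper / scale) ** shape))
    u = rng.random() * F_upper            # uniform on [0, F(upper))
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

# Illustrative draws from a Weibull(1.5, 4) truncated at 6.
random.seed(7)
draws = [rtrunc_weibull(1.5, 4.0, 6.0) for _ in range(500)]
```

Every draw lands below the truncation point by construction, which is the property a shipment-consolidation or similar model would rely on.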
Abstract:
Serial correlation of extreme midlatitude cyclones observed at the storm track exits is explained by deviations from a Poisson process. To model these deviations, we apply fractional Poisson processes (FPPs) to extreme midlatitude cyclones, which are defined by the 850 hPa relative vorticity of the ERA-Interim reanalysis during boreal winter (DJF) and summer (JJA) seasons. Extremes are defined by a 99% quantile threshold in the grid-point time series. In general, FPPs are based on long-term memory and lead to non-exponential return time distributions. The return times are described by a Weibull distribution to approximate the Mittag–Leffler function in the FPPs. The Weibull shape parameter yields a dispersion parameter that agrees with results found for midlatitude cyclones. The memory of the FPP, which is determined by detrended fluctuation analysis, provides an independent estimate for the shape parameter. Thus, the analysis exhibits a concise framework of the deviation from Poisson statistics (by a dispersion parameter), non-exponential return times and memory (correlation) on the basis of a single parameter. The results have potential implications for the predictability of extreme cyclones.
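The link between the Weibull shape parameter and over- or under-dispersion of return times can be made concrete through the first two moments. A sketch (the use of the coefficient of variation as the dispersion measure is an assumption for illustration):

```python
import math

def weibull_moments(shape, scale):
    """Mean and variance of a Weibull(shape, scale) return-time
    distribution, expressed via the gamma function."""
    g1 = math.gamma(1.0 + 1.0 / shape)
    g2 = math.gamma(1.0 + 2.0 / shape)
    mean = scale * g1
    var = scale ** 2 * (g2 - g1 ** 2)
    return mean, var

def dispersion(shape, scale):
    """Coefficient of variation of return times: exactly 1 for an
    exponential (Poisson) process, >1 for clustered (over-dispersed)
    extremes, <1 for more regular recurrence."""
    mean, var = weibull_moments(shape, scale)
    return math.sqrt(var) / mean
```

A shape parameter below one thus signals clustering relative to a Poisson process, independently of the scale.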
Abstract:
This work presents a characterization of the surface wind climatology over the Iberian Peninsula (IP). For this purpose, an unprecedented observational database has been developed. The database covers a period of 6 years (2002-2007) and consists of hourly wind speed and wind direction data recorded at 514 automatic weather stations. The original observations underwent a quality control process to remove gross errors from the data set. In the first step, the annual and seasonal mean behaviour of the wind field is presented. This analysis shows the high spatial variability of the wind as a result of its interaction with the main orographic features of the IP. In order to simplify the characterization of the wind, a clustering procedure was applied to group the observational sites with similar temporal wind variability. A total of 20 regions are identified, strongly related to the main landforms of the IP. The wind behaviour of each region, characterized by the wind rose (WR), annual cycle (AC) and wind speed histogram, is explained as the response of each region to the main circulation types (CTs) affecting the IP. Results indicate that the seasonal variability of the synoptic scale is related to intra-annual variability and modulated by local features in the WR variability. The wind speed distribution does not always fit a unimodal Weibull distribution, as a consequence of interactions at different atmospheric scales. This work contributes to a deeper understanding of the temporal and spatial variability of surface winds. Taken together, the wind database created, the methodology used and the conclusions drawn are a benchmark for future works on wind behaviour.
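Fitting a unimodal Weibull to wind speed histograms is often done with a moment-based shortcut (a Justus-type approximation of the shape from the coefficient of variation). A sketch under that assumption, checked on synthetic winds rather than the paper's station data:

```python
import math
import random
import statistics as stats

def fit_wind_weibull(speeds):
    """Moment-based Weibull fit commonly used for wind data:
    shape k from the empirical coefficient of variation
    (k ~ (sigma/mu)**-1.086, an empirical approximation),
    scale c from the mean via the gamma function."""
    mu = stats.fmean(speeds)
    sigma = stats.stdev(speeds)
    k = (sigma / mu) ** -1.086
    c = mu / math.gamma(1.0 + 1.0 / k)
    return k, c

# Synthetic wind speeds with known parameters (illustrative only).
random.seed(3)
winds = [random.weibullvariate(8.0, 2.0) for _ in range(2000)]
k_hat, c_hat = fit_wind_weibull(winds)
```

When the true distribution is actually bimodal, as the abstract notes for some regions, such a single-Weibull fit is exactly what breaks down.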
Abstract:
The determination of the size and power of a test is a vital part of clinical trial design. This research focuses on the simulation of clinical trial data with time-to-event as the primary outcome. It investigates the impact of different recruitment patterns and time-dependent hazard structures on the size and power of the log-rank test. A non-homogeneous Poisson process is used to simulate entry times according to the different accrual patterns. A Weibull distribution is employed to simulate survival times according to the different hazard structures. The current study utilizes simulation methods to evaluate the effect of different recruitment patterns on size and power estimates of the log-rank test. The size of the log-rank test is estimated by simulating survival times with identical hazard rates between the treatment and the control arm, resulting in a hazard ratio of one. The power of the log-rank test at specific values of the hazard ratio (≠1) is estimated by simulating survival times with different, but proportional, hazard rates for the two arms of the study. Different shapes (constant, decreasing, or increasing) of the hazard function of the Weibull distribution are also considered to assess the effect of the hazard structure on the size and power of the log-rank test.
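The simulation step described above — Weibull survival times under proportional hazards — can be sketched with inverse-transform sampling; with hazard multiplied by HR, the survival function becomes S(t) = exp(-HR·(t/scale)^shape). Parameter values below are hypothetical:

```python
import math
import random

def sim_survival(n, shape, scale, hazard_ratio, rng=random):
    """Simulate n Weibull survival times under proportional hazards:
    S(t) = exp(-HR * (t/scale)**shape), inverted at a uniform draw."""
    times = []
    for _ in range(n):
        u = 1.0 - rng.random()            # uniform on (0, 1]
        t = scale * (-math.log(u) / hazard_ratio) ** (1.0 / shape)
        times.append(t)
    return times

random.seed(42)
control = sim_survival(2000, 1.5, 10.0, 1.0)  # HR = 1: size of the test
treated = sim_survival(2000, 1.5, 10.0, 2.0)  # HR = 2: a power scenario
```

A shape of 1.5 gives an increasing hazard; shape 1 recovers the constant-hazard (exponential) case and shape below 1 a decreasing hazard, matching the three structures the study considers.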
Abstract:
The study of the reliability of components and systems is of great importance in several fields of engineering, and very particularly in computer science. When analysing the lifetimes of the sampled items, one must take into account the items that do not fail during the experiment, as well as those that fail for reasons other than the one under study. New sampling schemes have therefore arisen to cover these cases. We consider the most general of them, censored sampling, in which both the time to failure of a component and the censoring time are random variables. Under the assumption that both times are exponentially distributed, Professor Hurt studied the asymptotic behaviour of the maximum likelihood estimator of the reliability function. Bayesian methods appear attractive for reliability studies because they incorporate into the analysis the prior information that is usually available in real problems. We therefore consider two Bayes estimators of the reliability of an exponential distribution: the mean and the mode of the posterior distribution. We have calculated the asymptotic expansion of the mean, variance and mean square error of both estimators when the censoring distribution is exponential, and have also obtained the asymptotic distribution of the estimators in the more general case of a Weibull censoring distribution. Two types of large-sample confidence intervals are proposed for each estimator. The results are compared with those of the maximum likelihood estimator and with those of two nonparametric estimators, the product-limit and a Bayesian one, with one of our estimators showing superior behaviour. Finally, we have verified by simulation that our estimators are robust against the assumed censoring distribution, and that one of the proposed confidence intervals remains valid in small samples; this study also confirms the better behaviour of one of our estimators.
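For the exponential-lifetime case, the posterior-mean reliability estimator under censoring has a closed form when a conjugate gamma prior is placed on the failure rate. This is a standard construction sketched under that assumption (the hyperparameters are illustrative, and the thesis's exact prior may differ):

```python
def bayes_reliability(failures, censored, t, a=1.0, b=1.0):
    """Posterior mean of R(t) = exp(-lam*t) for exponential lifetimes
    with randomly censored observations, under a Gamma(a, b) prior on
    the rate lam. With d failures and total time on test T, the
    posterior is Gamma(a + d, b + T), and the gamma MGF gives
    E[exp(-lam*t)] = ((b + T) / (b + T + t)) ** (a + d)."""
    d = len(failures)
    T = sum(failures) + sum(censored)
    return ((b + T) / (b + T + t)) ** (a + d)
```

Censored observations contribute their observed time to T but do not count as failures, which is exactly how random censoring enters the exponential likelihood.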
Abstract:
Quasi-monocrystalline silicon wafers have emerged as a critical innovation in the PV industry, combining the most favourable characteristics of the conventional substrates: the higher solar cell efficiencies of monocrystalline Czochralski-Si (Cz-Si) wafers and the lower cost and full square shape of multicrystalline ones. However, quasi-mono ingot growth can lead to a different defect structure than the typical Cz-Si process. Thus, the properties of the brand-new quasi-mono wafers have been studied for the first time from a mechanical point of view, comparing their strength with that of both Cz-Si mono and typical multicrystalline materials. The study was carried out employing the four-line bending test and simulating it by means of FE models. For the analysis, failure stresses were fitted to a three-parameter Weibull distribution. High mechanical strength was found in all cases. Interestingly, the low-quality quasi-mono wafers did not exhibit strength values critical for the PV industry, despite their noticeable density of extended defects.
Abstract:
Production of back-contact solar cells requires generating holes in the wafers to place both positive and negative contacts on the back side of the cell. This drilling process weakens the wafer mechanically, owing to the presence of the holes and the damage, in the form of microcracks, introduced during the process. In this study, several chemical processes were applied to drilled wafers in order to eliminate or reduce the damage generated during this fabrication step. The treatments analyzed are the following: alkaline etching for 1, 3 and 5 minutes, acid etching for 2 and 4 minutes, and texturization. To determine the mechanical strength of the samples, a mechanical study was carried out testing the samples with the ring-on-ring bending test and obtaining the stress state at the moment of failure by FE simulation. Finally, the results obtained for each treatment were fitted to a three-parameter Weibull distribution.
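Both wafer-strength studies above fit failure stresses to a three-parameter Weibull distribution, whose extra location parameter acts as a threshold stress below which no failure occurs. A minimal sketch of that distribution (parameter values in the assertions are hypothetical, not fitted wafer data):

```python
import math

def weibull3_cdf(stress, location, scale, shape):
    """Failure probability at a given stress under a three-parameter
    Weibull distribution: zero below the location (threshold) stress,
    1 - exp(-((stress - location)/scale)**shape) above it."""
    if stress <= location:
        return 0.0
    return 1.0 - math.exp(-(((stress - location) / scale) ** shape))
```

The location parameter is what distinguishes this model from the two-parameter fits common elsewhere in this listing: it encodes a guaranteed minimum strength.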
Abstract:
This study examines in depth the estimation of forest attributes from LiDAR data in the Fuenfría Valley (Cercedilla, Madrid). Two LiDAR flights, from 2002 and 2011, were available, and a field inventory of 60 plots was carried out in the winter of 2013. First, six stand attributes (volume, basal area, total biomass, dominant height, density and quadratic mean diameter) were estimated for 2013, at the pixel level as well as at the stand and forest levels. Multiple linear regression models were built that estimated these attributes accurately. Second, different methods were tested for estimating the diameter distribution: on the one hand, the percentile prediction method and, on the other, the parameter prediction method. The latter was tested with a simple Weibull function, a double Weibull function, and a combination of both according to the distribution that best fitted each plot. However, none of the methods proved sufficiently accurate to predict the diameter distribution. Finally, growth in volume and basal area was estimated by comparing the 2002 and 2011 flights. Although the LiDAR technology differed between flights and only one complete inventory, from 2013, was available, the fitted models show good goodness of fit. Moreover, growth at the pixel level was found to be related in a statistically significant way to the mean slope, aspect and elevation of the pixel.
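The "double Weibull" used for the diameter distributions above is a two-component Weibull mixture, which can capture the bimodal diameter structure a single Weibull misses. A hedged sketch (mixing weight and parameters are illustrative):

```python
import math

def weibull_pdf(x, shape, scale):
    """Density of a two-parameter Weibull at x > 0."""
    return (shape / scale) * (x / scale) ** (shape - 1.0) \
        * math.exp(-((x / scale) ** shape))

def double_weibull_pdf(x, w, k1, c1, k2, c2):
    """Two-component Weibull mixture ('double Weibull') density with
    mixing weight w on the first component."""
    return w * weibull_pdf(x, k1, c1) + (1.0 - w) * weibull_pdf(x, k2, c2)

# Numerical check that the mixture density integrates to one
# (illustrative parameters: two diameter modes near 10 and 30 cm).
step = 0.01
total = sum(double_weibull_pdf(step * (i + 1), 0.4, 3.0, 10.0, 4.0, 30.0)
            for i in range(10000)) * step
```

Per plot, one would choose the simple or double form by goodness of fit, as the study describes.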
Abstract:
In this work, a new family of distributions is proposed which allows modelling survival data whose hazard function has unimodal or U (bathtub) shapes. Modifications of the Weibull, Fréchet, generalized half-normal, log-logistic and lognormal distributions are also considered. For uncensored and censored data, maximum likelihood estimators are considered for the proposed model in order to assess the flexibility of the new family. In addition, a location-scale regression model is used to examine the influence of covariates on survival times. A residual analysis based on modified deviance residuals is also carried out. Simulation studies, under different parameter settings, censoring percentages and sample sizes, are conducted to examine the empirical distribution of the martingale-type and modified deviance residuals. To detect influential observations, local influence measures are used, i.e. diagnostic measures based on small perturbations of the data or of the proposed model. Situations may arise in which the assumption of independence between failure and censoring times does not hold. Thus, another objective of this work is to consider an informative censoring mechanism based on the marginal likelihood, adopting the log-odd log-logistic Weibull distribution in the modelling. Finally, the described methodologies are applied to real data sets.
Abstract:
The Amazon forest plays an important environmental, social and economic role for the region, the country and the world. Logging techniques that reduce the impacts on the forest are therefore essential. The objective of this thesis is to compare Reduced-Impact Logging with Conventional Logging in the Brazilian Amazon using empirical individual-tree growth and yield models. The experiment was set up at the Agrossete farm, located in Paragominas, PA. In 1993, three areas of this farm were selected for harvesting. In the first area, 105 hectares were harvested by Reduced-Impact Logging. In the second area, 75 hectares underwent Conventional Logging. Finally, the third area was kept as a control. Diameter at breast height was measured and species were identified within a 24.5-hectare plot randomly established in each area, in 1993 (before harvest), 1994 (six months after harvest), 1995, 1996, 1998, 2000, 2003, 2006 and 2009. The three areas were compared by fitting a diameter-increment model, in which the stochastic effect was allowed to follow four distributions besides the normal, together with a mortality probability model and a recruitment probability model. The diameter-increment behaviour indicated that the harvested areas behave in the same way for almost all species groups, except the intermediate-species group. Trees subjected to harvesting show greater diameter growth than those in the unharvested area. Moreover, assuming a Weibull distribution for the stochastic effect improved the model fit. Regarding mortality probability, the harvested areas again behave similarly to each other but differently from the unharvested area, with trees in the harvested areas having a higher probability of death than those in the unharvested area. The recruitment probability models indicated differences only between the harvested areas and the control, the harvested areas showing a higher recruitment rate than the unharvested area. Therefore, individual tree behaviour after harvesting is the same under Conventional Logging and Reduced-Impact Logging.
Abstract:
Modelling of self-similar traffic is performed for a queuing system of G/M/1/K type using the Weibull distribution. To study the self-similar traffic, a simulation model is developed with the SIMULINK software package in the MATLAB environment, and the self-similar traffic is approximated on the basis of spline functions. The modelling is carried out for the G/M/1/K queuing system with the following initial data: Hurst parameter H = 0.65, shape parameter of the distribution curve α ≈ 0.7 and distribution parameter β ≈ 0.0099. Considering that self-similar traffic is characterized by bursts ("splashes") and long-term dependence between the arrival times of requests, under the given initial data it is reasonable to use linear interpolation splines.
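The G/M/1/K system described above — general (here Weibull) interarrivals, exponential service, capacity K — can be sketched as a small event-driven simulation. This is a plain Python sketch of the queueing model, not the paper's SIMULINK implementation, and the parameter values in the usage are illustrative:

```python
import random

def gm1k_loss(n_arrivals, k_capacity, shape, scale, mu, seed=0):
    """Event-driven sketch of a G/M/1/K queue: Weibull(shape, scale)
    interarrival times, exponential service with rate mu, a single
    FIFO server, and at most K customers in the system. Returns the
    fraction of arrivals lost (blocked)."""
    rng = random.Random(seed)
    t = 0.0
    depart = []          # scheduled departure times of customers in system
    lost = 0
    for _ in range(n_arrivals):
        t += rng.weibullvariate(scale, shape)        # next arrival epoch
        depart = [d for d in depart if d > t]        # drop completed ones
        if len(depart) >= k_capacity:
            lost += 1                                # system full: blocked
            continue
        start = max(t, depart[-1]) if depart else t  # when the server frees
        depart.append(start + rng.expovariate(mu))
    return lost / n_arrivals

# Heavy load (mean service > mean interarrival) vs light load.
heavy = gm1k_loss(2000, 5, 0.7, 1.0, mu=0.5)
light = gm1k_loss(2000, 5, 0.7, 1.0, mu=5.0)
```

A Weibull shape below one makes the interarrival stream bursty, which is the qualitative feature of self-similar traffic the paper exploits.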
Abstract:
On the basis of aerial photographs of sea ice floes in the marginal ice zone (MIZ) of Prydz Bay acquired from December 2004 to February 2005 during the 21st Chinese National Antarctic Research Expedition, image processing techniques are employed to extract geometric parameters of floes from two merged transects covering the whole MIZ. Variations of these parameters with the distance into the MIZ are then obtained. Different parameters of floe size, namely area, perimeter, and mean caliper diameter (MCD), follow three similar stages of increasing, flat and increasing again with distance from the open ocean. Floe shape parameters (roundness and the ratio of perimeter to MCD), however, vary less significantly than floe size. Then, to account for the deviation of the cumulative floe size distribution from the ideal power law, an upper truncated power-law function and a Weibull function are used, and four calculated parameters of these functions are found to be important descriptors of the evolution of the floe size distribution in the MIZ. Among them, Lr of the upper truncated power-law function indicates the upper limit of floe size and roughly equals the maximum floe size in each square sample area. L0 of the Weibull distribution shows an increasing proportion of larger floes in squares farther from the open ocean and roughly equals the mean floe size. D of the upper truncated power-law function is closely associated with the degree of confinement during ice breakup; its decrease with distance into the MIZ indicates the weakening of confinement conditions on floes owing to wave attenuation. The γ parameter of the Weibull distribution characterizes the degree of homogeneity in a data set; it also decreases with distance into the MIZ, implying that the floe size distributions broaden. Finally, a statistical test on floe size is performed to divide the whole MIZ into three distinct zones made up of floes of quite different characteristics.
This zonal structure of floe size also agrees well with the trends of floe shape and floe size distribution, and is believed to be a straightforward result of wave-ice interaction in the MIZ.
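The upper truncated power law used for the floe sizes above is commonly written as an exceedance count that falls to zero at the upper size limit Lr. A minimal sketch under that assumed form (the constant C and the parameter values are illustrative):

```python
def exceedance(x, D, Lr, C=1.0):
    """Number (or fraction) of floes with size greater than x under an
    upper truncated power law, N(x) = C * (x**-D - Lr**-D) for
    0 < x <= Lr; exactly zero at and beyond the truncation size Lr."""
    if x >= Lr:
        return 0.0
    return C * (x ** -D - Lr ** -D)
```

Unlike an ideal power law, this form bends downward near Lr, which is precisely the deviation the two fitted functions are introduced to capture.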
Abstract:
We consider the problem of estimating P(Y1 + ... + Yn > x) by importance sampling when the Yi are i.i.d. and heavy-tailed. The idea is to exploit the cross-entropy method as a tool for choosing good parameters in the importance sampling distribution; in doing so, we use the asymptotic description that, given {Y1 + ... + Yn > x}, n - 1 of the Yi have distribution F and one has the conditional distribution of Y given Y > x. We show in some specific parametric examples (Pareto and Weibull) how this leads to precise answers which, as demonstrated numerically, are close to being variance-minimal within the parametric class under consideration. Related problems for M/G/1 and GI/G/1 queues are also discussed.
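A basic importance-sampling estimator for the Pareto case can be sketched by drawing every summand from a heavier-tailed Pareto proposal and reweighting by the likelihood ratio. This is a simpler scheme than the paper's cross-entropy-tuned one, shown only to illustrate the mechanics; parameters are hypothetical:

```python
import random

def pareto(rng, alpha):
    """Pareto sample on [1, inf) with tail P(Y > y) = y**-alpha,
    via inverse transform."""
    return (1.0 - rng.random()) ** (-1.0 / alpha)

def is_tail_prob(x, n, alpha, alpha_is, m=50_000, seed=0):
    """Importance-sampling estimate of P(Y1 + ... + Yn > x) for i.i.d.
    Pareto(alpha) summands, using an i.i.d. Pareto(alpha_is) proposal
    with alpha_is < alpha (heavier tail). Each sample is weighted by
    the likelihood ratio prod f(y)/g(y) = prod (alpha/alpha_is) *
    y**(alpha_is - alpha)."""
    rng = random.Random(seed)
    est = 0.0
    for _ in range(m):
        ys = [pareto(rng, alpha_is) for _ in range(n)]
        if sum(ys) > x:
            w = 1.0
            for y in ys:
                w *= (alpha / alpha_is) * y ** (alpha_is - alpha)
            est += w
    return est / m
```

For n = 1 the estimator can be checked against the exact tail x**-alpha; tuning alpha_is toward the conditional-tail description in the abstract is where the cross-entropy method comes in.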