374 results for Weibull-jakauma
Abstract:
Genetic evaluations for functional traits require methodologies such as survival analysis (SA) that can accommodate records from females that have not yet experienced the event of interest (culling or conception) at the time of evaluation. The objectives of this thesis were therefore: 1) to analyse the factors affecting the risk of culling, expressed as productive life (PL), and the risk of conception, expressed as days open (DO) and the first-to-last-service interval (FLI); and 2) to estimate genetic dispersion parameters for the evaluated traits. A total of 44,652 lactation records from 15,706 Colombian Holstein cows were used, fitting a Weibull frailty animal model for PL with parity number (NP), milk yield (MY) and age at first calving (AFC) as fixed effects. For DO and FLI, 14,789 services from 6,205 cows were used, fitting a grouped-data model with fixed effects of NP, AFC and service number (SC). Random animal and herd effects were included in all cases. Both MY and NP strongly influenced PL, as did NP for DO and FLI. Heritability (h2) estimates were 0.104, 0.086 and 0.1013 for PL, DO and FLI, respectively, and simple genetic correlations (rs) between MY level and PL and DO were −0.03 and 0.47, respectively. Considering both the breeding values expressed as relative risk and the magnitude of rs, an increase in MY would lead to a lower risk of culling and a higher risk of conception, resulting in longer PL and shorter DO. The estimated heritabilities suggest the existence of genetic variability for PL, DO and FLI in the Colombian Holstein, supporting the implementation of genetic evaluations for the breed that exploit the advantages of methodologies such as SA.
Abstract:
Johnson's SB distribution is a four-parameter distribution that is transformed into a normal distribution by a logit transformation. By replacing the normal distribution of Johnson's SB with the logistic distribution, we obtain a new distributional model that approximates SB. It is analytically tractable, and we name it the "logitlogistic" (LL) distribution. A generalized four-parameter Weibull model and the Burr XII model are also introduced for comparison purposes. Using the distribution "shape plane" (with skewness and kurtosis as axes), we compare the "coverage" properties of the LL, the generalized Weibull, and the Burr XII with those of Johnson's SB, the beta, and the three-parameter Weibull, the main distributions used in forest modelling. The LL is found to have the largest range of shapes. An empirical case study of the distributional models is conducted on 107 sample plots of Chinese fir. The LL performs best among the four-parameter models.
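Because the LL arises from the Johnson SB construction with a logistic base, sampling from it amounts to inverting the logit transform. A minimal sketch, assuming the SB-style parameterisation z = γ + δ·ln((x−ξ)/(ξ+λ−x)) with z standard logistic; the parameter names are illustrative, not necessarily the paper's notation:

```python
import math
import random

def sample_logitlogistic(xi, lam, gamma, delta, n, seed=0):
    """Draw n samples from a logit-logistic-style distribution: the
    Johnson SB logit transform with a standard logistic base variable.
    Support is the interval (xi, xi + lam)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.random()
        z = math.log(u / (1.0 - u))           # standard logistic quantile
        r = math.exp((z - gamma) / delta)     # invert the logit transform
        out.append(xi + lam * r / (1.0 + r))  # map back onto (xi, xi+lam)
    return out
```

Since the logistic CDF is closed-form, the LL's CDF is simply the logistic function applied to γ + δ·ln((x−ξ)/(ξ+λ−x)), which is what makes the model analytically tractable.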
Abstract:
Large waves pose risks to ships, offshore structures, coastal infrastructure and ecosystems. This paper analyses 10 years of in-situ measurements of significant wave height (Hs) and maximum wave height (Hmax) from the ocean weather ship Polarfront in the Norwegian Sea. During the period 2000 to 2009, surface elevation was recorded every 0.59 s during sampling periods of 30 min. The Hmax observations scale linearly with Hs on average. A widely-used empirical Weibull distribution is found to estimate average values of Hmax/Hs and Hmax better than a Rayleigh distribution, but tends to underestimate both for all but the smallest waves. In this paper we propose a modified Rayleigh distribution which compensates for the heterogeneity of the observed dataset: the distribution is fitted to the whole dataset and improves the estimate of the largest waves. Over the 10-year period, the Weibull distribution approximates the observed Hs and Hmax well, and an exponential function can be used to predict the probability distribution function of the ratio Hmax/Hs. However, the Weibull distribution tends to underestimate the occurrence of extremely large values of Hs and Hmax. The persistence of Hs and Hmax in winter is also examined. Wave fields with Hs > 12 m and Hmax > 16 m do not last longer than 3 h. Low-to-moderate wave heights that persist for more than 12 h dominate the relationship of the wave field with the winter NAO index over 2000–2009. In contrast, the inter-annual variability of wave fields with Hs > 5.5 m or Hmax > 8.5 m and wave fields persisting over ~2.5 days is not associated with the winter NAO index.
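The Rayleigh-versus-Weibull comparison above rests on simple closed forms for the most probable maximum of N individual waves. A sketch, assuming the classical Rayleigh exceedance exp(−2(h/Hs)²) and, for the "widely-used empirical Weibull", the Forristall (1978) coefficients (an assumption; the paper may use a different empirical fit):

```python
import math

def rayleigh_mode_hmax(n_waves):
    """Most probable Hmax/Hs for n waves with Rayleigh exceedance
    P(H > h) = exp(-2 (h/Hs)**2): solve n * P(H > h) = 1."""
    return math.sqrt(math.log(n_waves) / 2.0)

def weibull_mode_hmax(n_waves, a=2.263, b=2.126):
    """Same quantity for an empirical Weibull exceedance
    P(H > h) = exp(-a (h/Hs)**b); defaults are Forristall's values."""
    return (math.log(n_waves) / a) ** (1.0 / b)
```

For a 30-min record with a typical zero-crossing period of ~8 s (N ≈ 225 waves), the Weibull form predicts a smaller Hmax/Hs than the Rayleigh form, consistent with the abstract's finding that the Rayleigh distribution overpredicts relative to the empirical Weibull.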
Abstract:
It is shown that, when expressing arguments in terms of their logarithms, the Laplace transform of a function is related to the antiderivative of this function by a simple convolution. This allows efficient numerical computation of moment generating functions of positive random variables and their inversion. The application of the method is straightforward, apart from the necessity to implement it using high-precision arithmetic. In numerical examples the approach is demonstrated to be particularly useful for distributions with heavy tails, such as the lognormal, Weibull, or Pareto distributions, which are otherwise difficult to handle. The computational efficiency compared to other methods is demonstrated for an M/G/1 queueing problem.
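One way to see the stated relation is a sketch under the assumptions F(0) = 0 and suitable decay (the paper's exact normalisation may differ). Integrating by parts with the antiderivative F, and then substituting t = e^τ, s = e^σ:

```latex
\mathcal{L}f(s) = \int_0^\infty f(t)\,e^{-st}\,dt
             = s\int_0^\infty F(t)\,e^{-st}\,dt ,
\qquad
\mathcal{L}f(e^{\sigma})
  = \int_{-\infty}^{\infty} F(e^{\tau})\, e^{\sigma+\tau}\,
    e^{-e^{\sigma+\tau}}\,d\tau
  = (G * k)(\sigma),
```

where G(τ) = F(e^{−τ}) and k(u) = e^{u} e^{−e^{u}} is a fixed Gumbel-type kernel. In log-arguments the transform is thus a convolution of the antiderivative with k, which is what permits efficient numerical evaluation.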
Abstract:
We develop a recursion-relation approach for calculating the failure probabilities of a fiber bundle with local load sharing. This recursion relation is exact, so it provides a way to test the validity of the various approximate methods. Applying the exact calculation to uniform and Weibull threshold distributions, we find that the most probable failure load coincides with the average strength as the size of the system N → ∞.
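For intuition about bundle strength with Weibull thresholds, a Monte Carlo sketch is easy to write. Note the substitution: the paper treats *local* load sharing, whereas the classic equal-load-sharing (Daniels) model below is used here only as a simpler illustrative stand-in:

```python
import math
import random

def bundle_strength(n, rho=1.0, seed=0):
    """Monte Carlo strength per fiber of an equal-load-sharing bundle
    with Weibull(rho) thresholds, F(x) = 1 - exp(-x**rho).
    (Equal load sharing, not the paper's local load sharing.)"""
    rng = random.Random(seed)
    # Thresholds via inverse-CDF sampling, sorted ascending.
    th = sorted((-math.log(1.0 - rng.random())) ** (1.0 / rho)
                for _ in range(n))
    # With the k weakest fibers failed, the n-k survivors share the load
    # equally and sustain at most (n - k) * th[k] in total.
    return max(th[k] * (n - k) for k in range(n)) / n
```

For rho = 1 the large-n strength per fiber converges to max_x x·e^(−x) = 1/e ≈ 0.368, so the simulation gives a quick sanity check of the limiting average strength.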
Abstract:
Metallographic characterisation is combined with statistical analysis to study the microstructure of a BT16 titanium alloy after different heat treatment processes. It was found that the length, width and aspect ratio of α plates in this alloy follow the three-parameter Weibull distribution. Increasing annealing temperature or time causes the probability distribution of the length and the width of α plates to tend toward a normal distribution. The phase transformation temperature of the BT16 titanium alloy was found to be 875±5°C.
Abstract:
A nonparametric, small-sample-size test for the homogeneity of two psychometric functions against the left- and right-shift alternatives has been developed. The test is designed to determine whether it is safe to amalgamate psychometric functions obtained in different experimental sessions. The sum of the lower and upper p-values of the exact (conditional) Fisher test for several 2 × 2 contingency tables (one for each point of the psychometric function) is employed as the test statistic. The probability distribution of the statistic under the null (homogeneity) hypothesis is evaluated to obtain corresponding p-values. Power functions of the test have been computed by randomly generating samples from Weibull psychometric functions. The test is free of any assumptions about the shape of the psychometric function; it requires only that all observations are statistically independent. © 2011 Psychonomic Society, Inc.
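The building block of the test statistic above, the lower and upper p-values of the exact conditional Fisher test for one 2×2 table, can be computed directly from the hypergeometric distribution. A minimal stdlib sketch (function name is illustrative):

```python
from math import comb

def fisher_one_sided_pvalues(a, b, c, d):
    """Lower and upper one-sided p-values of the exact conditional
    Fisher test for the 2x2 table [[a, b], [c, d]]: conditioning on
    all margins makes the top-left cell hypergeometric."""
    row1, col1, n = a + b, a + c, a + b + c + d
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    denom = comb(n, col1)
    probs = {k: comb(row1, k) * comb(n - row1, col1 - k) / denom
             for k in range(lo, hi + 1)}
    lower = sum(p for k, p in probs.items() if k <= a)  # P(X <= a)
    upper = sum(p for k, p in probs.items() if k >= a)  # P(X >= a)
    return lower, upper
```

The test statistic in the abstract is then the sum of these two p-values accumulated over the 2×2 tables, one per point of the psychometric function.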
Abstract:
A numerical method is developed to simulate complex two-dimensional crack propagation in quasi-brittle materials considering random heterogeneous fracture properties. Potential cracks are represented by pre-inserted cohesive elements with tension and shear softening constitutive laws modelled by spatially varying Weibull random fields. Monte Carlo simulations of a concrete specimen under uni-axial tension were carried out with extensive investigation of the effects of important numerical algorithms and material properties on numerical efficiency and stability, crack propagation processes and load-carrying capacities. It was found that the homogeneous model led to incorrect crack patterns and load–displacement curves with strong mesh-dependence, whereas the heterogeneous model predicted realistic, complicated fracture processes and load-carrying capacity of little mesh-dependence. Increasing the variance of the tensile strength random fields with increased heterogeneity led to reduction in the mean peak load and increase in the standard deviation. The developed method provides a simple but effective tool for assessment of structural reliability and calculation of characteristic material strength for structural design.
Abstract:
This paper elaborates on the ergodic capacity of fixed-gain amplify-and-forward (AF) dual-hop systems, which have recently attracted considerable research and industry interest. In particular, two novel capacity bounds that allow for fast and efficient computation and apply for nonidentically distributed hops are derived. More importantly, they are generic since they apply to a wide range of popular fading channel models. Specifically, the proposed upper bound applies to Nakagami-m, Weibull, and generalized-K fading channels, whereas the proposed lower bound is more general and applies to Rician fading channels. Moreover, it is explicitly demonstrated that the proposed lower and upper bounds become asymptotically exact in the high signal-to-noise ratio (SNR) regime. Based on our analytical expressions and numerical results, we gain valuable insights into the impact of model parameters on the capacity of fixed-gain AF dual-hop relaying systems. © 2011 IEEE.
Abstract:
This study presents the findings of an empirical channel characterisation for an ultra-wideband off-body optic fibre-fed multiple-antenna array within an office and corridor environment. The results show that for received power experiments, the office and corridor were best modelled by lognormal and Rician distributions, respectively [for both line of sight (LOS) and non-LOS (NLOS) scenarios]. In the office, LOS measurements for τ and τRMS were both described by the normal distribution for all channels, whereas NLOS measurements for τ and τRMS were Nakagami and Weibull distributed, respectively. For the corridor measurements, LOS results for τ and τRMS were either Nakagami or normally distributed for all channels, with NLOS measurements for τ and τRMS being Nakagami and normally distributed, respectively. This work also shows that achievable diversity gain was influenced by both mutual coupling and cross-correlation coefficients. Although the best diversity gains were 1.8 dB for three-channel selective diversity combining, the authors present recommendations for improving these results. © The Institution of Engineering and Technology 2013.
Abstract:
Hip replacement surgery is amongst the most common orthopaedic operations performed in the UK. Aseptic loosening, a result of cement mantle fatigue, is responsible for 40% of hip revision procedures. The aim of the current study is to analyse the effect of nanoscale Graphene Oxide (GO) on the mechanical properties of orthopaedic bone cement. Study Design An experimental thermal and mechanical analysis was conducted in a laboratory set-up conforming to the international standard for bone cement testing, ISO 5833. Testing was performed on control samples of Colacryl bone cement, and on additional samples reinforced with variable wt% of Graphene Oxide containing composites – 0.1%, 0.25%, 0.5% and 1.0% GO loading. Pilot Data Porosity demonstrated a linear relationship with increasing wt% loading compared to control (p<0.001). Thermal characterisation demonstrated that maximal temperature during polymerization and the generated exotherm were inversely proportional to wt% loading (p<0.05). Fatigue testing performed on the control and on the 0.1 and 0.25 wt% loadings of GO demonstrated increased average cycles to failure compared to control specimens. A right shift of the Weibull curve was demonstrated for both wt% currently available. Logistic regression analysis for failure demonstrated significant increases in number of cycles to failure for both specimens compared to control (p<0.001). Forward Plan Early results convey positive benefits at low wt% loadings of GO-containing bone cement. Study completion and further analysis are required in order to elucidate the optimum wt% of GO which conveys the greatest mechanical advantage.
Abstract:
Doctoral thesis, Statistics and Operational Research (Probability and Statistics), Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
Thesis (Ph.D.)--University of Washington, 2013
Abstract:
Distributed generation, unlike centralized electrical generation, aims to generate electrical energy on a small scale, as near as possible to load centers, interchanging electric power with the network. This work presents a probabilistic methodology conceived to assist electric system planning engineers in selecting the location of distributed generation, taking into account the hourly load changes, i.e. the daily load cycle. The hourly load centers, for each of the different hourly load scenarios, are calculated deterministically. These location points, properly weighted according to their load magnitude, are used to fit the best-fit probability distribution. This distribution is used to determine the maximum-likelihood perimeter of the area where each distributed generation source should preferably be located by the planning engineers. This takes into account, for example, the availability and cost of land lots, which are factors of special relevance in urban areas, as well as several obstacles important for the final selection of candidate distributed generation points. The proposed methodology has been applied to a real case, assuming three different bivariate probability distributions: the Gaussian distribution, a bivariate version of Freund’s exponential distribution and the Weibull probability distribution. The methodology's algorithm has been programmed in MATLAB. Results are presented and discussed for the application of the methodology to a realistic case and demonstrate its ability to efficiently determine the best location of distributed generation and the corresponding distribution networks.
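The first step of such a methodology, fitting a distribution to load-magnitude-weighted location points, is straightforward for the Gaussian candidate: the maximum-likelihood fit is the weighted mean and covariance, and a constant-likelihood "perimeter" is an ellipse of constant Mahalanobis distance. A minimal sketch (function names are illustrative):

```python
def weighted_gaussian_fit(points, weights):
    """ML fit of a bivariate Gaussian to (x, y) load centers weighted
    by load magnitude; returns the mean and 2x2 covariance."""
    w = sum(weights)
    mx = sum(wi * x for (x, _), wi in zip(points, weights)) / w
    my = sum(wi * y for (_, y), wi in zip(points, weights)) / w
    sxx = sum(wi * (x - mx) ** 2 for (x, _), wi in zip(points, weights)) / w
    syy = sum(wi * (y - my) ** 2 for (_, y), wi in zip(points, weights)) / w
    sxy = sum(wi * (x - mx) * (y - my)
              for (x, y), wi in zip(points, weights)) / w
    return (mx, my), ((sxx, sxy), (sxy, syy))

def mahalanobis2(pt, mean, cov):
    """Squared Mahalanobis distance; the maximum-likelihood perimeter
    is the ellipse where this equals a chosen quantile."""
    (sxx, sxy), (_, syy) = cov
    det = sxx * syy - sxy * sxy
    dx, dy = pt[0] - mean[0], pt[1] - mean[1]
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
```

The paper's other two candidates (bivariate Freund exponential and Weibull) would replace the closed-form moments above with numerical maximum-likelihood estimation.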
Abstract:
Wind resource evaluation at two sites in Portugal was performed using the Weather Research and Forecasting (WRF) mesoscale modelling system and the wind resource analysis tool commonly used within the wind power industry, the Wind Atlas Analysis and Application Program (WAsP) microscale model. Wind measurement campaigns were conducted at the selected sites, allowing a comparison between in situ measurements and simulated wind in terms of flow characteristics and energy yield estimates. Three different methodologies were tested, aiming to provide an overview of their benefits and limitations for wind resource estimation. In the first methodology, the mesoscale model acts as a "virtual" wind measuring station: wind data computed by WRF for both sites is inserted directly as input into WAsP. In the second approach, the same procedure was followed, but the terrain influences induced by the mesoscale model's low-resolution terrain data were removed from the simulated wind data. In the third methodology, the simulated wind data is extracted at the top of the planetary boundary layer for both sites, to assess whether the use of geostrophic winds (which, by definition, are not influenced by the local terrain) can improve the models' performance. The results of these methodologies were compared with those from in situ measurements in terms of mean wind speed, Weibull probability density function parameters and production estimates, considering the installation of one wind turbine at each site. Results showed that the second approach produces values closest to the measured ones, and fairly acceptable deviations were found using this coupling technique in terms of estimated annual production. However, mesoscale output should not be used directly in wind farm siting projects, mainly because of the poor resolution of the mesoscale model's terrain data. Instead, the use of mesoscale output in microscale models should be seen as a valid alternative to in situ data, mainly for preliminary wind resource assessments, although the application of mesoscale and microscale coupling in areas with complex topography should be done with extreme caution.