952 results for "Multivariate statistical method"


Relevance: 30.00%

Abstract:

This paper shows the analysis results obtained from more than 200 finite element method (FEM) models used to calculate the settlement of a foundation resting on two soils of differing deformability. The analysis considers parameters such as the foundation geometry, the percentage of each soil in contact with the foundation base and the ratio of the soils’ elastic moduli. From this analysis, it is concluded that the maximum settlement of the foundation, calculated by assuming that the foundation rests completely on the more deformable soil, can be correlated with the settlement calculated by the FEM models through a correction coefficient named the “settlement reduction factor” (α). Consequently, a novel expression is proposed for calculating the real settlement of a foundation resting on two soils of different deformability, with maximum errors lower than 1.57%, as demonstrated by the statistical analysis carried out. A guide for the application of the proposed simple method is also given in the paper. Finally, the proposed methodology has been validated using settlement data from an instrumented foundation, indicating that this is a simple, reliable and quick method which allows the computation of the maximum elastic settlement of a raft foundation, the evaluation of its suitability and the optimisation of its selection process.
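
The abstract gives the idea (real settlement = α times the settlement computed as if the whole base rested on the most deformable soil) but not the paper's expression for α. The sketch below is therefore a hypothetical placeholder: `settlement_reduction_factor` interpolates between the two soils' behaviour using the fraction of the base on the softer soil and the moduli ratio, purely for illustration.

```python
# Illustrative sketch of the settlement-correction idea; the reduction
# factor below is a hypothetical stand-in, not the paper's fitted expression.

def settlement_reduction_factor(soft_fraction: float, modulus_ratio: float) -> float:
    """Hypothetical alpha in (0, 1]: equals 1.0 when the base rests entirely
    on the most deformable soil, smaller as stiffer soil carries more load.
    soft_fraction -- fraction of the base on the more deformable soil
    modulus_ratio -- E_stiff / E_soft (>= 1)
    """
    assert 0.0 <= soft_fraction <= 1.0 and modulus_ratio >= 1.0
    return soft_fraction + (1.0 - soft_fraction) / modulus_ratio

def corrected_settlement(s_max: float, soft_fraction: float, modulus_ratio: float) -> float:
    """s_max is the settlement assuming the base rests wholly on the most
    deformable soil; alpha scales it down for the mixed-soil case."""
    return settlement_reduction_factor(soft_fraction, modulus_ratio) * s_max
```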

Relevance: 30.00%

Abstract:

This article considers the evaluation of the "viability" of innovation projects. Hidden Markov Models are used as the evaluation method, and the problem of determining the model parameters that reproduce the test data with the highest accuracy is solved. The models are trained on statistical data from implemented innovation projects, using the Baum-Welch algorithm.
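
As a small illustration of the machinery involved, the forward algorithm below computes the likelihood of an observation sequence under a discrete HMM; it is the building block that Baum-Welch training iterates over. The states and symbols are toy values, not the project data from the paper.

```python
# Forward algorithm for a discrete HMM: the E-step workhorse of Baum-Welch.

def forward_likelihood(pi, A, B, obs):
    """P(obs | model) for a discrete HMM.
    pi  -- initial state distribution, pi[i]
    A   -- transition matrix, A[i][j] = P(state j | state i)
    B   -- emission matrix, B[i][k] = P(symbol k | state i)
    obs -- list of observed symbol indices
    """
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]  # initial step
    for o in obs[1:]:
        # propagate: alpha'_j = (sum_i alpha_i * A[i][j]) * B[j][o]
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)
```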

Relevance: 30.00%

Abstract:

The relative paleointensity (RPI) method assumes that the intensity of post-depositional remanent magnetization (PDRM) depends exclusively on the magnetic field strength and the concentration of the magnetic carriers. Sedimentary remanence is regarded as an equilibrium state between aligning geomagnetic and randomizing interparticle forces. Just how strong these mechanical and electrostatic forces are depends on many petrophysical factors related to mineralogy, particle size and shape of the matrix constituents. We therefore test the hypothesis that variations in sediment lithology modulate RPI records. For 90 selected Late Quaternary sediment samples from the subtropical and subantarctic South Atlantic Ocean, a combined paleomagnetic and sedimentological dataset was established. Misleading alterations of the magnetic mineral fraction were detected by a routine Fe/kappa test (Funk, J., von Dobeneck, T., Reitz, A., 2004. Integrated rock magnetic and geochemical quantification of redoxomorphic iron mineral diagenesis in Late Quaternary sediments from the Equatorial Atlantic. In: Wefer, G., Mulitza, S., Ratmeyer, V. (Eds.), The South Atlantic in the Late Quaternary: reconstruction of material budgets and current systems. Springer-Verlag, Berlin/Heidelberg/New York/Tokyo, pp. 239-262). Samples with any indication of suboxic magnetite dissolution were excluded from the dataset. The parameters under study include carbonate, opal and terrigenous content, grain size distribution and clay mineral composition. Their bi- and multivariate correlations with the RPI signal were statistically investigated using standard techniques and criteria. While several of the parameters did not yield significant results, clay grain size and chlorite correlate weakly, and opal, illite and kaolinite correlate moderately, with the NRM/ARM signal used here as an RPI measure.
The most influential single sedimentological factor is the kaolinite/illite ratio with a Pearson's coefficient of 0.51 and 99.9% significance. A three-member regression model suggests that matrix effects can make up over 50% of the observed RPI dynamics.

Relevance: 30.00%

Abstract:

An important aspect of manufacturing design is the distribution of geometrical tolerances so that an assembly functions with a given probability, while minimising the manufacturing cost. This requires a complex search over a multidimensional domain, much of which leads to infeasible solutions and which can have many local minima. In addition, Monte Carlo methods are often required to determine the probability that the assembly functions as designed. This paper describes a genetic algorithm for carrying out this search and successfully applies it to two specific mechanical designs, enabling comparisons of a new statistical tolerancing design method with existing methods. (C) 2003 Elsevier Ltd. All rights reserved.
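
A toy sketch of the combination described above: a genetic algorithm searches over tolerance allocations while a Monte Carlo inner loop estimates the probability that the assembly stays within its functional limit. The cost model (`sum(1/t)`), the gap limit, and all GA settings are invented for illustration, not taken from the paper.

```python
# GA + Monte Carlo sketch for statistical tolerance allocation (toy model).
import random

random.seed(1)
LIMIT, TARGET_P, N_MC = 0.5, 0.99, 400   # hypothetical assembly spec

def assembly_ok_prob(tols):
    """Monte Carlo estimate of P(|sum of part deviations| <= LIMIT)."""
    hits = 0
    for _ in range(N_MC):
        gap = sum(random.uniform(-t, t) for t in tols)
        hits += abs(gap) <= LIMIT
    return hits / N_MC

def fitness(tols):
    cost = sum(1.0 / t for t in tols)            # tighter tolerance = dearer
    p = assembly_ok_prob(tols)
    penalty = 0.0 if p >= TARGET_P else 1e3 * (TARGET_P - p)
    return cost + penalty                        # minimise

def evolve(n_parts=3, pop_size=20, gens=30):
    pop = [[random.uniform(0.05, 0.4) for _ in range(n_parts)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                    # noisy but workable ranking
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            i = random.randrange(n_parts)                        # mutation
            child[i] = min(0.4, max(0.05, child[i] * random.uniform(0.8, 1.2)))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```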

Relevance: 30.00%

Abstract:

We present a novel method, called the transform likelihood ratio (TLR) method, for estimating rare event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique, the TLR method reduces the original rare event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established, we estimate the rare event probability via importance sampling, using the classical exponential change of measure or the standard likelihood ratio change of measure. In the latter case the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool in the sense that it not only provides a unified view of heavy-tailed simulation but can also be used efficiently in simulation with light-tailed distributions. We present extensive simulation results which support the efficiency of the TLR method.
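
The change-of-variables idea can be sketched on a textbook example (the distribution, threshold and tilt parameter below are toy choices, not the paper's experiments): for Pareto-distributed X with P(X > x) = x^(-a), the substitution Z = a·ln(X) gives an Exp(1) variable, so the heavy-tailed event {X > gamma} becomes the light-tailed event {Z > a·ln(gamma)}, which is then estimated by importance sampling with an exponential change of measure.

```python
# Transform a Pareto tail probability into an exponential one, then apply
# importance sampling; toy parameters for illustration.
import math
import random

random.seed(7)
a, gamma = 1.0, 10.0 ** 6        # exact answer: gamma**-a = 1e-6
t = a * math.log(gamma)          # rare-event threshold for Z ~ Exp(1)
lam = 1.0 / t                    # tilted rate: Exp(lam) puts mass near t

est, n = 0.0, 20000
for _ in range(n):
    z = random.expovariate(lam)              # sample from the tilted density
    w = math.exp(-(1.0 - lam) * z) / lam     # likelihood ratio Exp(1)/Exp(lam)
    est += (z > t) * w
est /= n

exact = gamma ** -a              # 1e-6; crude MC would need ~1e8 samples
```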

Relevance: 30.00%

Abstract:

The standard variance components method for mapping quantitative trait loci is derived on the assumption of normality. Unsurprisingly, statistical tests based on this method do not perform well if this assumption is not satisfied. We use the statistical concept of copulas to relax the assumption of normality and derive a test that can perform well under any distribution of the continuous trait. In particular, we discuss bivariate normal copulas in the context of sib-pair studies. Our approach is illustrated by a linkage analysis of lipoprotein(a) levels, whose distribution is highly skewed. We demonstrate that the asymptotic critical levels of the test can still be calculated using the interval mapping approach. The new method can be extended to more general pedigrees and multivariate phenotypes in the same way as the original variance components method.
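
One concrete way to see the copula idea: replace a skewed trait by its normal scores (rank, then the standard normal quantile function), so that normal-theory machinery applies whatever the trait's marginal distribution. The sketch below uses invented toy values, not the lipoprotein(a) data, and is only the marginal-transformation step, not the full test.

```python
# Rank-based normal scores: map each trait value to Phi^{-1}((rank-0.5)/n).
import math
from statistics import NormalDist

def normal_scores(trait):
    n = len(trait)
    order = sorted(range(n), key=lambda i: trait[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r                       # 1-based rank of each value
    nd = NormalDist()
    return [nd.inv_cdf((r - 0.5) / n) for r in ranks]

skewed = [math.exp(x) for x in (-1.2, -0.3, 0.1, 0.4, 2.5)]  # lognormal-ish
scores = normal_scores(skewed)            # approximately N(0, 1) marginals
```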

Relevance: 30.00%

Abstract:

In this paper we propose a new identification method, based on the residual white noise autoregressive criterion (Pukkila et al., 1990), to select the order of VARMA structures. Results from extensive simulation experiments based on different model structures, with varying numbers of observations and component series, are used to demonstrate the performance of this new procedure. We also use economic and business data to compare the model structures selected by this order selection method with those identified in other published studies.
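
The core idea of a residual-whiteness criterion is that a correctly ordered model leaves residuals that look like white noise. A minimal sketch of that check via sample autocorrelations and a Box-Pierce-style portmanteau statistic follows; the statistic and cutoff are a generic illustration, not the exact RWNAR criterion of Pukkila et al.

```python
# Residual whiteness check via sample autocorrelations (Box-Pierce style).

def autocorr(x, k):
    """Lag-k sample autocorrelation of x."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - k] - m) for t in range(k, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def box_pierce(residuals, max_lag=5):
    """Q = n * sum of squared autocorrelations; large Q => not white,
    so the candidate model order is rejected."""
    n = len(residuals)
    return n * sum(autocorr(residuals, k) ** 2 for k in range(1, max_lag + 1))
```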

Relevance: 30.00%

Abstract:

Research in conditioning (all the processes of preparation for competition) has used group research designs, where multiple athletes are observed at one or more points in time. However, empirical reports of large inter-individual differences in response to conditioning regimens suggest that applied conditioning research would greatly benefit from single-subject research designs. Single-subject research designs allow us to find out the extent to which a specific conditioning regimen works for a specific athlete, as opposed to the average athlete, who is the focal point of group research designs. The aim of the following review is to outline the strategies and procedures of single-subject research as they pertain to the assessment of conditioning for individual athletes. The four main experimental designs in single-subject research are the AB design, reversal (withdrawal) designs and their extensions, multiple baseline designs and alternating treatment designs. Visual and statistical analyses commonly used to analyse single-subject data are discussed, along with their advantages and limitations. Modelling of multivariate single-subject data using techniques such as dynamic factor analysis and structural equation modelling may identify individualised models of conditioning, leading to better prediction of performance. Despite problems associated with data analyses in single-subject research (e.g. serial dependency), sports scientists should use single-subject research designs in applied conditioning research to understand how well an intervention (e.g. a training method) works and to predict performance for a particular athlete.
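
The simplest of the designs listed above, the AB design, can be sketched numerically: compare an athlete's baseline phase (A) with the treatment phase (B) via the mean shift scaled by the baseline's spread. The observations below are invented toy values (e.g. sprint times in seconds), and this ignores the serial-dependency caveat the review raises.

```python
# AB single-subject design: baseline phase vs treatment phase (toy data).
from statistics import mean, stdev

baseline = [51.2, 50.8, 51.5, 50.9, 51.1]    # phase A observations
treatment = [50.1, 49.8, 50.0, 49.7, 49.9]   # phase B, after intervention

shift = mean(treatment) - mean(baseline)      # negative = times improved
effect_size = shift / stdev(baseline)         # shift in baseline SD units
```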

Relevance: 30.00%

Abstract:

The purpose of this work was to model lung cancer mortality as a function of past exposure to tobacco and to forecast age-sex-specific lung cancer mortality rates. A 3-factor age-period-cohort (APC) model, in which the period variable is replaced by the product of average tar content and adult tobacco consumption per capita, was estimated for the US, UK, Canada and Australia by the maximum likelihood method. Age- and sex-specific tobacco consumption was estimated from historical data on smoking prevalence and total tobacco consumption. Lung cancer mortality was derived from vital registration records. Future tobacco consumption, tar content and the cohort parameter were projected by autoregressive integrated moving average (ARIMA) estimation. The optimal exposure variable was found to be the product of average tar content and adult cigarette consumption per capita, lagged for 25-30 years, for both males and females in all 4 countries. The coefficient of the product of average tar content and tobacco consumption per capita differs by age and sex. In all models, there was a statistically significant difference in the coefficient of the period variable by sex. In all countries, male age-standardized lung cancer mortality rates peaked in the 1980s and declined thereafter. Female mortality rates are projected to peak in the first decade of this century. The multiplicative models of age, tobacco exposure and cohort fit the observed data between 1950 and 1999 reasonably well, and the time-series models yield plausible past trends of the relevant variables. Despite a significant reduction in tobacco consumption and average tar content of cigarettes sold over the past few decades, the effect on lung cancer mortality is delayed by the time lag between exposure and established disease. As a result, the burden of lung cancer among females is only just reaching, or soon will reach, its peak, but has been declining for 1 to 2 decades in men.
Future sex differences in lung cancer mortality are likely to be greater in North America than in Australia and the UK due to differences in exposure patterns between the sexes. (c) 2005 Wiley-Liss, Inc.
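
Schematically, the multiplicative model described above makes the log-rate additive in an age effect, a cohort effect, and a coefficient times the lagged (tar x consumption) exposure. The function below only illustrates that model form; all effect values would come from the maximum likelihood fit and are not reported in the abstract.

```python
# Schematic multiplicative APC-with-exposure model (illustrative form only).
import math

def lung_rate(age_effect, cohort_effect, beta, tar, consumption):
    """Mortality rate = exp(a_age + c_cohort + beta * tar * consumption),
    with the exposure term lagged 25-30 years before the mortality year."""
    return math.exp(age_effect + cohort_effect + beta * tar * consumption)
```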

Relevance: 30.00%

Abstract:

Purpose: To evaluate the clinical features, treatment, and outcomes of a cohort of patients with ocular adnexal lymphoproliferative disease, classified according to the World Health Organization modification of the Revised European-American Classification of Lymphoid Neoplasms, and to perform a robust statistical analysis of these data. Methods: Sixty-nine cases of ocular adnexal lymphoproliferative disease, seen in a tertiary referral center from 1992 to 2003, were included in the study. Lesions were classified using the World Health Organization modification of the Revised European-American Classification of Lymphoid Neoplasms. Outcome variables included disease-specific survival, relapse-free survival, local control, and distant control. Results: Stage IV disease at presentation, aggressive lymphoma histology, the presence of prior or concurrent systemic lymphoma at presentation, and bilateral adnexal disease were significant predictors of reduced disease-specific survival, local control, and distant control. Multivariate analysis found that aggressive histology and bilateral adnexal disease were associated with significantly reduced disease-specific survival. Conclusions: The typical presentation of adnexal lymphoproliferative disease is with a painless mass, swelling, or proptosis; however, pain and inflammation occurred in 20% and 30% of patients, respectively. Stage at presentation, tumor histology, primary or secondary status, and whether the process was unilateral or bilateral were significant variables for disease outcome. In this study, distant spread of lymphoma was lower in patients who received greater than 20 Gy of orbital radiotherapy.

Relevance: 30.00%

Abstract:

The cross-entropy (CE) method is a new generic approach to combinatorial and multi-extremal optimization and rare event simulation. The purpose of this tutorial is to give a gentle introduction to the CE method. We present the CE methodology, the basic algorithm and its modifications, and discuss applications in combinatorial optimization and machine learning.
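
The basic CE optimization loop can be shown in a few lines: sample candidates from a parametric distribution (here a Gaussian), keep the elite fraction, refit the distribution's parameters to the elites, and repeat until the distribution concentrates on the optimum. The toy objective and all settings below are illustrative choices, not taken from the tutorial.

```python
# Minimal cross-entropy optimization loop on a 1-D toy objective.
import random
import statistics

random.seed(3)

def cross_entropy_maximize(f, mu=0.0, sigma=5.0, n=100, elite_frac=0.2, iters=30):
    for _ in range(iters):
        xs = [random.gauss(mu, sigma) for _ in range(n)]   # sample candidates
        xs.sort(key=f, reverse=True)                       # best first
        elites = xs[:int(n * elite_frac)]                  # keep the elite set
        mu = statistics.mean(elites)                       # refit parameters
        sigma = statistics.pstdev(elites) + 1e-6           # avoid collapse
    return mu

best = cross_entropy_maximize(lambda x: -(x - 2.0) ** 2)   # maximum at x = 2
```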

Relevance: 30.00%

Abstract:

Consider a network of unreliable links, modelling, for example, a communication network. Estimating the reliability of the network, expressed as the probability that certain nodes in the network are connected, is a computationally difficult task. In this paper we study how the Cross-Entropy method can be used to obtain more efficient network reliability estimation procedures. Three estimation techniques are considered: crude Monte Carlo and the more sophisticated Permutation Monte Carlo and Merge Process. We show that the Cross-Entropy method yields a speed-up over all three techniques.
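
The baseline technique mentioned above, crude Monte Carlo, is easy to sketch: sample each link up or down independently, check whether the terminal nodes remain connected, and average. The 4-node bridge network and its link probabilities below are toy choices; the paper's CE-driven variants reweight this sampling to cut the variance.

```python
# Crude Monte Carlo two-terminal network reliability (toy bridge network).
import random

random.seed(5)
# links as (node_a, node_b, up_probability) -- hypothetical network
LINKS = [(0, 1, 0.9), (1, 3, 0.9), (0, 2, 0.9), (2, 3, 0.9), (1, 2, 0.9)]

def connected(up_links, source=0, target=3):
    """Graph search over the surviving links only."""
    frontier, seen = [source], {source}
    while frontier:
        node = frontier.pop()
        for a, b, _ in up_links:
            for nxt in ((b,) if a == node else (a,) if b == node else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return target in seen

def crude_mc_reliability(n=20000):
    hits = 0
    for _ in range(n):
        up = [link for link in LINKS if random.random() < link[2]]
        hits += connected(up)
    return hits / n
```

For this all-links-0.9 bridge the exact value is 2p^2 + 2p^3 - 5p^4 + 2p^5 ≈ 0.9785, so the estimate should land close to that.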