922 results for Statistical Tolerance Analysis


Relevance: 100.00%

Abstract:

Background: Most large acute stroke trials have been neutral. Functional outcome is usually analysed as a yes/no answer, e.g. death or dependency vs. independence. We assessed which statistical approaches are most efficient in analysing outcomes from stroke trials. Methods: Individual patient data from acute, rehabilitation and stroke unit trials studying the effects of interventions which alter functional outcome were assessed. Outcomes included the modified Rankin Scale, Barthel Index, and ‘3 questions’. Data were analysed using a variety of approaches which compare two treatment groups, and the results of each statistical test for each trial were then compared. Results: Data from 55 datasets were obtained (47 trials, 54,173 patients). The test results differed substantially: approaches which use the ordered nature of functional outcome data (ordinal logistic regression, t-test, robust ranks test, bootstrapping the difference in mean rank) were statistically more efficient than those which collapse the data into two groups (chi-square) (ANOVA p<0.001). The findings were consistent across different types and sizes of trial and across the different measures of functional outcome. Conclusions: When analysing functional outcome from stroke trials, statistical tests which use the original ordered data are more efficient and more likely to yield reliable results. Suitable approaches include ordinal logistic regression, the t-test, and the robust ranks test.
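As an illustration of the dichotomised-versus-ordinal contrast described above, here is a minimal Python sketch (not the authors' code; the outcome distributions and sample sizes are invented) comparing a chi-square test on a collapsed modified Rankin Scale with a rank-based test on the full scale:

```python
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical modified Rankin Scale (0-6) outcomes for two arms: the
# treatment arm is shifted slightly towards better (lower) scores.
control = rng.choice(7, size=500, p=[.08, .12, .15, .20, .20, .15, .10])
treated = rng.choice(7, size=500, p=[.12, .15, .18, .20, .17, .11, .07])

# Dichotomised analysis: independent (mRS 0-2) vs dependent/dead (mRS 3-6)
table = [[(control <= 2).sum(), (control > 2).sum()],
         [(treated <= 2).sum(), (treated > 2).sum()]]
chi2, p_dich, _, _ = chi2_contingency(table)

# Ordinal analysis: a rank-based test on the full 7-point scale
_, p_ord = mannwhitneyu(treated, control, alternative="two-sided")

print(f"dichotomised chi-square p = {p_dich:.4f}")
print(f"rank test on full scale  p = {p_ord:.4f}")
```

With a genuine ordered shift between arms, the rank test will typically reach a smaller p-value than the dichotomised chi-square, which is the efficiency gain the pooled trial data demonstrated.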

Relevance: 100.00%

Abstract:

Variety selection in perennial pasture crops involves identifying the best varieties from data collected at multiple harvest times in field trials. For accurate selection, the statistical methods used to analyse such data need to account for the spatial and temporal correlation typically present. This paper provides an approach for analysing multi-harvest data from variety selection trials in which there may be a large number of harvest times. Methods are presented for modelling the variety-by-harvest effects while accounting for the spatial and temporal correlation between observations. These methods improve model fit compared with separate analyses of each harvest, and provide insight into variety-by-harvest interactions. The approach is illustrated using two traits from a lucerne variety selection trial. The proposed method provides variety predictions that allow for the natural sources of variation and correlation in multi-harvest data.
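A hedged sketch of the kind of mixed model involved (this is not the authors' model, which fits explicit spatial and temporal correlation structures; here a random plot intercept merely stands in for those terms, and all names and data are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical layout: 4 varieties x 4 replicate plots x 6 harvests
df = pd.DataFrame([(v, f"v{v}r{r}", h)
                   for v in range(4) for r in range(4) for h in range(6)],
                  columns=["variety", "plot", "harvest"])
plot_eff = {p: rng.normal(0, 0.5) for p in df["plot"].unique()}
df["dm_yield"] = (2.0 + 0.3 * df["variety"] + 0.1 * df["harvest"]
                  + df["plot"].map(plot_eff) + rng.normal(0, 0.3, len(df)))

# Variety, harvest and their interaction as fixed effects; a random
# intercept per plot stands in for the spatial/temporal correlation
# structure modelled in the paper.
model = smf.mixedlm("dm_yield ~ C(variety) * C(harvest)", df,
                    groups=df["plot"])
print(model.fit().summary())
```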

Relevance: 100.00%

Abstract:

Beyond the classical statistical approaches (determination of basic statistics, regression analysis, ANOVA, etc.), a new set of applications of different statistical techniques has increasingly gained relevance in the analysis, processing and interpretation of data concerning the characteristics of forest soils, as can be seen in several recent publications on multivariate statistics. These newer methods require additional care that is not always taken, or even mentioned, in some applications. In the particular case of geostatistical applications it is necessary, besides geo-referencing all data acquisition, to collect the samples on regular grids and in sufficient quantity so that the variograms can reflect the spatial distribution of soil properties in a representative manner. Most multivariate techniques (principal component analysis, correspondence analysis, cluster analysis, etc.), although they generally do not require the assumption of a normal distribution, nevertheless need a proper and rigorous strategy for their use. In this work we present some reflections on these methodologies, in particular on the main constraints that often arise during data collection and on the various ways in which these different techniques can be linked. Finally, some illustrative cases of the application of these statistical methods are presented.
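As a small example of one of the multivariate techniques mentioned (principal component analysis), here is a sketch on invented soil-property data; the variable list and values are hypothetical:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Hypothetical forest-soil samples: pH, organic matter, N, P, K
X = rng.normal(size=(60, 5))
X[:, 2] += 0.8 * X[:, 1]          # correlated properties, as is typical

# Standardise first: PCA is scale-sensitive and soil properties mix units
Z = StandardScaler().fit_transform(X)
pca = PCA().fit(Z)
print(np.round(pca.explained_variance_ratio_, 3))  # variance per component
print(np.round(pca.components_[0], 3))             # loadings of component 1
```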

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

Damage tolerance analysis is a relatively new methodology based on prescribed inspections. The load spectra used to derive the results of these analyses strongly influence the inspection programmes ultimately defined; they must therefore be as representative as possible of the loads acting on the structural component under consideration, while being obtained at reduced cost and time. The principal purpose of our work is to improve on current practice by developing a complete numerical damage tolerance analysis, able to prescribe inspection programmes for typical critical aircraft components in accordance with DT regulations, starting from load spectra that are much more specific than those used today. These more specific design load spectra were obtained from a purpose-built flight simulator developed in a Matlab/Simulink environment. The dynamic model is designed to simulate typical missions flown either manually (joystick inputs) or fully automatically (a reference trajectory must be provided). Once these flights have been simulated, the model's outputs are used to generate load spectra, which are then processed to extract information (peaks, valleys) for statistical analysis and comparison with other load spectra. Load amplitudes are also extracted from the generated spectra, via the rainflow counting method, to perform the damage tolerance predictions mentioned above. The entire methodology runs automatically: once the specified input parameters have been introduced and the typical flights have been simulated, manually or automatically, it relates the effects of the simulated flights to the reduction in residual strength of the component under consideration.
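A minimal sketch of the rainflow step on an invented load trace, assuming the third-party Python rainflow package (pip install rainflow); the package and the trace are assumptions of this illustration, not the authors' Matlab/Simulink tool chain:

```python
import numpy as np
import rainflow  # third-party package, an assumption of this sketch

# Hypothetical load history, e.g. a stress trace from one simulated flight
t = np.linspace(0.0, 60.0, 2000)
load = (50 * np.sin(0.8 * t) + 20 * np.sin(3.1 * t)
        + np.random.default_rng(4).normal(0, 2, t.size))

# Rainflow counting reduces the trace to (range, cycle-count) pairs that
# feed a fatigue/damage-tolerance calculation (e.g. via an S-N curve)
cycles = rainflow.count_cycles(load)
total = sum(c for _, c in cycles)
largest = max(r for r, _ in cycles)
print(f"{total:.1f} cycles counted; largest load range = {largest:.1f}")
```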

Relevance: 90.00%

Abstract:

This article presents field applications and validations of the controlled Monte Carlo data generation scheme. This scheme was previously derived to help the Mahalanobis squared distance-based damage identification method cope with data-shortage problems, which often cause inadequate data multinormality and unreliable identification outcomes. To do so, real vibration datasets from two actual civil engineering structures with such data (and identification) problems are selected as test objects and shown to be in need of enhancement. By utilising the robust probability measures of the data condition indices in controlled Monte Carlo data generation and a statistical sensitivity analysis of the Mahalanobis squared distance computational system, well-conditioned synthetic data generated by an optimal controlled Monte Carlo data generation configuration can be evaluated without bias against data generated by other set-ups and against the original data. The results reconfirm that controlled Monte Carlo data generation can overcome the shortage of observations, improve data multinormality and enhance the reliability of the Mahalanobis squared distance-based damage identification method, particularly with respect to false-positive errors. The results also highlight the dynamic structure of controlled Monte Carlo data generation, which makes the scheme adaptive to any type of input data with any (original) distributional condition.
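A minimal sketch of the Mahalanobis squared distance damage indicator itself (the controlled Monte Carlo data generation scheme is not reproduced here; the feature dimension, data and threshold choice are invented):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)

# Baseline (healthy-state) feature vectors, e.g. modal or AR-model features
d, n_base = 6, 400
baseline = rng.multivariate_normal(np.zeros(d), np.eye(d), size=n_base)
mu = baseline.mean(axis=0)
S_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def msd(x):
    """Mahalanobis squared distance of observation x to the baseline."""
    diff = x - mu
    return diff @ S_inv @ diff

# Under multinormality, the MSD is approximately chi-square with d degrees
# of freedom, so a 99% quantile gives a damage-detection threshold
threshold = chi2.ppf(0.99, df=d)

healthy = rng.multivariate_normal(np.zeros(d), np.eye(d), size=5)
damaged = rng.multivariate_normal(np.full(d, 1.5), np.eye(d), size=5)
for x in np.vstack([healthy, damaged]):
    d2 = msd(x)
    print(f"MSD = {d2:7.2f}  ->  {'damage' if d2 > threshold else 'ok'}")
```

Poor data multinormality breaks the chi-square threshold assumption, which is exactly the condition the controlled Monte Carlo generation scheme is designed to repair.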

Relevance: 90.00%

Abstract:

Meta-analysis is a method to obtain a weighted average of results from various studies. In addition to pooling effect sizes, meta-analysis can also be used to estimate disease frequencies, such as incidence and prevalence. In this article we present methods for the meta-analysis of prevalence. We discuss the logit and double arcsine transformations to stabilise the variance. We note the special situation of multiple category prevalence, and propose solutions to the problems that arise. We describe the implementation of these methods in the MetaXL software, and present a simulation study and the example of multiple sclerosis from the Global Burden of Disease 2010 project. We conclude that the double arcsine transformation is preferred over the logit, and that the MetaXL implementation of multiple category prevalence is an improvement in the methodology of the meta-analysis of prevalence.
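A minimal sketch of the double arcsine workflow (not the MetaXL implementation): the Freeman-Tukey transform t = arcsin sqrt(x/(n+1)) + arcsin sqrt((x+1)/(n+1)) with var(t) = 1/(n+0.5), inverse-variance pooling, and Miller's (1978) back-transform using the harmonic mean sample size. The study counts below are invented:

```python
import numpy as np

def ft(x, n):
    """Freeman-Tukey double arcsine transform of x events in n subjects."""
    return (np.arcsin(np.sqrt(x / (n + 1)))
            + np.arcsin(np.sqrt((x + 1) / (n + 1))))

# Hypothetical prevalence studies: (cases, sample size)
x = np.array([12, 30, 7, 55])
n = np.array([240, 510, 90, 1000])

t = ft(x, n)
w = n + 0.5                      # inverse of var(t) = 1/(n + 0.5)
t_bar = np.sum(w * t) / np.sum(w)

# Miller's back-transform, evaluated at the harmonic mean sample size
n_h = len(n) / np.sum(1.0 / n)
s = np.sin(t_bar)
p_pooled = 0.5 * (1 - np.sign(np.cos(t_bar))
                  * np.sqrt(1 - (s + (s - 1 / s) / n_h) ** 2))
print(f"pooled prevalence ~ {p_pooled:.4f}")
```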

Relevance: 90.00%

Abstract:

The majority of Australian weeds are exotic plant species that were intentionally introduced for a variety of horticultural and agricultural purposes. A border weed risk assessment system (WRA) was implemented in 1997 in order to reduce the high economic costs and massive environmental damage associated with introducing serious weeds. We review the behaviour of this system with regard to eight years of data collected from the assessment of species proposed for importation or held within genetic resource centres in Australia. From a taxonomic perspective, species from the Chenopodiaceae and Poaceae were most likely to be rejected and those from the Arecaceae and Flacourtiaceae were most likely to be accepted. Dendrogram analysis and classification and regression tree (TREE) models were also used to analyse the data. The latter revealed that a small subset of the 35 variables assessed was highly associated with the outcome of the original assessment. The TREE model examining all of the data contained just five variables: unintentional human dispersal, congeneric weed, weed elsewhere, tolerates or benefits from mutilation, cultivation or fire, and reproduction by vegetative propagation. It gave the same outcome as the full WRA model for 71% of species. Weed elsewhere was not the first splitting variable in this model, indicating that the WRA has a capacity for capturing species that have no history of weediness. A reduced TREE model (in which human-mediated variables had been removed) contained four variables: broad climate suitability, reproduction in less than or equal to 1 year, self-fertilisation, and tolerates or benefits from mutilation, cultivation or fire. It yielded the same outcome as the full WRA model for 65% of species. Data inconsistencies and the relative importance of questions are discussed, with some recommendations made for improving the use of the system.
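A hedged sketch of fitting a classification tree of this kind with scikit-learn; the question names follow the abstract, but the screening data and the outcome rule are synthetic, not the WRA dataset:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(6)

# Hypothetical screening data: binary answers to a handful of WRA questions
feats = ["unintentional_human_dispersal", "congeneric_weed", "weed_elsewhere",
         "tolerates_mutilation_fire", "vegetative_propagation"]
X = rng.integers(0, 2, size=(300, len(feats)))

# Synthetic accept/reject outcome loosely tied to two of the questions, so
# the tree has a signal to find; purely illustrative
y = ((X[:, 2] + X[:, 0] + rng.normal(0, 0.4, 300)) > 1.2).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feats))  # the fitted splitting rules
```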

Relevance: 90.00%

Abstract:

To facilitate marketing and export, the Australian macadamia industry requires accurate crop forecasts. Each year, two levels of crop predictions are produced for this industry. The first is an overall longer-term forecast based on tree census data of growers in the Australian Macadamia Society (AMS). This data set currently accounts for around 70% of total production, and is supplemented by our best estimates of non-AMS orchards. Given these total tree numbers, average yields per tree are needed to complete the long-term forecasts. Yields from regional variety trials were initially used, but were found to be consistently higher than the average yields that growers were obtaining. Hence, a statistical model was developed using growers' historical yields, also taken from the AMS database. This model accounted for the effects of tree age, variety, year, region and tree spacing, and explained 65% of the total variation in the yield per tree data. The second level of crop prediction is an annual climate adjustment of these overall long-term estimates, taking into account the expected effects on production of the previous year's climate. This adjustment is based on relative historical yields, measured as the percentage deviance between expected and actual production. The dominant climatic variables are observed temperature, evaporation, solar radiation and modelled water stress. Initially, a number of alternate statistical models showed good agreement within the historical data, with jack-knife cross-validation R2 values of 96% or better. However, forecasts varied quite widely between these alternate models. Exploratory multivariate analyses and nearest-neighbour methods were used to investigate these differences. For 2001-2003, the overall forecasts were in the right direction (when compared with the long-term expected values), but were over-estimates. In 2004 the forecast was well under the observed production, and in 2005 the revised models produced a forecast within 5.1% of the actual production. Over the first five years of forecasting, the absolute deviance for the climate-adjustment models averaged 10.1%, just outside the targeted objective of 10%.
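A minimal sketch of the jack-knife (leave-one-out) cross-validation R2 quoted above, on invented climate-predictor data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(7)

# Hypothetical climate predictors (temperature, evaporation, radiation,
# water stress) against relative yield deviance, one row per season
X = rng.normal(size=(25, 4))
y = X @ np.array([0.6, -0.4, 0.5, -0.8]) + rng.normal(0, 0.2, 25)

# Leave-one-out ("jack-knife") predictions, then R2 against the data
y_hat = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"jack-knife cross-validation R2 = {r2:.3f}")
```

A high cross-validated R2 within the historical record, as the abstract notes, does not guarantee agreement between alternate models on out-of-sample forecasts.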

Relevance: 90.00%

Abstract:

The emerging carbon economy will have a major impact on grazing businesses because of significant livestock methane and land-use change emissions. Livestock methane emissions alone account for around 11% of Australia's reported greenhouse gas emissions. Grazing businesses need to develop an understanding of their greenhouse gas impact and be able to assess the impact of alternative management options. This paper generates a greenhouse gas budget for two scenarios using a spreadsheet model. The first scenario was based on one land type, '20-year-old brigalow regrowth', in the brigalow bioregion of southern-central Queensland. The 50-year analysis demonstrated the substantially different greenhouse gas outcomes and livestock carrying capacities of three alternative regrowth management options: retain regrowth (sequester 71.5 t carbon dioxide equivalents per hectare, CO2-e/ha), clear all regrowth (emit 42.8 t CO2-e/ha) and clear regrowth strips (emit 5.8 t CO2-e/ha). The second scenario was based on a 'remnant eucalypt savanna-woodland' land type in the Einasleigh Uplands bioregion of north Queensland. The four alternative vegetation management options were: retain the current woodland structure (emit 7.4 t CO2-e/ha), allow the woodland to thicken, increasing tree basal area (sequester 20.7 t CO2-e/ha), thin trees <10 cm diameter (emit 8.9 t CO2-e/ha), and thin trees <20 cm diameter (emit 12.4 t CO2-e/ha). Significant assumptions were required to complete the budgets owing to gaps in current knowledge of the response of woody vegetation, soil carbon and non-CO2 soil emissions to management options and land type at the property scale. The analyses indicate that there is scope for grazing businesses to choose alternative management options to influence their greenhouse gas budget. However, a key assumption is that accumulation of carbon or avoidance of emissions somewhere on a grazing business (e.g. in woody vegetation or soil) will be recognised as an offset for emissions elsewhere in the business (e.g. livestock methane). This issue will be a challenge for livestock industries and policy makers to work through in the coming years.
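A minimal sketch of the budget arithmetic, combining the per-hectare vegetation fluxes quoted above with a hypothetical herd methane figure; the area and methane numbers are invented, and recognition of vegetation offsets against livestock emissions is assumed, as the abstract notes:

```python
# Per-hectare 50-year vegetation fluxes from the first scenario in the text
# (positive = sequestered, negative = emitted, t CO2-e/ha)
options = {"retain regrowth": 71.5, "clear all": -42.8, "clear strips": -5.8}

area_ha = 1000.0                    # hypothetical property area
methane_t_co2e_per_yr = 800.0       # hypothetical whole-herd emissions
years = 50

for name, flux in options.items():
    vegetation = flux * area_ha     # offset (+) or extra emission (-)
    net = methane_t_co2e_per_yr * years - vegetation
    print(f"{name:15s}: net 50-yr position = {net:10.0f} t CO2-e")
```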

Relevance: 90.00%

Abstract:

It is essential to provide experimental evidence and reliable predictions of the effects of water stress on crop production in drier, less predictable environments. A field experiment undertaken in southeast Queensland, Australia with three water regimes (fully irrigated; rainfed; and irrigated until late canopy expansion, then rainfed) was used to compare the effects of water stress on crop production in two maize (Zea mays L.) cultivars (Pioneer 34N43 and Pioneer 31H50). Water stress affected growth and yield more in Pioneer 34N43 than in Pioneer 31H50. The crop model APSIM-Maize, after being calibrated for the two cultivars, was used to simulate maize growth and development under water stress. The predictions of leaf area index (LAI) dynamics, biomass growth and grain yield under the rainfed and the irrigated-then-rainfed treatments were reasonable, indicating that the stress indices used by APSIM-Maize produced appropriate adjustments to crop growth and development in response to water stress. This study shows that Pioneer 31H50 is less sensitive to water stress and thus the preferred cultivar in dryland conditions, and that it is feasible to provide sound predictions and risk assessments for crop production in drier, more variable conditions using the APSIM-Maize model.
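A hedged sketch of a generic supply/demand water stress index of the kind crop models apply to daily growth (APSIM-Maize's actual stress functions differ in detail and are process-specific; all numbers are invented):

```python
def water_stress_index(supply_mm: float, demand_mm: float) -> float:
    """Generic supply/demand stress factor in [0, 1]; 1 = unstressed.

    Mirrors the general form of crop-model stress indices, not the exact
    APSIM-Maize formulation.
    """
    if demand_mm <= 0:
        return 1.0
    return min(1.0, supply_mm / demand_mm)

# Daily potential growth is then scaled by the index, e.g.:
potential_biomass_gain = 180.0      # kg/ha/day, hypothetical
actual = potential_biomass_gain * water_stress_index(3.2, 5.0)
print(f"stressed biomass gain = {actual:.0f} kg/ha/day")
```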

Relevance: 90.00%

Abstract:

Context: Identifying susceptibility genes for schizophrenia may be complicated by phenotypic heterogeneity, with some evidence suggesting that phenotypic heterogeneity reflects genetic heterogeneity. Objective: To evaluate the heritability and conduct genetic linkage analyses of empirically derived, clinically homogeneous schizophrenia subtypes. Design: Latent class and linkage analysis. Setting: Taiwanese field research centers. Participants: The latent class analysis included 1236 Han Chinese individuals with DSM-IV schizophrenia. These individuals were members of a large affected-sibling-pair sample of schizophrenia (606 ascertained families), original linkage analyses of which detected a maximum logarithm of odds (LOD) of 1.8 (z = 2.88) on chromosome 10q22.3. Main Outcome Measures: Multipoint exponential LOD scores by latent class assignment and parametric heterogeneity LOD scores. Results: Latent class analyses identified 4 classes, with 2 demonstrating familial aggregation. The first (LC2) described a group with severe negative symptoms, disorganization, and pronounced functional impairment, resembling “deficit schizophrenia.” The second (LC3) described a group with minimal functional impairment, mild or absent negative symptoms, and low disorganization. Using the negative/deficit subtype, we detected genome-wide significant linkage to 1q23-25 (LOD = 3.78, empiric genome-wide P = .01). This region was not detected using the DSM-IV schizophrenia diagnosis, but has been strongly implicated in schizophrenia pathogenesis by previous linkage and association studies. Variants in the 1q region may specifically increase risk for a negative/deficit schizophrenia subtype. Alternatively, these results may reflect increased familiality/heritability of the negative class, the presence of multiple 1q schizophrenia risk genes, or a pleiotropic 1q risk locus or loci, with stronger genotype-phenotype correlation with negative/deficit symptoms. Using the second familial latent class, we identified nominally significant linkage to the original 10q peak region. Conclusion: Genetic analyses of heritable, homogeneous phenotypes may improve the power of linkage and association studies of schizophrenia and thus have relevance to the design and analysis of genome-wide association studies.
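For readers unfamiliar with LOD scores, a minimal two-point example: LOD = log10[L(theta)/L(theta = 0.5)], the likelihood of the data at recombination fraction theta against the no-linkage value of 0.5. The meiosis counts below are invented, and this is far simpler than the multipoint, latent-class-conditioned analysis of the paper:

```python
import numpy as np

def lod(recombinants: int, total: int, theta: float) -> float:
    """Two-point LOD score: log10 of L(theta) against L(0.5)."""
    r, n = recombinants, total
    l_theta = theta**r * (1 - theta)**(n - r)
    l_null = 0.5**n
    return np.log10(l_theta / l_null)

# Hypothetical fully informative meioses: 2 recombinants out of 20
r, n = 2, 20
theta_hat = r / n
print(f"LOD at theta_hat = {theta_hat:.2f}: {lod(r, n, theta_hat):.2f}")
# A LOD >= 3 is the classical genome-wide significance benchmark
```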

Relevance: 90.00%

Abstract:

A statistical performance analysis of the ESPRIT, root-MUSIC and minimum-norm direction estimation methods under finite-data perturbations, using the modified spatially smoothed covariance matrix, is developed. Expressions for the mean-squared error in the direction estimates are derived within a common framework. The analysis shows that the modified smoothed covariance matrix improves the performance of these methods when the sources are fully correlated. The performance also remains better when the number of subarrays is large, unlike with the conventionally smoothed covariance matrix. However, performance for uncorrelated sources deteriorates because the modified smoothing introduces an artificial correlation. The theoretical expressions are validated using extensive simulations.
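A minimal numpy sketch of why the modified (forward-backward) spatial smoothing matters for fully correlated sources: the sample covariance of two coherent sources has a rank-1 signal part, and smoothing restores the rank that subspace methods such as MUSIC and ESPRIT require. Array sizes, angles and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
M, L_sub, K = 8, 5, 200            # sensors, subarray size, snapshots
theta = np.deg2rad([10.0, 25.0])   # two fully correlated (coherent) sources

def steering(m, ang):
    return np.exp(1j * np.pi * np.arange(m)[:, None] * np.sin(ang))

A = steering(M, theta)             # M x 2 array manifold
s = rng.standard_normal(K) + 1j * rng.standard_normal(K)
S = np.vstack([s, 0.9 * s])        # coherent: second source is a scaled copy
X = A @ S + 0.1 * (rng.standard_normal((M, K))
                   + 1j * rng.standard_normal((M, K)))
R = X @ X.conj().T / K             # sample covariance: rank-1 signal part

# Forward spatial smoothing over M - L_sub + 1 overlapping subarrays
P = M - L_sub + 1
Rf = sum(R[p:p + L_sub, p:p + L_sub] for p in range(P)) / P

# Modified (forward-backward) smoothing: average with exchanged conjugate
J = np.eye(L_sub)[::-1]
Rfb = 0.5 * (Rf + J @ Rf.conj() @ J)

print(np.round(np.linalg.eigvalsh(R)[::-1], 3))    # one dominant eigenvalue
print(np.round(np.linalg.eigvalsh(Rfb)[::-1], 3))  # two dominant eigenvalues
```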

Relevance: 90.00%

Abstract:

With the extensive use of dynamic voltage scaling (DVS) there is an increasing need for voltage-scalable models; the strong sensitivity of leakage to temperature likewise motivates a temperature-scalable model. We characterize standard cell libraries for statistical leakage analysis based on models for transistor stacks. Modelling stacks has the advantage that a single model serves many gates, thereby reducing the number of models that need to be characterized. Our experiments on 15 different gates show that only 23 stack models were needed to predict the leakage across 126 input vector combinations. We investigate the use of neural networks for the combined PVT model for the stacks, which can capture the effects of inter-die and intra-gate variations, supply voltage (0.6-1.2 V) and temperature (0-100 °C) on leakage. Results show that neural network based stack models can predict the PDF of leakage current across supply voltage and temperature accurately, with an average error in the mean of less than 2% and in the standard deviation of less than 5% across the voltage and temperature ranges.
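A hedged sketch of the approach: train a small neural network on (process shift, V, T) -> log leakage, then Monte Carlo the process variation through it to obtain a leakage PDF at a given operating point. The synthetic leakage expression below is a generic stand-in, not a fitted device model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)

# Synthetic stand-in for characterisation data: stack leakage as a function
# of (threshold-voltage shift dVth, supply V, temperature T)
n = 4000
dvth = rng.normal(0, 0.03, n)       # inter/intra-die Vth shift (V)
v = rng.uniform(0.6, 1.2, n)        # supply voltage (V)
temp = rng.uniform(0, 100, n)       # temperature (degC)
log_leak = -dvth / 0.04 + 1.5 * v + 0.02 * temp + rng.normal(0, 0.05, n)

X = np.column_stack([dvth, v, temp])
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 16),
                                 max_iter=3000, random_state=0))
net.fit(X, log_leak)

# Monte Carlo over process variation at fixed (V, T) gives the leakage PDF
dv_mc = rng.normal(0, 0.03, 10000)
grid = np.column_stack([dv_mc, np.full(10000, 1.0), np.full(10000, 75.0)])
leak = np.exp(net.predict(grid))
print(f"mean = {leak.mean():.3f}, std = {leak.std():.3f} (arbitrary units)")
```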

Relevance: 90.00%

Abstract:

At medium to high frequencies the dynamic response of a built-up engineering system, such as an automobile, can be sensitive to small random manufacturing imperfections. Ideally the statistics of the system response in the presence of these uncertainties should be computed at the design stage, but in practice this is an extremely difficult task. This paper briefly reviews the methods available for the analysis of systems with uncertainty, and then focuses on two "non-parametric" methods: statistical energy analysis (SEA) and the hybrid method. The main governing equations are presented, and a number of example applications are considered, ranging from academic benchmark studies to industrial design studies.
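As a minimal illustration of the SEA governing equations, the steady-state power balance for two coupled subsystems, P_i = omega[(eta_i + eta_ij)E_i - eta_ji E_j], solved for the ensemble-average subsystem energies; all loss factors and the input power are invented:

```python
import numpy as np

# Two-subsystem SEA power balance at frequency omega:
#   P1 = omega * [ (eta1 + eta12) * E1 - eta21 * E2 ]
#   P2 = omega * [ (eta2 + eta21) * E2 - eta12 * E1 ]
omega = 2 * np.pi * 1000.0          # analysis frequency (rad/s)
eta1, eta2 = 0.02, 0.01             # damping loss factors (illustrative)
eta12, eta21 = 0.005, 0.003         # coupling loss factors (illustrative)
P = np.array([1.0, 0.0])            # input power (W): subsystem 1 driven

L = omega * np.array([[eta1 + eta12, -eta21],
                      [-eta12,       eta2 + eta21]])
E = np.linalg.solve(L, P)           # ensemble-average subsystem energies
print(f"E1 = {E[0]:.3e} J, E2 = {E[1]:.3e} J")
```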