917 results for Statistical Robustness
Abstract:
Statistical tests of Load-Unload Response Ratio (LURR) signals are carried out to verify the statistical robustness of previous studies using the Lattice Solid Model (MORA et al., 2002b). In each case, 24 groups of samples with the same macroscopic parameters (tidal perturbation amplitude A, period T, and tectonic loading rate k) but different particle arrangements are employed. Results of uni-axial compression experiments show that before the normalized time of catastrophic failure, the ensemble-average LURR value rises significantly, in agreement with observations of high LURR prior to large earthquakes. In shearing tests, two parameters are found to control the correlation between earthquake occurrence and tidal stress. The first, A/(kT), controls the phase shift between the peak seismicity rate and the peak amplitude of the perturbation stress; as this parameter increases, the phase shift decreases. The second, AT/k, controls the height of the probability density function (Pdf) of modeled seismicity; as this parameter increases, the Pdf becomes sharper and narrower, indicating strong triggering. Statistical studies of LURR signals in shearing tests also suggest that, except in strong-triggering cases where LURR cannot be calculated because of poor data in unloading cycles, larger events are more likely than smaller ones to occur in high-LURR periods, supporting the LURR hypothesis.
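As a rough illustration of the quantity being tested, the following Python sketch computes a LURR value under the common definition as the ratio of seismic response during loading to that during unloading, assuming a perturbed stress history sigma(t) = k*t + A*sin(2*pi*t/T) and Benioff strain as the response measure; the stress form, the response measure, and all names are illustrative assumptions rather than the procedure used in the study.

# Minimal LURR sketch under the assumptions stated above.
import numpy as np

def stress_rate(t, A, T, k):
    """Time derivative of the assumed stress sigma(t) = k*t + A*sin(2*pi*t/T)."""
    return k + A * (2.0 * np.pi / T) * np.cos(2.0 * np.pi * t / T)

def lurr(event_times, magnitudes, A, T, k):
    """Ratio of response (Benioff strain) in loading vs. unloading cycles."""
    # Benioff strain ~ square root of radiated energy; log10 E ~ 1.5*M + const.
    benioff = np.sqrt(10.0 ** (1.5 * np.asarray(magnitudes)))
    loading = stress_rate(np.asarray(event_times), A, T, k) > 0.0
    unload_sum = benioff[~loading].sum()
    if unload_sum == 0.0:          # no events in unloading cycles
        return np.nan
    return benioff[loading].sum() / unload_sum

# Example with synthetic events: LURR > 1 indicates a stronger response to loading.
rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0.0, 10.0, 200))
mags = rng.uniform(1.0, 4.0, 200)
print(lurr(times, mags, A=0.1, T=1.0, k=0.05))

The degenerate case in which no events fall in unloading cycles corresponds to the strong-triggering situation noted above, where LURR cannot be calculated.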
Using life strategies to explore the vulnerability of ecosystem services to invasion by alien plants
Abstract:
Invasive plants can have widely different effects on ecosystem functioning and on the provision of ecosystem services, from strongly deleterious impacts to positive effects. The nature and intensity of such effects depend not only on the service and ecosystem being considered, but also on features of the invaders' life strategies that influence their invasiveness as well as their influence on key processes of the receiving ecosystems. To address the combined effect of these various factors, we developed a robust and efficient methodological framework for identifying areas of possible conflict between ecosystem services and invasive alien plants, considering interactions between landscape invasibility and species invasiveness. Our framework combines the statistical robustness of multi-model inference, efficient techniques to map ecosystem services, and life strategies as a functional link between invasion, functional changes, and the potential provision of services by invaded ecosystems. The framework was applied to a test region in Portugal, for which we successfully predicted the current patterns of plant invasion, of ecosystem service provision, and finally of probable conflict (expressing concern for negative impacts, and value for positive impacts on services) between alien species richness (total and per plant life strategy) and the potential provision of selected services. Potential conflicts were identified for all combinations of plant strategy and ecosystem service, with an emphasis on those concerning carbon sequestration, water regulation, and wood production. Lower levels of conflict were obtained between invasive plant strategies and the habitat-for-biodiversity supporting service. The added value of the proposed framework in the context of landscape management and planning is discussed from the perspective of anticipating conflicts, mitigating negative impacts, and enhancing the positive effects of plant invasions on ecosystems and their services.
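As a loose illustration of the framework's final overlay step (not the authors' implementation), the sketch below combines a predicted alien-richness surface with a predicted service-provision surface into a per-cell conflict index; the normalisation and the product-based index are assumptions made purely for the example.

# Hypothetical conflict-mapping overlay; array names and index are assumptions.
import numpy as np

def conflict_map(predicted_richness, predicted_service, impact_sign=-1):
    """Grid-cell conflict index: high invasion overlapping high provision.

    impact_sign = -1 expresses concern for negative impacts on the service;
    impact_sign = +1 expresses potential value of positive impacts.
    """
    r = (predicted_richness - predicted_richness.min()) / np.ptp(predicted_richness)
    s = (predicted_service - predicted_service.min()) / np.ptp(predicted_service)
    return impact_sign * r * s   # in [-1, 0] or [0, 1] per cell

# Toy 3x3 landscape: conflict is strongest where both layers are high.
richness = np.array([[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]])
service  = np.array([[8., 7., 6.], [5., 4., 3.], [2., 1., 0.]])
print(conflict_map(richness, service))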
Abstract:
Gold ore processing plants increasingly seek low-cost production and maximization of financial returns. Technological characterization is part of a multidisciplinary approach that adds knowledge, optimization alternatives, and reductions in operating costs. As a tool within technological characterization, automated image analysis plays an important role in the mineral sector, mainly because of the speed of the analyses, their statistical robustness, and the reliability of the results. The technique can be applied to images acquired with a scanning electron microscope, combined with chemical microanalyses, and is used at several stages of a mining enterprise. This study aims at the technological characterization of gold ore from the Morro do Ouro Mine, Minas Gerais, using automated image analysis by MLA on a set of 88 samples. It was possible to identify that 90% of the gold lies in the size fraction above 0.020 mm; quartz and mica represent about 80% of the total ore mass; the sulfides have equivalent circle diameters between 80 and 100 µm and are represented by pyrite and arsenopyrite, with subordinate pyrrhotite, chalcopyrite, sphalerite, and galena. It was also observed that the gold is mostly associated with pyrite and arsenopyrite and that, as the arsenic grade increases, so does the share of gold associated with arsenopyrite. The medians of the gold grain size distributions have an average value of 19 µm. The composition of the gold grains proved quite diverse, on average 77% gold and 23% silver. For material below 0.50 mm, a significant share of the gold grain perimeter is exposed (73% on average); included gold (21% of all gold grains) is associated with pyrite and arsenopyrite, and in 14 of the 88 samples it can exceed 40% of the total contained gold. Automated image analysis proved very efficient at defining these particular characteristics, objectively supporting mine planning and mineral processing work.
Abstract:
* The work is supported by RFBR, grant 04-01-00858-a
Abstract:
* The work was supported by RFBR, grants 07-01-00331-a and 08-01-00944-a
Abstract:
PURPOSE: Ocular anatomy and radiation-associated toxicities pose unique challenges for external beam radiation therapy. For treatment planning, precise modeling of the organs at risk and of the tumor volume is crucial. Development of a precise eye model and automatic adaptation of this model to patients' anatomy remain problematic because of organ shape variability. This work introduces the application of a 3-dimensional (3D) statistical shape model as a novel method for precise eye modeling for external beam radiation therapy of intraocular tumors. METHODS AND MATERIALS: Manual and automatic segmentations were compared for 17 patients, based on head computed tomography (CT) volume scans. A 3D statistical shape model of the cornea, lens, and sclera, as well as of the optic disc position, was developed. Furthermore, an active shape model was built to enable automatic fitting of the eye model to CT slice stacks. Cross-validation was performed with leave-one-out tests over all training shapes, measuring Dice coefficients and mean segmentation errors between the automatic segmentation and manual segmentation by an expert. RESULTS: Cross-validation revealed a Dice similarity of 95% ± 2% for the sclera and cornea and 91% ± 2% for the lens. The overall mean segmentation error was 0.3 ± 0.1 mm. Average segmentation time was 14 ± 2 s on a standard personal computer. CONCLUSIONS: Our results show that the presented solution outperforms state-of-the-art methods in terms of accuracy, reliability, and robustness. Moreover, the eye model shape and its variability are learned from a training set rather than imposed by shape assumptions (eg, a spherical or elliptical model). The model therefore appears capable of modeling nonspherically and nonelliptically shaped eyes.
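A minimal Python sketch of the two ingredients named here, a PCA-based statistical shape model built from aligned training shapes and the Dice coefficient used in the leave-one-out validation; landmark alignment, the active shape model search, and all variable names are simplified assumptions.

# PCA shape model and Dice coefficient; toy data, illustrative names only.
import numpy as np

def build_shape_model(training_shapes, n_modes=5):
    """training_shapes: (n_samples, n_landmarks*3) aligned landmark vectors."""
    X = np.asarray(training_shapes, dtype=float)
    mean_shape = X.mean(axis=0)
    # Principal modes of variation from the centred training shapes.
    _, s, vt = np.linalg.svd(X - mean_shape, full_matrices=False)
    return mean_shape, vt[:n_modes], s[:n_modes]

def synthesize(mean_shape, modes, weights):
    """New shape instance = mean + weighted sum of variation modes."""
    return mean_shape + np.asarray(weights) @ modes

def dice(mask_a, mask_b):
    """Dice similarity between two binary segmentation masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy example: 10 noisy copies of a 4-landmark template in 3-D.
rng = np.random.default_rng(1)
template = rng.normal(size=12)
shapes = template + 0.05 * rng.normal(size=(10, 12))
mean_shape, modes, _ = build_shape_model(shapes, n_modes=2)
print(synthesize(mean_shape, modes, [0.1, -0.1]).shape)   # (12,)
print(dice(np.ones((4, 4)), np.eye(4, dtype=bool)))       # 0.4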
Abstract:
BACKGROUND: PCR has the potential to detect and precisely quantify specific DNA sequences, but it is not yet often used as a fully quantitative method. A number of data collection and processing strategies have been described for the implementation of quantitative PCR. However, they can be experimentally cumbersome, their relative performances have not been evaluated systematically, and they often remain poorly validated statistically and/or experimentally. In this study, we evaluated the performance of known methods, and compared them with newly developed data processing strategies in terms of resolution, precision, and robustness. RESULTS: Our results indicate that simple methods that do not rely on the estimation of the efficiency of the PCR amplification may provide reproducible and sensitive data, but that they do not quantify DNA with precision. Other evaluated methods based on sigmoidal or exponential curve fitting were generally of both poor resolution and poor precision. A statistical analysis of the parameters that influence efficiency indicated that it depends mostly on the selected amplicon and, to a lesser extent, on the particular biological sample analyzed. We therefore devised various strategies based on individual or averaged efficiency values, which were used to assess the regulated expression of several genes in response to a growth factor. CONCLUSION: Overall, qPCR data analysis methods differ significantly in their performance, and this analysis identifies methods that provide DNA quantification estimates of high precision, robustness, and reliability. These methods allow reliable estimation of relative expression ratios of two-fold or higher, and our analysis provides an estimate of the number of biological samples that have to be analyzed to achieve a given precision.
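For orientation, the sketch below implements the generic efficiency-corrected relative expression ratio (a Pfaffl-type formula) that is representative of the efficiency-based strategies compared here; it is not necessarily the paper's own formulation, and the example values are invented.

# Efficiency-corrected relative quantification; E = amplification efficiency
# per cycle (2.0 = perfect doubling), Ct = threshold cycle.
def relative_expression_ratio(e_target, ct_target_control, ct_target_sample,
                              e_ref, ct_ref_control, ct_ref_sample):
    """Expression of a target gene in sample vs. control, normalised to a
    reference gene, with per-amplicon efficiency correction."""
    target_fold = e_target ** (ct_target_control - ct_target_sample)
    ref_fold = e_ref ** (ct_ref_control - ct_ref_sample)
    return target_fold / ref_fold

# Example: target amplifies 3 cycles earlier in the treated sample while the
# reference gene is unchanged -> roughly an 8-fold induction at E close to 2.
print(relative_expression_ratio(1.95, 25.0, 22.0, 2.0, 20.0, 20.0))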
Abstract:
This paper presents a validation study on statistical nonsupervised brain tissue classification techniques in magnetic resonance (MR) images. Several image models assuming different hypotheses regarding the intensity distribution model, the spatial model and the number of classes are assessed. The methods are tested on simulated data for which the classification ground truth is known. Different noise and intensity nonuniformities are added to simulate real imaging conditions. No enhancement of the image quality is considered either before or during the classification process. This way, the accuracy of the methods and their robustness against image artifacts are tested. Classification is also performed on real data where a quantitative validation compares the methods' results with an estimated ground truth from manual segmentations by experts. Validity of the various classification methods in the labeling of the image as well as in the tissue volume is estimated with different local and global measures. Results demonstrate that methods relying on both intensity and spatial information are more robust to noise and field inhomogeneities. We also demonstrate that partial volume is not perfectly modeled, even though methods that account for mixture classes outperform methods that only consider pure Gaussian classes. Finally, we show that simulated data results can also be extended to real data.
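As a point of reference, the following sketch implements the simplest model family compared in such studies: an unsupervised, intensity-only Gaussian mixture that labels each voxel by its most probable class. The spatial (Markov random field) regularisation and partial-volume classes discussed above are deliberately omitted, and the toy intensities are assumptions.

# Intensity-only Gaussian mixture baseline for unsupervised tissue labelling.
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_intensities(intensities, n_classes=3, seed=0):
    """Unsupervised voxel labelling from 1-D intensity values."""
    x = np.asarray(intensities, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_classes, random_state=seed).fit(x)
    return gmm.predict(x), gmm.means_.ravel()

# Toy "brain": three intensity populations (e.g. CSF, grey, white matter).
rng = np.random.default_rng(2)
voxels = np.concatenate([rng.normal(30, 5, 500),
                         rng.normal(80, 6, 500),
                         rng.normal(120, 5, 500)])
labels, means = classify_intensities(voxels)
print(np.sort(means))   # approximately [30, 80, 120]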
Abstract:
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. In order to obtain information about each possible data division, we carried out a conditional Monte Carlo simulation with 100,000 samples for each systematically chosen triplet. Robustness and power are studied under several experimental conditions: different autocorrelation levels and different effect sizes, as well as different phase lengths determined by the points of change. Type I error rates were distorted by the presence of autocorrelation for the majority of data divisions. Satisfactory Type II error rates were obtained only for large treatment effects. The relationship between the lengths of the four phases appeared to be an important factor for both the robustness and the power of the randomization test.
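The logic of the randomization test can be sketched as follows: the statistic observed for the actual change points is compared with its distribution over all admissible change-point triplets. The minimum phase length, the B-minus-A mean difference statistic, and the exhaustive (rather than Monte Carlo) enumeration are simplifying assumptions for the example.

# Randomization test over ABAB change-point triplets (illustrative settings).
import numpy as np
from itertools import combinations

def ab_statistic(data, change_points):
    c1, c2, c3 = change_points
    a_phase = np.concatenate([data[:c1], data[c2:c3]])
    b_phase = np.concatenate([data[c1:c2], data[c3:]])
    return b_phase.mean() - a_phase.mean()

def randomization_test(data, observed_change_points, min_len=3):
    n = len(data)
    # All triplets leaving every phase with at least min_len observations.
    triplets = [t for t in combinations(range(min_len, n - min_len + 1), 3)
                if t[1] - t[0] >= min_len and t[2] - t[1] >= min_len]
    null = np.array([ab_statistic(data, t) for t in triplets])
    observed = ab_statistic(data, observed_change_points)
    # One-sided p-value: proportion of divisions at least as extreme.
    return observed, np.mean(null >= observed)

# Toy series with a treatment effect of +2 in the B phases.
rng = np.random.default_rng(3)
y = rng.normal(0, 1, 24)
y[6:12] += 2.0
y[18:] += 2.0
print(randomization_test(y, (6, 12, 18)))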
Abstract:
We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses for which test procedures are commonly proposed are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions (a condition not satisfied by standard Wald-type methods based on standard errors), and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.
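As one concrete example of a proper pivotal function of the kind advocated here, the sketch below computes the Anderson-Rubin statistic for testing a hypothesized coefficient value with instruments; unlike Wald statistics built on standard errors, its null distribution does not deteriorate when instruments are weak. The simple i.i.d. setup and variable names are assumptions for the illustration.

# Anderson-Rubin test of H0: beta = beta0 with instruments Z (toy setup).
import numpy as np
from scipy import stats

def anderson_rubin_test(y, X, Z, beta0):
    """AR statistic and F-based p-value for H0: beta = beta0."""
    n, kz = Z.shape
    u0 = y - X @ beta0                                 # residuals imposing the null
    Pu = Z @ np.linalg.lstsq(Z, u0, rcond=None)[0]     # projection of u0 onto Z
    ssr_proj = Pu @ Pu
    ssr_resid = u0 @ u0 - ssr_proj
    ar = (ssr_proj / kz) / (ssr_resid / (n - kz))
    return ar, stats.f.sf(ar, kz, n - kz)

# Toy data: one endogenous regressor, two weak-ish instruments.
rng = np.random.default_rng(4)
n = 200
Z = rng.normal(size=(n, 2))
v = rng.normal(size=n)
x = Z @ np.array([0.2, 0.1]) + v               # first stage
y = 1.5 * x + 0.8 * v + rng.normal(size=n)     # endogeneity through v
print(anderson_rubin_test(y, x.reshape(-1, 1), Z, np.array([1.5])))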
Abstract:
Background: We report an analysis of a protein network of functionally linked proteins, identified from a phylogenetic statistical analysis of complete eukaryotic genomes. Phylogenetic methods identify pairs of proteins that co-evolve on a phylogenetic tree, and have been shown to have a high probability of correctly identifying known functional links. Results: The eukaryotic correlated-evolution network we derive displays the familiar power-law scaling of connectivity. We introduce the use of explicit phylogenetic methods to reconstruct the ancestral presence or absence of proteins at the interior nodes of a phylogeny of eukaryote species. We find that the connectivity distribution of proteins at the point they arise on the tree and join the network follows a power law, as does the connectivity distribution of proteins at the time they are lost from the network. Proteins resident in the network acquire connections over time, but we find no evidence that 'preferential attachment' - the phenomenon of newly acquired connections being more likely to be made to proteins with large numbers of connections - influences the network structure. We derive a 'variable rate of attachment' model in which proteins vary in their propensity to form network interactions independently of how many connections they have or of the total number of connections in the network, and show how this model can produce apparent power-law scaling without preferential attachment. Conclusion: A few simple rules can explain the topological structure of and evolutionary changes to protein-interaction networks: most change is concentrated in satellite proteins of low connectivity and small phenotypic effect, and proteins differ in their propensity to form attachments. Given these rules of assembly, power-law-scaled networks emerge naturally from simple principles of selection, yielding protein interaction networks that retain a high degree of robustness on short time scales and evolvability on longer evolutionary time scales.
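A toy simulation of the contrast drawn here, growth with a fixed node-specific attachment propensity rather than degree-proportional (preferential) attachment, is sketched below; the propensity distribution and network size are arbitrary choices for the example.

# Variable-rate-of-attachment growth: attachment probability depends on a
# fixed per-node propensity, never on current degree.
import numpy as np

def grow_network(n_nodes=2000, links_per_node=2, seed=5):
    rng = np.random.default_rng(seed)
    # Heterogeneous, degree-independent propensities (heavy-tailed).
    propensity = rng.lognormal(mean=0.0, sigma=1.0, size=n_nodes)
    degree = np.zeros(n_nodes, dtype=int)
    for new in range(1, n_nodes):
        weights = propensity[:new] / propensity[:new].sum()
        targets = rng.choice(new, size=min(links_per_node, new),
                             replace=False, p=weights)
        degree[targets] += 1
        degree[new] += len(targets)
    return degree

deg = grow_network()
# A heavy-tailed propensity distribution alone can yield a heavy-tailed
# degree distribution, without any preferential attachment.
print(np.percentile(deg, [50, 90, 99, 100]))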
Abstract:
We address the problem of automatically identifying and restoring damaged and contaminated images. We suggest a novel approach based on a semi-parametric model. This has two components, a parametric component describing known physical characteristics and a more flexible non-parametric component. The latter avoids the need for a detailed model for the sensor, which is often costly to produce and lacking in robustness. We assess our approach using an analysis of electroencephalographic images contaminated by eye-blink artefacts and highly damaged photographs contaminated by non-uniform lighting. These experiments show that our approach provides an effective solution to problems of this type.
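A minimal sketch of the semi-parametric idea, assuming the observed signal is a scaled known artefact template (the parametric, physically motivated part) plus a slowly varying component absorbed non-parametrically by a running-mean smoother; the template, smoother, and toy signal are assumptions, not the authors' sensor model.

# Semi-parametric decomposition: known template fitted by least squares,
# remaining trend captured non-parametrically.
import numpy as np

def running_mean(x, window=25):
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def semi_parametric_fit(observed, template, window=25):
    """Estimate the amplitude of the known artefact template, then absorb the
    remaining smooth trend non-parametrically."""
    amplitude = (observed @ template) / (template @ template)  # least squares
    residual = observed - amplitude * template
    smooth_part = running_mean(residual, window)
    restored = residual - smooth_part          # cleaned, detrended signal
    return amplitude, restored

# Toy example: a clean oscillation corrupted by a blink-like template + drift.
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 10 * t)
blink = np.exp(-((t - 0.5) ** 2) / 0.001)
observed = clean + 3.0 * blink + 0.2 * t
amp, restored = semi_parametric_fit(observed, blink)
print(round(amp, 2))   # roughly recovers the injected artefact amplitude of 3.0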
Abstract:
Predators and prey often form species networks with asymmetric patterns of interaction. We study the dynamics of a four-species network consisting of two weakly connected predator-prey pairs. We focus our analysis on the effects of the cross interaction between the predator of the first pair and the prey of the second pair. This is an example in which the predator overlap, that is, the proportion of predators that a given prey shares with other prey, is not uniform across the network due to asymmetries in the patterns of interaction. We explore the behavior of the system under different interaction strengths and study the dynamics of survival and extinction. In particular, we consider situations in which the four species have initial populations lower than their long-term equilibrium, simulating catastrophic situations in which their abundances are reduced by human action or environmental change. We show that, under these reduced initial conditions, and depending on the strength of the cross interaction, the populations tend to oscillate before re-equilibrating, disturbing the community equilibrium and sometimes reaching values that are only a small fraction of the equilibrium population, potentially leading to their extinction. We predict that, contrary to one's intuition, the most likely scenario is the extinction of the least-predated prey. (C) 2010 Elsevier B.V. All rights reserved.
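A sketch of such a network (not the authors' exact model): two Lotka-Volterra predator-prey pairs coupled only through a weak cross interaction of strength eps between the predator of the first pair and the prey of the second, integrated from initial populations well below equilibrium; all rate constants are illustrative.

# Two weakly coupled Lotka-Volterra pairs with cross interaction eps.
import numpy as np
from scipy.integrate import solve_ivp

def four_species(t, y, r=1.0, a=0.5, d=0.8, b=0.4, eps=0.05):
    n1, p1, n2, p2 = y
    dn1 = r * n1 - a * n1 * p1
    dp1 = -d * p1 + b * n1 * p1 + eps * n2 * p1   # cross: predator 1 also consumes prey 2
    dn2 = r * n2 - a * n2 * p2 - eps * n2 * p1
    dp2 = -d * p2 + b * n2 * p2
    return [dn1, dp1, dn2, dp2]

# Start all species at a small fraction of their uncoupled equilibrium
# (n* = d/b = 2.0, p* = r/a = 2.0), mimicking a post-catastrophe state.
y0 = [0.2, 0.2, 0.2, 0.2]
sol = solve_ivp(four_species, (0.0, 100.0), y0, dense_output=True, rtol=1e-8)
# Large transient oscillations precede re-equilibration; the minima show how
# close each population comes to extinction.
print(sol.y.min(axis=1))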
Abstract:
A number of recent works have introduced statistical methods for detecting genetic loci that affect phenotypic variability, which we refer to as variability-controlling quantitative trait loci (vQTL). These are genetic variants whose allelic state predicts how much phenotype values will vary about their expected means. Such loci are of great potential interest in both human and non-human genetic studies, one reason being that a detected vQTL could represent a previously undetected interaction with other genes or environmental factors. The simultaneous publication of these new methods in different journals has in many cases precluded opportunity for comparison. We survey some of these methods, the respective trade-offs they imply, and the connections between them. The methods fall into three main groups: classical non-parametric, fully parametric, and semi-parametric two-stage approximations. Choosing between alternatives involves balancing the need for robustness, flexibility, and speed. For each method, we identify important assumptions and limitations, including those of practical importance, such as their scope for including covariates and random effects. We show in simulations that both parametric methods and their semi-parametric approximations can give elevated false positive rates when they ignore mean-variance relationships intrinsic to the data generation process. We conclude that choice of method depends on the trait distribution, the need to include non-genetic covariates, and the population size and structure, coupled with a critical evaluation of how these fit with the assumptions of the statistical model.
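As an example of the classical non-parametric group mentioned here, the sketch below scans markers with the Brown-Forsythe (median-centred Levene) test for differences in phenotype variability across genotype classes; the genotype coding and effect sizes are toy assumptions.

# Brown-Forsythe scan for variance-controlling loci (toy data).
import numpy as np
from scipy import stats

def vqtl_scan(genotypes, phenotype):
    """One Brown-Forsythe p-value per marker (columns of `genotypes`)."""
    pvals = []
    for g in genotypes.T:
        groups = [phenotype[g == k] for k in np.unique(g)]
        pvals.append(stats.levene(*groups, center="median").pvalue)
    return np.array(pvals)

# Toy data: marker 0 changes the phenotype *variance*, marker 1 does nothing.
rng = np.random.default_rng(6)
n = 600
geno = rng.integers(0, 3, size=(n, 2))          # genotypes coded 0/1/2
sd = np.where(geno[:, 0] == 2, 3.0, 1.0)        # allele 2 inflates the variance
pheno = rng.normal(0.0, sd)
print(vqtl_scan(geno, pheno))                    # small p-value for marker 0 only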