950 results for Statistical Model


Relevance:

70.00%

Publisher:

Abstract:

A number of recent works have introduced statistical methods for detecting genetic loci that affect phenotypic variability, which we refer to as variability-controlling quantitative trait loci (vQTL). These are genetic variants whose allelic state predicts how much phenotype values will vary about their expected means. Such loci are of great potential interest in both human and non-human genetic studies, one reason being that a detected vQTL could represent a previously undetected interaction with other genes or environmental factors. The simultaneous publication of these new methods in different journals has in many cases precluded opportunity for comparison. We survey some of these methods, the respective trade-offs they imply, and the connections between them. The methods fall into three main groups: classical non-parametric, fully parametric, and semi-parametric two-stage approximations. Choosing between alternatives involves balancing the need for robustness, flexibility, and speed. For each method, we identify important assumptions and limitations, including those of practical importance, such as their scope for including covariates and random effects. We show in simulations that both parametric methods and their semi-parametric approximations can give elevated false positive rates when they ignore mean-variance relationships intrinsic to the data generation process. We conclude that choice of method depends on the trait distribution, the need to include non-genetic covariates, and the population size and structure, coupled with a critical evaluation of how these fit with the assumptions of the statistical model.
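As a hedged illustration of the simplest non-parametric route to vQTL detection mentioned above, the Python sketch below runs a Brown-Forsythe (median-centred Levene) test of variance heterogeneity across genotype groups at each locus; the simulated genotypes, the affected locus and all settings are hypothetical, and this is not the code of any of the surveyed methods.

```python
# Illustrative vQTL screen: test for unequal phenotype dispersion across genotype groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 600
genotypes = rng.integers(0, 3, size=(n, 50))                    # 50 loci coded 0/1/2 (hypothetical)
phenotype = rng.normal(size=n)
phenotype += 0.4 * (genotypes[:, 7] == 2) * rng.normal(size=n)  # locus 7 inflates the variance

pvals = []
for j in range(genotypes.shape[1]):
    groups = [phenotype[genotypes[:, j] == g] for g in (0, 1, 2)]
    groups = [g for g in groups if len(g) > 1]
    _, p = stats.levene(*groups, center="median")               # Brown-Forsythe variant
    pvals.append(p)

print("strongest vQTL candidate: locus", int(np.argmin(pvals)), "p =", min(pvals))
```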

Relevance:

70.00%

Publisher:

Abstract:

This paper reviews the application of statistical models to planning and evaluating cancer screening programmes. Models used to analyse screening strategies can be classified as either surface models, which consider only those events which can be directly observed such as disease incidence, prevalence or mortality, or deep models, which incorporate hypotheses about the disease process that generates the observed events. This paper focuses on the latter type. These can be further classified as analytic models, which use a model of the disease to derive direct estimates of characteristics of the screening procedure and its consequent benefits, and simulation models, which use the disease model to simulate the course of the disease in a hypothetical population with and without screening and derive measures of the benefit of screening from the simulation outcomes. The main approaches to each type of model are described and an overview given of their historical development and strengths and weaknesses. A brief review of fitting and validating such models is given and finally a discussion of the current state of, and likely future trends in, cancer screening models is presented.
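To make the distinction concrete, the following is a minimal Python sketch of a "deep" simulation model in the sense used above: preclinical sojourn times are simulated in a hypothetical cohort, screens occur at a fixed interval with imperfect sensitivity, and the benefit measure is the fraction of cases detected before clinical surfacing. All parameter values are invented and the sketch is not taken from any model reviewed in the paper.

```python
# Toy micro-simulation of a screening programme in a hypothetical population.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
onset = rng.uniform(0, 10, n)            # time of preclinical onset (years, hypothetical)
sojourn = rng.exponential(2.0, n)        # mean preclinical sojourn time of 2 years
clinical = onset + sojourn               # time of clinical diagnosis without screening

interval, sensitivity = 2.0, 0.8
screen_times = np.arange(0, 12, interval)

detected_early = np.zeros(n, dtype=bool)
for t in screen_times:
    in_preclinical = (onset <= t) & (t < clinical) & ~detected_early
    detected_early |= in_preclinical & (rng.random(n) < sensitivity)

print("fraction of cases screen-detected:", detected_early.mean())
```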

Relevance:

70.00%

Publisher:

Abstract:

Developing water quality guidelines for Antarctic marine environments requires understanding the sensitivity of local biota to contaminant exposure. In previous standard toxicity tests, Antarctic invertebrates have responded more slowly to contaminants than temperate and tropical species. Consequently, test methods that take into account the environmental conditions and biological characteristics of cold-climate species need to be developed. This study investigated the effects of five metals on the survival of a common Antarctic amphipod, Orchomenella pinguides. Mortality in response to metal exposure was assessed at multiple observation times over the 30-day exposure period. Traditional toxicity tests with quantal data sets are analysed using methods such as maximum likelihood regression (probit analysis) and Spearman–Kärber, which treat individual time-period endpoints independently. A new statistical model was developed to integrate the time-series concentration–response data obtained in this study. Grouped survival data were modelled using a generalized additive mixed model (GAMM), which incorporates all the data obtained from the multiple observation times to derive time-integrated point estimates. The sensitivity of the amphipod, O. pinguides, to metals increased with increasing exposure time. Response times varied between metals, with amphipods responding faster to copper than to cadmium, lead or zinc. As indicated by the 30-day lethal concentration (LC50) estimates, copper was the most toxic metal (31 µg/L), followed by cadmium (168 µg/L), lead (256 µg/L) and zinc (822 µg/L). Nickel exposure (up to 1.12 mg/L) did not affect amphipod survival. Using longer exposure durations together with the GAMM provides an improved methodology for assessing the sensitivities of slow-responding Antarctic marine invertebrates to contaminants.
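The GAMM used in the study is not reproduced here; instead, the following simplified Python sketch fits a binomial probit GLM of grouped mortality against log-concentration and observation day and inverts it for a day-30 LC50, just to show how time-integrated concentration-response data can yield a point estimate. The data frame and coefficients are hypothetical.

```python
# Simplified concentration-response fit and day-30 LC50 (illustrative data only).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "conc":  [10, 30, 100, 300, 1000] * 4,                       # µg/L (hypothetical)
    "day":   [7] * 5 + [14] * 5 + [21] * 5 + [30] * 5,
    "dead":  [0, 1, 3, 6, 9, 1, 2, 5, 8, 10, 1, 3, 6, 9, 10, 2, 4, 7, 10, 10],
    "total": [10] * 20,
})
X = sm.add_constant(np.column_stack([np.log10(df["conc"]), df["day"]]))
y = np.column_stack([df["dead"], df["total"] - df["dead"]])
fit = sm.GLM(y, X, family=sm.families.Binomial(sm.families.links.Probit())).fit()

b0, b_logc, b_day = fit.params
lc50_day30 = 10 ** (-(b0 + b_day * 30) / b_logc)                 # probit is 0 at 50% mortality
print(f"day-30 LC50 ≈ {lc50_day30:.0f} µg/L")
```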

Relevance:

70.00%

Publisher:

Abstract:

Statistical time series methods have proven to be a promising technique in structural health monitoring, since they provide a direct form of data analysis and eliminate the requirement for domain transformation. Recent research in structural health monitoring presents a number of statistical models that have been successfully used to construct quantified models of vibration response signals. Although the majority of these studies present viable results, the aspects of practical implementation, statistical model construction and decision-making procedures are often vaguely defined or omitted from the presented work. In this article, a comprehensive methodology is developed, which utilizes an auto-regressive moving average with exogenous input (ARMAX) model to create quantified model estimates of experimentally acquired response signals. An iterative self-fitting algorithm is proposed to construct and fit the ARMAX model, which is capable of integrally finding an optimum set of ARMAX model parameters. After creating a dataset of quantified response signals, an unlabelled response signal can be identified according to the closest fit available in the dataset. A unique averaging method is proposed and implemented for multi-sensor data fusion to decrease the margin of error across sensors, thus increasing the reliability of global damage identification. To demonstrate the effectiveness of the developed methodology, a steel frame structure subjected to various bolt-connection damage scenarios is tested. Damage identification results from the experimental study suggest that the proposed methodology can be employed as an efficient and functional damage identification tool. © The Author(s) 2014.
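As a hedged sketch of the closest-fit idea (not the iterative self-fitting algorithm of the article), the Python example below fits an ARMAX model to each baseline response signal with statsmodels and labels an unseen signal by the baseline model that leaves the smallest residual variance; the signals, model orders and distance rule are hypothetical.

```python
# Closest-fit classification of a response signal against a database of ARMAX models.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
t = np.arange(1024)
excitation = rng.normal(size=t.size)                     # exogenous input signal

def response(shift):                                     # toy structural response
    kernel = np.exp(-0.01 * t) * np.sin((0.2 + shift) * t)
    return np.convolve(excitation, kernel, mode="same")

baselines = {"healthy": response(0.00), "damage_A": response(0.03)}
models = {name: ARIMA(sig, exog=excitation, order=(4, 0, 2)).fit()
          for name, sig in baselines.items()}

unknown = response(0.03) + 0.05 * rng.normal(size=t.size)
scores = {name: np.var(m.apply(unknown, exog=excitation).resid) for name, m in models.items()}
print("closest fit:", min(scores, key=scores.get), scores)
```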

Relevance:

70.00%

Publisher:

Abstract:

The US term structure of interest rates plays a central role in fixed-income analysis. For example, accurately estimating the US term structure is a crucial step for those interested in analyzing Brazilian Brady bonds such as IDUs, DCBs, FLIRBs, EIs, etc. In this work we present a statistical model to estimate the US term structure of interest rates. We address all the major issues that drove the implementation of the model, concentrating on important practical aspects such as computational efficiency, the robustness of the final implementation, and the statistical properties of the final model. Numerical examples are provided to illustrate the use of the model on a daily basis.
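The abstract does not specify the functional form used, so the sketch below fits a Nelson-Siegel curve, a common parametric choice for term structures, to hypothetical zero-coupon yields by nonlinear least squares, purely to illustrate the kind of daily estimation the report describes.

```python
# Nelson-Siegel fit to hypothetical yields (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(tau, b0, b1, b2, lam):
    x = tau / lam
    decay = (1 - np.exp(-x)) / x
    return b0 + b1 * decay + b2 * (decay - np.exp(-x))

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])              # years
yields = np.array([5.1, 5.2, 5.4, 5.7, 5.9, 6.1, 6.25, 6.4, 6.6, 6.65])    # percent (hypothetical)

params, _ = curve_fit(nelson_siegel, maturities, yields, p0=[6.5, -1.5, 1.0, 2.0])
print("fitted Nelson-Siegel parameters:", params)
print("fitted 10-year yield:", nelson_siegel(10.0, *params))
```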

Relevance:

70.00%

Publisher:

Abstract:

Atypical points in the data may result in meaningless efficient frontiers. This follows since portfolios constructed using classical estimates may reflect neither the usual nor the unusual days' patterns. On the other hand, portfolios constructed using robust approaches are able to capture only the dynamics of the usual days, which constitute the majority of business days. In this paper we propose a statistical model and a robust estimation procedure to obtain an efficient frontier which takes into account the behavior of both the usual days and most of the atypical days. We show, using real data and simulations, that portfolios constructed in this way require less frequent rebalancing and may yield higher expected returns for any risk level.
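As an illustration of why the choice of estimator matters (a sketch on invented data, not the estimation procedure proposed in the paper), the Python example below compares minimum-variance portfolio weights computed from the classical sample covariance and from a robust minimum covariance determinant estimate when a few atypical days are present.

```python
# Classical vs robust covariance in a minimum-variance portfolio (hypothetical returns).
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(3)
returns = rng.multivariate_normal(
    [0.0004, 0.0003, 0.0005],
    [[1.0e-4, 2.0e-5, 1.0e-5],
     [2.0e-5, 2.0e-4, 3.0e-5],
     [1.0e-5, 3.0e-5, 1.5e-4]], size=500)
returns[:10] *= 8                                   # a handful of extreme, atypical days

def min_var_weights(cov):
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

classical = np.cov(returns, rowvar=False)
robust = MinCovDet(random_state=0).fit(returns).covariance_
print("classical weights:", min_var_weights(classical))
print("robust weights:   ", min_var_weights(robust))
```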

Relevance:

70.00%

Publisher:

Abstract:

The aim of this study is to propose the implementation of a statistical model for computing volatility that is not widespread in the Brazilian literature, the local scale model (LSM), presenting its advantages and disadvantages relative to the models usually employed for risk measurement. The parameters are estimated from daily Ibovespa quotes for the period from January 2009 to December 2014, and the empirical accuracy of the models is assessed through out-of-sample tests comparing the VaR obtained for January through December 2014. Explanatory variables were introduced in an attempt to improve the models; the American counterpart of the Ibovespa, the Dow Jones index, was chosen because it exhibited properties such as high correlation, Granger causality and a significant log-likelihood ratio. One of the innovations of the local scale model is that it does not work directly with the variance but with its reciprocal, called the "precision" of the series, which follows a kind of multiplicative random walk. The LSM captured all the stylized facts of financial series, and the results favoured its use; the model is therefore an efficient and parsimonious specification for estimating and forecasting volatility, since it has only one parameter to be estimated, which represents a paradigm shift relative to conditional heteroskedasticity models.
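To illustrate the precision idea described above, the following is a minimal sketch of a generic one-parameter variance-discounting filter in which the reciprocal of the variance is smoothed by a single discount factor; the recursions, the discount value and the VaR check are hypothetical stand-ins and are not taken from the local scale model as implemented in the study.

```python
# One-parameter discount filter for the precision (illustrative, not the study's LSM code).
import numpy as np

def precision_discount_filter(returns, omega=0.95, a0=1.0, b0=1e-4):
    """Return one-step-ahead variance forecasts; omega is the single discount parameter."""
    a, b = a0, b0
    forecasts = []
    for r in returns:
        forecasts.append(b / a)              # predictive variance before observing r
        a = omega * a + 0.5                  # discounted shape update
        b = omega * b + 0.5 * r ** 2         # discounted scale update
    return np.array(forecasts)

rng = np.random.default_rng(4)
r = rng.standard_t(df=5, size=1500) * 0.01   # hypothetical daily returns
var_hat = precision_discount_filter(r)
var_99 = 2.326 * np.sqrt(var_hat)            # rough 99% normal-quantile VaR for comparison
print("99% VaR violation rate:", np.mean(-r > var_99))
```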

Relevance:

70.00%

Publisher:

Abstract:

An economic-statistical model is developed for variable parameters (VP) X̄ charts in which all design parameters vary adaptively, that is, each of the design parameters (sample size, sampling interval and control-limit width) varies as a function of the most recent process information. The cost function due to controlling the process quality through a VP X̄ chart is derived. During the optimization of the cost function, constraints are imposed on the expected times to signal when the process is in and out of control; in this way, required statistical properties can be assured. Through a numerical example, the proposed economic-statistical design approach for VP X̄ charts is compared to the economic design for VP X̄ charts and to the economic-statistical and economic designs for fixed parameters (FP) X̄ charts in terms of operating cost and expected times to signal. From this example, it is possible to assess the benefits provided by the proposed model. The effect of varying some input parameters on the optimal cost and on the optimal values of the design parameters is also analysed.
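The VP cost function itself is not reproduced here; instead, the sketch below illustrates the economic-statistical idea on a much simpler fixed-parameter X̄ chart: a simplified hourly cost is minimised over sample size, sampling interval and limit width, subject to constraints on the in-control and out-of-control average times to signal. All cost coefficients and constraint values are hypothetical.

```python
# Toy economic-statistical design of an X-bar chart (simplified, illustrative only).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

shift, rate = 1.0, 0.02                      # shift size (in sigma units) and shifts per hour
c_sample, c_false, c_ooc = 1.0, 100.0, 50.0  # hypothetical cost coefficients

def performance(n, h, k):
    arl0 = 1.0 / (2 * norm.sf(k))
    power = norm.sf(k - shift * np.sqrt(n)) + norm.cdf(-k - shift * np.sqrt(n))
    return arl0, 1.0 / power                 # in-control and out-of-control ARLs

def hourly_cost(x):
    n, h, k = x
    arl0, arl1 = performance(n, h, k)
    return c_sample * n / h + c_false / (arl0 * h) + c_ooc * rate * arl1 * h

constraints = [
    {"type": "ineq", "fun": lambda x: performance(*x)[0] * x[1] - 200.0},  # in-control ATS >= 200 h
    {"type": "ineq", "fun": lambda x: 2.0 - performance(*x)[1] * x[1]},    # out-of-control ATS <= 2 h
]
res = minimize(hourly_cost, x0=[4.0, 1.0, 3.0], constraints=constraints,
               bounds=[(1, 25), (0.1, 8), (1.5, 4)])   # n treated as continuous in this sketch
print("n, h, k ≈", np.round(res.x, 2), "| cost/hour ≈", round(res.fun, 2))
```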

Relevance:

70.00%

Publisher:

Abstract:

Questions: We assess gap size and shape distributions, two important descriptors of the forest disturbance regime, by asking: which statistical model best describes the gap size distribution; can simple geometric forms adequately describe gap shape; does gap size or shape vary with forest type, gap age or the method used for gap delimitation; and how similar are the studied forests to other tropical and temperate forests? Location: Southeastern Atlantic Forest, Brazil. Methods: Analysing over 150 gaps in two distinct forest types (seasonal and rain forests), a model selection framework was used to select appropriate probability distributions and functions to describe gap size and gap shape. The former was described using univariate probability distributions, whereas the latter was assessed based on the gap area-perimeter relationship. Comparisons of gap size and shape between sites, as well as between size and age classes, were then made based on the likelihood of models having different assumptions for the values of their parameters. Results: The log-normal distribution was the best descriptor of the gap size distribution, independently of forest type or gap delimitation method. Because gaps became more irregular as they increased in size, all geometric forms (triangle, rectangle and ellipse) were poor descriptors of gap shape. Only when small and large gaps (above 100 or 400 m², depending on the delimitation method) were treated separately did the rectangle and the isosceles triangle become accurate predictors of gap shape; ellipsoidal shapes remained poor descriptors. At both sites, gaps were at least 50% longer than they were wide, a finding with important implications for gap microclimate (e.g. the light entrance regime) and, consequently, for gap regeneration. Conclusions: In addition to more appropriate descriptions of gap size and shape, the model selection framework used here efficiently provided a means by which to compare the patterns of two different forest types. With this framework we were able to recommend the log-normal parameters μ and σ for future comparisons of gap size distributions, and to propose possible mechanisms related to random rates of gap expansion and closure. We also showed that gap shape was highly variable and that no single geometric form was able to predict the shape of all gaps; the ellipse in particular should no longer be used as a standard gap shape. © 2012 International Association for Vegetation Science.
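A minimal sketch of the model selection step described above: candidate size distributions are fitted to hypothetical gap areas by maximum likelihood and compared with AIC. The data and the candidate set are illustrative, not those of the study.

```python
# Compare candidate gap-size distributions by AIC (hypothetical gap areas).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
areas = rng.lognormal(mean=3.5, sigma=0.9, size=150)      # gap areas in m^2 (hypothetical)

candidates = {
    "log-normal":  (stats.lognorm,     dict(floc=0)),
    "exponential": (stats.expon,       dict(floc=0)),
    "gamma":       (stats.gamma,       dict(floc=0)),
    "Weibull":     (stats.weibull_min, dict(floc=0)),
}
for name, (dist, kwargs) in candidates.items():
    params = dist.fit(areas, **kwargs)
    loglik = np.sum(dist.logpdf(areas, *params))
    k = len(params) - 1                                    # location was fixed at zero
    print(f"{name:12s} AIC = {2 * k - 2 * loglik:8.1f}")
```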

Relevance:

70.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

70.00%

Publisher:

Abstract:

Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
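To make the layers concrete, the following is a minimal sketch assuming a hypothetical population-monitoring example: a parameter model for site-level growth rates, a process model for true log-abundance with process error, and a data model adding measurement error. It only simulates the hierarchy and shows how a naive per-site analysis overstates the spread of growth rates; it is not drawn from the article.

```python
# Three-layer hierarchical data-generating process (hypothetical ecological example).
import numpy as np

rng = np.random.default_rng(6)
n_sites, n_years = 20, 15

# parameter model: site-level growth rates drawn around a shared mean
mu_r, sigma_r = 0.05, 0.03
r_site = rng.normal(mu_r, sigma_r, n_sites)

# process model: true log-abundance evolves with process error
sigma_proc, sigma_obs = 0.10, 0.20
log_n = np.zeros((n_sites, n_years))
log_n[:, 0] = np.log(50)
for t in range(1, n_years):
    log_n[:, t] = log_n[:, t - 1] + r_site + rng.normal(0, sigma_proc, n_sites)

# data model: log-abundance observed with measurement error
y = log_n + rng.normal(0, sigma_obs, size=log_n.shape)

# a naive per-site regression ignores the hierarchy and over-disperses the growth estimates
naive_slopes = np.polyfit(np.arange(n_years), y.T, 1)[0]
print("true SD of site growth rates:", sigma_r,
      "| naive estimate SD:", round(float(naive_slopes.std()), 3))
```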

Relevance:

70.00%

Publisher:

Abstract:

Statistical modelling and statistical learning theory are two powerful analytical frameworks for analyzing signals and developing efficient processing and classification algorithms. In this thesis, these frameworks are applied to modelling and processing biomedical signals in two different contexts: ultrasound medical imaging systems and primate neural activity analysis and modelling. In the context of ultrasound medical imaging, two main applications are explored: deconvolution of signals measured from an ultrasonic transducer, and automatic image segmentation and classification of prostate ultrasound scans. In the former application, a stochastic model of the radio-frequency signal measured from an ultrasonic transducer is derived. This model is then employed to develop, within a statistical framework, a regularized deconvolution procedure for enhancing signal resolution. In the latter application, different statistical models are used to characterize images of prostate tissue, extracting different features. These features are then used to segment the images into regions of interest by means of an automatic procedure based on a statistical model of the extracted features. Finally, machine learning techniques are used for automatic classification of the different regions of interest. In the context of neural activity signals, a bio-inspired dynamical network was developed to support studies of motor-related processes in the brain of primate monkeys. The presented model aims to mimic the abstract functionality of a cell population in the 7a parietal region of primate monkeys during the execution of learned behavioural tasks.
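As a hedged illustration of the deconvolution step described above (not the thesis implementation), the Python sketch below recovers a sparse reflectivity sequence from an RF-like signal by Tikhonov-regularized least squares; the pulse shape, noise level and regularization weight are hypothetical.

```python
# Tikhonov-regularized deconvolution of a toy RF signal (illustrative only).
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(7)
n = 256
t = np.arange(-16, 17)
pulse = np.exp(-(t / 4.0) ** 2) * np.cos(0.8 * t)         # toy transducer pulse (33 taps)

reflectivity = np.zeros(n)
reflectivity[rng.choice(n, 8, replace=False)] = rng.normal(0, 1, 8)
rf = np.convolve(reflectivity, pulse, mode="same") + 0.05 * rng.normal(size=n)

# build the convolution matrix H so that rf ≈ H @ reflectivity
col = np.zeros(n); col[:17] = pulse[16:]
row = np.zeros(n); row[:17] = pulse[16::-1]
H = toeplitz(col, row)

lam = 0.5                                                 # regularization weight
x_hat = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ rf)
print("correlation with true reflectivity:",
      round(float(np.corrcoef(x_hat, reflectivity)[0, 1]), 2))
```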

Relevance:

70.00%

Publisher:

Abstract:

This thesis is motivated by biological questions concerning the behaviour of membrane potentials in neurons. A widely studied model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = beta(X_t) dt + sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X crosses a certain excitation threshold S, a spike occurs, after which the potential is reset to a fixed value x_0. In applications it is sometimes possible to observe a diffusion process X between spikes and to estimate the coefficients beta() and sigma() of the SDE. Nevertheless, the thresholds x_0 and S must be determined in order to fully specify the model. One way to approach this problem is to treat x_0 and S as parameters of a statistical model and to estimate them. This thesis discusses four different cases, in which the membrane potential X between spikes is assumed to be, respectively, a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process or a Cox-Ingersoll-Ross process. In addition, we observe the times between consecutive spikes, which we interpret as iid hitting times of the threshold S by X started at x_0. The first two cases are very similar, and in each the maximum likelihood estimator can be given explicitly; using LAN theory, the optimality of these estimators is also established. For the OU and CIR processes we choose a minimum-distance method based on comparing the empirical and true Laplace transforms with respect to a Hilbert-space norm. We prove that all estimators are strongly consistent and asymptotically normally distributed. In the final chapter, the efficiency of the minimum-distance estimators is examined on simulated data, and applications to real data sets and their results are discussed in detail.
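As an illustration of the first case (Brownian motion with drift), for which hitting times of the threshold follow an inverse Gaussian law, the Python sketch below simulates interspike intervals and recovers the threshold distance S - x_0 by numerical maximum likelihood, treating the drift and diffusion coefficients as known. The parameter values are hypothetical and the code is not from the thesis.

```python
# MLE of the threshold distance from simulated first hitting times (illustrative).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)
mu, sigma, d_true = 1.2, 0.8, 2.0                 # drift, diffusion, S - x_0 (hypothetical)

# hitting times of BM with drift are inverse Gaussian with mean d/mu and shape d^2/sigma^2
times = rng.wald(d_true / mu, (d_true / sigma) ** 2, size=400)

def neg_loglik(d):
    # first-passage density: d / (sigma * sqrt(2*pi*t^3)) * exp(-(d - mu*t)^2 / (2*sigma^2*t));
    # terms not depending on d are dropped
    return -np.sum(np.log(d) - (d - mu * times) ** 2 / (2 * sigma ** 2 * times))

d_hat = minimize_scalar(neg_loglik, bounds=(0.1, 10.0), method="bounded").x
print("estimated threshold distance S - x_0:", round(d_hat, 3), "(true value:", d_true, ")")
```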

Relevance:

70.00%

Publisher:

Abstract:

Many developing countries are facing a crisis in water management due to population growth, water scarcity, water contamination and the effects of the world economic crisis. Water distribution systems in developing countries face many challenges in efficient repair and rehabilitation, since information on the water network is very limited, which makes rehabilitation assessment planning very difficult. In developed countries, abundant information and advanced technology make rehabilitation assessment comparatively easy. Developing countries have great difficulty assessing their water networks, leading to system failures, deterioration of mains and poor water quality in the network due to pipe corrosion and deterioration. This lack of information brings into focus the urgent need to develop economical rehabilitation assessment methods for water distribution systems, adapted to the water utilities. The Gaza Strip is the first case study; it suffers from severe shortages in water supply, environmental problems and contamination of groundwater resources. This research focuses on improving the water supply network to reduce water losses, based on a limited database and using ArcGIS together with commercial water network software (WaterCAD). A new approach for rehabilitating water pipes is presented in the Gaza city case study. An integrated rehabilitation assessment model was developed for water pipes, comprising three components: a hydraulic assessment model, a physical assessment model and a structural assessment model. A WaterCAD model integrated with ArcGIS was developed to produce the hydraulic assessment model for the water network. The model was designed around a pipe condition assessment with a maximum of 100 score points per pipe. The results indicate that 40% of the water pipelines score fewer than 50 points and that about 10% of the total pipe length scores fewer than 30 points. Using this model, rehabilitation plans for each region of Gaza city can be drawn up based on the available budget and the condition of the pipes. The second case study, Kuala Lumpur, represents semi-developed countries and was used to develop an approach for improving a water network under critical conditions using advanced statistical and GIS techniques. Kuala Lumpur (KL) has water losses of about 40% and a high failure rate, which constitutes a severe problem; this case can represent cases in South Asian countries. Kuala Lumpur has faced major challenges in reducing water losses in its network over the last five years. One of these challenges is the severe deterioration of asbestos cement (AC) pipes: more than 6500 km of AC pipes need to be replaced, which requires a huge budget. Asbestos cement is subject to deterioration through various chemical processes that either leach out the cement material or penetrate the concrete to form products that weaken the cement matrix. This case presents a geo-statistical approach for modelling pipe failures in a water distribution network. The database of Syabas (the Kuala Lumpur water company) was used to develop the model. The statistical models were calibrated, verified and used to predict failures both for networks and for individual pipes. The mathematical formulation developed for failure frequency in Kuala Lumpur was based on pipeline characteristics reflecting several factors such as pipe diameter, length, pressure and failure history.
Generalized linear models were applied to predict pipe failures at both District Meter Zone (DMZ) and individual-pipe levels. Based on the Kuala Lumpur case study, several outputs and implications were obtained. Correlations between spatial and temporal intervals of pipe failures were also examined using ArcGIS software. A Water Pipe Assessment Model (WPAM) was developed from the analysis of historical pipe failures in Kuala Lumpur, prioritizing pipe rehabilitation candidates through a ranking system. The Frankfurt water network in Germany is the third main case study. This case gives an overview of survival analysis and neural network methods used for water networks. Rehabilitation strategies for water pipes were developed for the Frankfurt water network in cooperation with Mainova (the Frankfurt water company). The thesis also presents a methodology for the technical condition assessment of plastic pipes based on simple analysis. The thesis aims to contribute to improving the prediction of pipe failures in water networks using Geographic Information Systems (GIS) and Decision Support Systems (DSS). The output from the technical condition assessment model can be used to estimate future budget needs for rehabilitation and to identify pipes with high replacement priority based on poor condition.
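As a hedged illustration of the generalized linear modelling step described above, the sketch below fits a Poisson GLM of annual failure counts per pipe, with pipe length as exposure and diameter, age and pressure as covariates. The data frame, column names and coefficients are invented; the actual study used the Syabas database.

```python
# Poisson GLM for pipe failure prediction (hypothetical data, illustrative only).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 1000
pipes = pd.DataFrame({
    "length_km":   rng.uniform(0.05, 2.0, n),
    "diameter_mm": rng.choice([100, 150, 200, 300], n),
    "age_yr":      rng.uniform(5, 60, n),
    "pressure_m":  rng.uniform(20, 80, n),
})
rate = np.exp(-3 + 0.03 * pipes.age_yr + 0.01 * pipes.pressure_m - 0.004 * pipes.diameter_mm)
pipes["failures"] = rng.poisson(rate * pipes.length_km)   # failures per pipe per year

model = smf.glm("failures ~ age_yr + pressure_m + diameter_mm", data=pipes,
                family=sm.families.Poisson(), exposure=pipes.length_km).fit()
print(model.summary().tables[1])
```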

Relevance:

70.00%

Publisher:

Abstract:

This paper presents a system for 3-D reconstruction of a patient-specific surface model from calibrated X-ray images. Our system requires two X-ray images of a patient with one acquired from the anterior-posterior direction and the other from the axial direction. A custom-designed cage is utilized in our system to calibrate both images. Starting from bone contours that are interactively identified from the X-ray images, our system constructs a patient-specific surface model of the proximal femur based on a statistical model based 2D/3D reconstruction algorithm. In this paper, we present the design and validation of the system with 25 bones. An average reconstruction error of 0.95 mm was observed.
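To indicate the kind of statistical model involved (a sketch on invented training data, not the paper's 2D/3D reconstruction algorithm), the Python example below builds a PCA point-distribution shape model from aligned training surfaces and generates a new patient-specific instance as the mean shape plus a few principal modes.

```python
# PCA statistical shape model from aligned training surfaces (illustrative only).
import numpy as np

rng = np.random.default_rng(10)
n_shapes, n_points = 25, 500                      # 25 training femurs, 500 surface points
mean_true = rng.normal(size=n_points * 3)
modes_true = rng.normal(size=(3, n_points * 3))
coeffs = rng.normal(size=(n_shapes, 3))
training = mean_true + coeffs @ modes_true + 0.01 * rng.normal(size=(n_shapes, n_points * 3))

# build the shape model by PCA of the aligned training shapes
mean_shape = training.mean(axis=0)
U, s, Vt = np.linalg.svd(training - mean_shape, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1   # modes covering 95% of variance

def instance(b):
    """Generate a surface from shape coefficients b, scaled by the mode standard deviations."""
    return mean_shape + (b * s[:k] / np.sqrt(n_shapes - 1)) @ Vt[:k]

b = np.zeros(k)
b[0] = 1.0                                        # one standard deviation along the first mode
new_surface = instance(b).reshape(-1, 3)
print("modes retained:", k, "| reconstructed surface points:", new_surface.shape[0])
```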