836 results for Mixed linear models
Abstract:
It is well known that meteorological conditions influence human comfort and health. Southern European countries, including Portugal, show the highest mortality rates during winter, but the effects of extreme cold temperatures in Portugal have never been estimated. The objective of this study was to estimate the effect of extreme cold temperatures on the risk of death in Lisbon and Oporto, with the aim of producing scientific evidence for the development of a real-time health warning system. Poisson regression models combined with distributed lag non-linear models were applied to assess the exposure-response relation and lag patterns of the association between minimum temperature and all-cause mortality, and between minimum temperature and mortality from circulatory and respiratory system diseases, from 1992 to 2012, stratified by age, for the period from November to March. The analysis was adjusted for overdispersion and population size, adjusted for the confounding effect of influenza epidemics, and controlled for long-term trend, seasonality and day of the week. Results showed that the effect of cold temperatures on mortality was not immediate, presenting a 1–2-day delay, reaching a maximum increased risk of death after 6–7 days and lasting up to 20–28 days. The overall effect was generally higher and more persistent in Lisbon than in Oporto, particularly for circulatory and respiratory mortality and for the elderly. Exposure to cold temperatures is an important public health problem for a relevant part of the Portuguese population, particularly in Lisbon.
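The lag structure described in this abstract can be illustrated with a minimal sketch: a Poisson regression on lagged temperature terms, fitted by iteratively reweighted least squares. This is a simplified stand-in for the distributed lag non-linear models the study actually uses (those employ smooth cross-basis functions); all data, coefficients, and effect sizes below are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, max_lag = 2000, 7
tmin = rng.normal(10.0, 5.0, n)                  # simulated daily minimum temperature
# hypothetical truth: cold raises risk with a 1-2 day delay, fading within a week
beta_lag = np.array([0.0, -0.01, -0.02, -0.015, -0.01, -0.005, 0.0, 0.0])
X = np.column_stack([np.roll(tmin, l) for l in range(max_lag + 1)])[max_lag:]
y = rng.poisson(np.exp(2.0 + X @ beta_lag))      # simulated daily death counts

# Poisson regression on the lagged temperatures, fitted by IRLS
Xd = np.column_stack([np.ones(len(X)), X])       # intercept plus lag 0..7 terms
b = np.zeros(Xd.shape[1])
for _ in range(25):
    mu = np.exp(Xd @ b)
    z = Xd @ b + (y - mu) / mu                   # working response
    b = np.linalg.solve(Xd.T @ (mu[:, None] * Xd), Xd.T @ (mu * z))
```

The fitted lag coefficients (`b[1:]`) trace out the delayed exposure-response pattern; in the study, smooth constraints across lags replace these free per-lag terms.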
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
To account for the preponderance of zero counts and simultaneous correlation of observations, a class of zero-inflated Poisson mixed regression models is applicable for accommodating the within-cluster dependence. In this paper, a score test for zero-inflation is developed for assessing correlated count data with excess zeros. The sampling distribution and the power of the test statistic are evaluated by simulation studies. The results show that the test statistic performs satisfactorily under a wide range of conditions. The test procedure is further illustrated using a data set on recurrent urinary tract infections. Copyright (c) 2005 John Wiley & Sons, Ltd.
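For the iid (uncorrelated) special case of the setting above, the classic score test for zero inflation has a closed form (van den Broek, 1995); the sketch below shows that simpler analogue, not the correlated-data test developed in the paper.

```python
import numpy as np

def zip_score_test(y):
    """Score test for zero inflation in iid Poisson counts (van den Broek, 1995)."""
    y = np.asarray(y)
    n = len(y)
    lam = y.mean()                      # MLE of the Poisson mean under H0
    p0 = np.exp(-lam)                   # P(Y = 0) under the fitted Poisson
    n0 = np.sum(y == 0)                 # observed number of zeros
    num = (n0 - n * p0) ** 2
    den = n * p0 * (1 - p0) - n * lam * p0 ** 2
    return num / den                    # approx. chi-squared(1) under H0
```

Values above the chi-squared(1) critical value (3.84 at the 5% level) suggest more zeros than a Poisson model can accommodate.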
Abstract:
Bone cell cultures were evaluated to determine if osteogenic cell populations at different skeletal sites in the horse are heterogeneous. Osteogenic cells were isolated from cortical and cancellous bone in vitro by an explant culture method. Subcultured cells were induced to differentiate into bone-forming osteoblasts. The osteoblast phenotype was confirmed by immunohistochemical testing for osteocalcin and substantiated by positive staining of cells for alkaline phosphatase and the matrix materials collagen and glycosaminoglycans. Bone nodules were stained by the von Kossa method and counted. The numbers of nodules produced from osteogenic cells harvested from different skeletal sites were compared with the use of a mixed linear model. On average, cortical bone sites yielded significantly greater numbers of nodules than did cancellous bone sites. Between cortical bone sites, there was no significant difference in nodule numbers. Among cancellous sites, the radial cancellous bone yielded significantly more nodules than did the tibial cancellous bone. Among appendicular skeletal sites, tibial metaphyseal bone yielded significantly fewer nodules than did all other long bone sites. This study detected evidence of heterogeneity of equine osteogenic cell populations at various skeletal sites. Further characterization of the dissimilarities is warranted to determine the potential role heterogeneity plays in differential rates of fracture healing between skeletal sites.
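As a simplified illustration of the mixed-model idea used here (a random intercept per animal), a one-way random-effects ANOVA yields moment estimators of the between-animal and residual variance components. The data below are simulated with a Gaussian response, not the study's nodule counts, and all effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_horses, per_horse = 40, 6
horse = rng.normal(0.0, 2.0, n_horses)               # random intercept per animal
y = 10.0 + np.repeat(horse, per_horse) + rng.normal(0.0, 1.0, n_horses * per_horse)
g = np.repeat(np.arange(n_horses), per_horse)        # animal index for each record

# one-way random-effects ANOVA: moment estimators of the variance components
means = np.array([y[g == j].mean() for j in range(n_horses)])
msb = per_horse * np.sum((means - y.mean()) ** 2) / (n_horses - 1)
msw = np.sum((y - means[g]) ** 2) / (n_horses * (per_horse - 1))
sigma2_resid = msw                                   # within-animal variance
sigma2_horse = (msb - msw) / per_horse               # between-animal variance
```

In a full mixed linear model, skeletal site would enter as a fixed effect alongside this random intercept, and estimation would use REML rather than moments.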
Abstract:
Background and Aims: Plants regulate their architecture strongly in response to density, and there is evidence that this involves changes in the duration of leaf extension. This questions the approximation, central in crop models, that development follows a fixed thermal time schedule. The aim of this research is to investigate, using maize as a model, how the kinetics of extension of grass leaves change with density, and to propose directions for inclusion of this regulation in plant models.
• Methods: Periodic dissection of plants allowed the establishment of the kinetics of lamina and sheath extension for two contrasting sowing densities. The temperature of the growing zone was measured with thermocouples. Two-phase (exponential plus linear) models were fitted to the data, allowing analysis of the timing of the phase changes of extension, and of the extension rate of sheaths and blades during both phases.
• Key Results: The duration of lamina extension dictated the variation in lamina length between treatments. The lower phytomers were longer at high density, with delayed onset of sheath extension allowing more time for the lamina to extend. In the upper phytomers, which were shorter at high density, the laminae had a lower relative extension rate (RER) in the exponential phase and a delayed onset of linear extension, and less time available for extension since early sheath extension was not delayed.
• Conclusions: The relative timing of the onset of fast extension of the lamina with that of sheath development is the main determinant of the response of lamina length to density. Evidence is presented that the contrasting behaviour of lower and upper phytomers is related to differing regulation of sheath ontogeny before and after panicle initiation. A conceptual model is proposed to explain how the observed asynchrony between lamina and sheath development is regulated.
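The two-phase (exponential plus linear) fitting described in the Methods can be sketched as a segmented regression with a grid search over the phase-change time. The breakpoint, initial length, relative extension rate, and noise level below are invented for illustration, and the fitting scheme is a simplification of what such studies typically use.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 20.0, 81)              # thermal time axis (arbitrary units)
tb, L0, r = 8.0, 2.0, 0.25                  # true breakpoint, initial length, RER
slope = L0 * r * np.exp(r * tb)             # linear rate matched at the breakpoint
true = np.where(t < tb, L0 * np.exp(r * t),
                L0 * np.exp(r * tb) + slope * (t - tb))
y = true + rng.normal(0.0, 0.3, t.size)     # noisy simulated leaf lengths

def sse(tb_hat):
    expo = t < tb_hat
    # exponential phase: straight-line fit on log(length)
    b = np.polyfit(t[expo], np.log(np.clip(y[expo], 1e-6, None)), 1)
    # linear phase: ordinary straight-line fit
    c = np.polyfit(t[~expo], y[~expo], 1)
    fit = np.where(expo, np.exp(np.polyval(b, t)), np.polyval(c, t))
    return np.sum((y - fit) ** 2)

grid = np.arange(3.0, 15.0, 0.25)           # candidate phase-change times
tb_hat = grid[np.argmin([sse(v) for v in grid])]
```

The recovered `tb_hat` is the estimated timing of the switch from exponential to linear extension, the quantity whose shift with density drives the results above.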
Abstract:
Parkinson's disease (PD) is associated with disturbances in sentence processing, particularly for noncanonical sentences. The present study aimed to analyse sentence processing in PD patients and healthy control participants, using a word-by-word self-paced reading task and an auditory comprehension task. Both tasks consisted of subject relative (SR) and object relative (OR) sentences, with comprehension accuracy measured for each sentence type. For the self-paced reading task, reading times (RTs) were also recorded for the non-critical and critical processing regions of each sentence. Analysis of RTs using mixed linear model statistics revealed a delayed sensitivity to the critical processing region of OR sentences in the PD group. In addition, only the PD group demonstrated significantly poorer comprehension of OR sentences compared to SR sentences during an auditory comprehension task. These results may be consistent with slower lexical retrieval in PD, and its influence on the processing of noncanonical sentences. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time dependent variance for many financial time series. However, such models are essentially linear in form and we can ask whether a non-linear model for variance can improve results just as non-linear models (such as neural networks) for the mean have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic under-estimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models to provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
Abstract:
Radial Basis Function networks with linear outputs are often used in regression problems because they can be substantially faster to train than Multi-layer Perceptrons. For classification problems, the use of linear outputs is less appropriate as the outputs are not guaranteed to represent probabilities. In this paper we show how RBFs with logistic and softmax outputs can be trained efficiently using algorithms derived from Generalised Linear Models. This approach is compared with standard non-linear optimisation algorithms on a number of datasets.
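A minimal sketch of this approach: fix the RBF centres, then fit the output weights with the iteratively reweighted least squares (IRLS) algorithm used for Generalised Linear Models. For brevity this shows the two-class logistic case rather than softmax, and the toy data, centre selection, and basis width are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
# toy two-class data: two well-separated Gaussian blobs in the plane
X = np.vstack([rng.normal(-2.0, 1.0, (200, 2)), rng.normal(2.0, 1.0, (200, 2))])
y = np.repeat([0.0, 1.0], 200)

# fixed RBF layer: centres are a random subset of the inputs, width 2.0
centres = X[rng.choice(len(X), 10, replace=False)]
Phi = np.exp(-((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1) / (2 * 2.0 ** 2))
Phi = np.column_stack([np.ones(len(X)), Phi])           # bias unit

# logistic output weights fitted by IRLS, as for a GLM (small ridge for stability)
w = np.zeros(Phi.shape[1])
for _ in range(20):
    eta = np.clip(Phi @ w, -30, 30)
    p = 1.0 / (1.0 + np.exp(-eta))
    W = p * (1 - p) + 1e-4                              # working weights
    z = eta + (y - p) / W                               # working response
    A = Phi.T @ (W[:, None] * Phi) + 1e-6 * np.eye(Phi.shape[1])
    w = np.linalg.solve(A, Phi.T @ (W * z))

acc = np.mean(((Phi @ w) > 0.0) == (y == 1.0))
```

Only the output weights are optimised, so each IRLS step is a weighted linear solve; this is the source of the speed advantage over full non-linear optimisation of an MLP.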
Abstract:
The data available during the drug discovery process is vast in amount and diverse in nature. To gain useful information from such data, an effective visualisation tool is required. To provide better visualisation facilities to the domain experts (screening scientists, biologists, chemists, etc.), we developed software based on recently developed principled visualisation algorithms such as Generative Topographic Mapping (GTM) and Hierarchical Generative Topographic Mapping (HGTM). The software also supports conventional visualisation techniques such as Principal Component Analysis, NeuroScale, PhiVis, and Locally Linear Embedding (LLE). The software also provides global and local regression facilities. It supports regression algorithms such as the Multilayer Perceptron (MLP), Radial Basis Function networks (RBF), Generalised Linear Models (GLM), Mixture of Experts (MoE), and the newly developed Guided Mixture of Experts (GME). This user manual gives an overview of the purpose of the software tool, highlights some of the issues to take care of when creating a new model, and provides information about how to install and use the tool. The user manual does not require readers to be familiar with the algorithms it implements; basic computing skills are enough to operate the software.
Abstract:
This thesis addresses data assimilation, which typically refers to the estimation of the state of a physical system given a model and observations, and its application to short-term precipitation forecasting. A general introduction to data assimilation is given, both from a deterministic and a stochastic point of view. Data assimilation algorithms are reviewed, first in the static case (when no dynamics are involved), then in the dynamic case. A double experiment on two non-linear models, the Lorenz 63 and the Lorenz 96 models, is run and the comparative performance of the methods is discussed in terms of quality of the assimilation, robustness in the non-linear regime and computational time. Following the general review and analysis, data assimilation is discussed in the particular context of very short-term rainfall forecasting (nowcasting) using radar images. An extended Bayesian precipitation nowcasting model is introduced. The model is stochastic in nature and relies on the spatial decomposition of the rainfall field into rain "cells". Radar observations are assimilated using a Variational Bayesian method in which the true posterior distribution of the parameters is approximated by a more tractable distribution. The motion of the cells is captured by a 2D Gaussian process. The model is tested on two precipitation events, the first dominated by convective showers, the second by precipitation fronts. Several deterministic and probabilistic validation methods are applied and the model is shown to retain reasonable prediction skill at up to 3 hours lead time. Extensions to the model are discussed.
Abstract:
In this paper, we discuss some practical implications for implementing adaptable network algorithms applied to non-stationary time series problems. Two real world data sets, containing electricity load demands and foreign exchange market prices, are used to test several different methods, ranging from linear models with fixed parameters, to non-linear models which adapt both parameters and model order on-line. Training with the extended Kalman filter, we demonstrate that the dynamic model-order increment procedure of the resource allocating RBF network (RAN) is highly sensitive to the parameters of the novelty criterion. We investigate the use of system noise for increasing the plasticity of the Kalman filter training algorithm, and discuss the consequences for on-line model order selection. The results of our experiments show that there are advantages to be gained in tracking real world non-stationary data through the use of more complex adaptive models.
Abstract:
This paper presents an effective decision-making system for leak detection based on multiple generalized linear models and clustering techniques. The training data for the proposed decision system were obtained from an experimental, fully operational pipeline distribution system. The system is equipped with data logging for three variables, namely inlet pressure, outlet pressure, and outlet flow. The experimental setup is designed so that multiple operational conditions of the distribution system, including multiple pressures and flows, can be obtained. We then showed statistically that the pressure and flow variables can be used as a signature of a leak under the designed multi-operational conditions. It is then shown that detecting leakages by training and testing the proposed multi-model decision system, with prior clustering of the data, under multi-operational conditions produces better recognition rates than training based on a single-model approach. The decision system is then equipped with estimated confidence limits, and a method is proposed for using these confidence limits to obtain more robust leakage recognition results.
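A toy numeric sketch of the multi-model idea: cluster the records by operating condition (here simply by flow), fit one simple leak classifier per cluster, and compare against a single global classifier. The simulated pressures, flows, leak effect size, and the threshold rule are all invented for illustration and stand in for the paper's generalized linear models.

```python
import numpy as np

rng = np.random.default_rng(4)

def records(n, p_in, flow, leak):
    """Simulated logger rows: inlet pressure, outlet pressure, outlet flow."""
    drop = 0.1 * flow + (0.8 if leak else 0.0)         # a leak adds extra pressure drop
    p_out = p_in - drop + rng.normal(0, 0.05, n)
    return np.column_stack([p_in + rng.normal(0, 0.05, n),
                            p_out,
                            flow + rng.normal(0, 0.1, n)])

# two operating regimes (low/high flow), each with and without a leak
X = np.vstack([records(100, 5.0, 10.0, False), records(100, 5.0, 10.0, True),
               records(100, 8.0, 30.0, False), records(100, 8.0, 30.0, True)])
y = np.tile(np.repeat([0.0, 1.0], 100), 2)             # 1 = leak present
drop = X[:, 0] - X[:, 1]

# single-model baseline: one threshold on pressure drop for all the data
thr = 0.5 * (drop[y == 0].mean() + drop[y == 1].mean())
acc_single = np.mean((drop > thr) == (y == 1.0))

# multi-model: split by operating condition first, then one threshold per cluster
cluster = (X[:, 2] > X[:, 2].mean()).astype(int)       # 1-D stand-in for k-means
acc_multi = 0.0
for k in (0, 1):
    m = cluster == k
    t = 0.5 * (drop[m & (y == 0)].mean() + drop[m & (y == 1)].mean())
    acc_multi += np.mean((drop[m] > t) == (y[m] == 1.0)) * m.mean()
```

Because the leak signature (extra pressure drop) is small relative to the difference between operating regimes, the single global threshold confuses regime changes with leaks, while the per-cluster models separate the two effects, mirroring the paper's finding.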