944 results for log-linear models
Abstract:
It is well known that meteorological conditions influence human comfort and health. Southern European countries, including Portugal, show the highest mortality rates during winter, but the effects of extreme cold temperatures in Portugal had never been estimated. The objective of this study was to estimate the effect of extreme cold temperatures on the risk of death in Lisbon and Oporto, aiming to produce scientific evidence for the development of a real-time health warning system. Poisson regression models combined with distributed lag non-linear models were applied to assess the exposure-response relation and lag patterns of the association between minimum temperature and all-cause mortality, and between minimum temperature and mortality from circulatory and respiratory system diseases, from 1992 to 2012, stratified by age, for the period from November to March. The analysis was adjusted for overdispersion and population size and for the confounding effect of influenza epidemics, and controlled for long-term trend, seasonality and day of the week. Results showed that the effect of cold temperatures on mortality was not immediate, presenting a 1–2-day delay, reaching a maximum increased risk of death after 6–7 days and lasting up to 20–28 days. The overall effect was generally higher and more persistent in Lisbon than in Oporto, particularly for circulatory and respiratory mortality and for the elderly. Exposure to cold temperatures is an important public health problem for a relevant part of the Portuguese population, particularly in Lisbon.
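The distributed-lag structure that such models rely on can be illustrated with a minimal sketch. This is not the authors' model; the lag window and the toy exposure series are assumptions made purely for illustration:

```python
import numpy as np

def lag_matrix(x, max_lag):
    """Build a distributed-lag design matrix: column k holds the
    exposure series lagged by k days (rows with incomplete lag
    history are dropped)."""
    n = len(x)
    cols = [x[max_lag - k : n - k] for k in range(max_lag + 1)]
    return np.column_stack(cols)

# Toy daily minimum-temperature series (illustrative values only).
tmin = np.arange(40.0)            # 40 "days" of exposure
X = lag_matrix(tmin, max_lag=28)  # lags 0..28, matching the 20-28 day span above
print(X.shape)                    # (12, 29): 40 - 28 usable days, 29 lag columns
```

Each row of `X` would then enter a Poisson regression as the lagged-exposure part of the design, alongside seasonality and trend terms.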
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Numerous studies in the last 60 years have investigated the relationship between land slope and soil erosion rates. However, relatively few of these have investigated slope gradient responses (a) for steep slopes, (b) for specific erosion processes, and (c) as a function of soil properties. Simulated rainfall was applied in the laboratory on 16 soils and 16 overburdens at 100 mm/h to 3 replicates of unconsolidated flume plots 3 m long by 0.8 m wide and 0.15 m deep at slopes of 20, 5, 10, 15, and 30%, in that order. Sediment delivery at each slope was measured to determine the relationship between slope steepness and erosion rate. Data from this study were evaluated alongside data and existing slope adjustment functions from more than 55 other studies from the literature. Data and the literature strongly support a logistic slope adjustment function of the form S = A + B/[1 + exp(C - D sin theta)], where S is the slope adjustment factor and A, B, C, and D are coefficients that depend on the dominant detachment and transport processes. Average coefficient values when interrill-only processes are active are A = -1.50, B = 6.51, C = 0.94, and D = 5.30 (r^2 = 0.99). When rill erosion is also potentially active, the average slope response is greater and the coefficient values are A = -1.12, B = 16.05, C = 2.61, and D = 8.32 (r^2 = 0.93). The interrill-only function predicts increases in sediment delivery rates from 5 to 30% slope that are approximately double the predictions of existing published interrill functions. The rill + interrill function is similar to one previously reported. The above relationships represent a mean slope response for all soils, yet the response of individual soils varied substantially, from a 2.5-fold to a 50-fold increase over the range of slopes studied.
The magnitude of the slope response was found to be inversely related (log-log linear) to the dispersed silt and clay content of the soil, and 3 slope adjustment equations are proposed that provide a better estimate of slope response when this soil property is known. Evaluation of the slope adjustment equations proposed in this paper using independent datasets showed that the new equations can improve soil erosion predictions.
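The logistic slope adjustment function can be evaluated directly. The sketch below uses the interrill-only coefficients quoted in the abstract and assumes the usual convention that slope percent equals 100·tan(theta); the sampled slope values are illustrative:

```python
import math

# Interrill-only coefficients reported in the abstract.
A, B, C, D = -1.50, 6.51, 0.94, 5.30

def slope_factor(slope_percent):
    """Logistic slope adjustment S = A + B / (1 + exp(C - D*sin(theta))),
    where theta is the slope angle and slope percent = 100*tan(theta)."""
    theta = math.atan(slope_percent / 100.0)
    return A + B / (1.0 + math.exp(C - D * math.sin(theta)))

for s in (5, 10, 20, 30):
    print(s, round(slope_factor(s), 2))
```

The factor increases monotonically with slope over this range, consistent with the reported strong slope response between 5% and 30%.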
Abstract:
Background and Aims: Plants regulate their architecture strongly in response to density, and there is evidence that this involves changes in the duration of leaf extension. This questions the approximation, central in crop models, that development follows a fixed thermal time schedule. The aim of this research is to investigate, using maize as a model, how the kinetics of extension of grass leaves change with density, and to propose directions for including this regulation in plant models.
Methods: Periodic dissection of plants allowed the establishment of the kinetics of lamina and sheath extension for two contrasting sowing densities. The temperature of the growing zone was measured with thermocouples. Two-phase (exponential plus linear) models were fitted to the data, allowing analysis of the timing of the phase changes of extension and of the extension rate of sheaths and blades during both phases.
Key Results: The duration of lamina extension dictated the variation in lamina length between treatments. The lower phytomers were longer at high density, with delayed onset of sheath extension allowing more time for the lamina to extend. In the upper phytomers, which were shorter at high density, the laminae had a lower relative extension rate (RER) in the exponential phase and a delayed onset of linear extension, and less time available for extension since early sheath extension was not delayed.
Conclusions: The relative timing of the onset of fast extension of the lamina with that of sheath development is the main determinant of the response of lamina length to density. Evidence is presented that the contrasting behaviour of lower and upper phytomers is related to differing regulation of sheath ontogeny before and after panicle initiation. A conceptual model is proposed to explain how the observed asynchrony between lamina and sheath development is regulated.
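A two-phase (exponential plus linear) extension model of the kind fitted here can be sketched as below. The functional form is generic; the parameter values are illustrative assumptions, not fitted values from the study:

```python
import math

def two_phase_length(t, L0, rer, t_star, ler):
    """Two-phase extension kinetics: exponential growth at relative
    extension rate `rer` until breakpoint `t_star`, then linear
    extension at rate `ler`, continuous at the breakpoint."""
    if t <= t_star:
        return L0 * math.exp(rer * t)
    return L0 * math.exp(rer * t_star) + ler * (t - t_star)

# Illustrative parameters only (initial length, RER, breakpoint, linear rate).
L0, rer, t_star, ler = 1.0, 0.05, 40.0, 2.0
print(two_phase_length(20.0, L0, rer, t_star, ler))  # exponential phase
print(two_phase_length(60.0, L0, rer, t_star, ler))  # linear phase
```

Fitting such a model per organ gives the phase-change timing and the phase-specific rates analysed in the abstract.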
Abstract:
It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time dependent variance for many financial time series. However, such models are essentially linear in form and we can ask whether a non-linear model for variance can improve results just as non-linear models (such as neural networks) for the mean have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic under-estimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models to provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
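The maximum-likelihood underestimation of variance mentioned above can be demonstrated numerically. This is a generic sketch of the bias, unrelated to the exchange-rate data; sample sizes and seeds are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0
n = 5                        # small samples make the bias visible
reps = 20000

samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
ml_var = samples.var(axis=1, ddof=0).mean()        # ML estimate: divides by n
unbiased_var = samples.var(axis=1, ddof=1).mean()  # divides by n - 1

print(ml_var, unbiased_var)  # ML average sits near (n-1)/n * true_var
```

The systematic shortfall by a factor of (n-1)/n is the bias that the Bayesian treatment of Bishop and Qazaz (1996) aims to remove.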
Abstract:
The generative topographic mapping (GTM) model was introduced by Bishop et al. (1998, Neural Comput. 10(1), 215-234) as a probabilistic reformulation of the self-organizing map (SOM). It offers a number of advantages compared with the standard SOM, and has already been used in a variety of applications. In this paper we report on several extensions of the GTM, including an incremental version of the EM algorithm for estimating the model parameters, the use of local subspace models, extensions to mixed discrete and continuous data, semi-linear models which permit the use of high-dimensional manifolds whilst avoiding computational intractability, Bayesian inference applied to hyper-parameters, and an alternative framework for the GTM based on Gaussian processes. All of these developments directly exploit the probabilistic structure of the GTM, thereby allowing the underlying modelling assumptions to be made explicit. They also highlight the advantages of adopting a consistent probabilistic framework for the formulation of pattern recognition algorithms.
Abstract:
Radial Basis Function networks with linear outputs are often used in regression problems because they can be substantially faster to train than Multi-layer Perceptrons. For classification problems, the use of linear outputs is less appropriate as the outputs are not guaranteed to represent probabilities. In this paper we show how RBFs with logistic and softmax outputs can be trained efficiently using algorithms derived from Generalised Linear Models. This approach is compared with standard non-linear optimisation algorithms on a number of datasets.
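A minimal sketch of the idea: fixed Gaussian basis functions with a logistic output fitted by iteratively reweighted least squares, as in GLM fitting. The dataset, centre placement and ridge term are illustrative choices of ours, not those of the paper:

```python
import numpy as np

def rbf_design(X, centres, width):
    """Gaussian RBF design matrix with an appended bias column."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2.0 * width ** 2))
    return np.hstack([Phi, np.ones((len(X), 1))])

def irls_logistic(Phi, y, n_iter=25, ridge=1e-3):
    """Fit logistic output weights by iteratively reweighted least
    squares (Newton steps); the hidden RBF layer stays fixed."""
    w = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Phi @ w))
        R = p * (1.0 - p)                      # IRLS weights
        H = Phi.T @ (R[:, None] * Phi) + ridge * np.eye(Phi.shape[1])
        w += np.linalg.solve(H, Phi.T @ (y - p))
    return w

# Toy 1-D two-class problem: classes separated around x = 0.
X = np.linspace(-2, 2, 80)[:, None]
y = (X[:, 0] > 0).astype(float)
centres = np.linspace(-2, 2, 5)[:, None]
Phi = rbf_design(X, centres, width=0.8)
w = irls_logistic(Phi, y)
acc = (((1.0 / (1.0 + np.exp(-Phi @ w))) > 0.5) == (y > 0.5)).mean()
```

Because only the output weights are optimised, each IRLS step solves a small weighted least-squares problem, which is what makes this approach faster than full non-linear optimisation.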
Abstract:
The data available during the drug discovery process is vast in amount and diverse in nature. To gain useful information from such data, an effective visualisation tool is required. To provide better visualisation facilities to domain experts (screening scientists, biologists, chemists, etc.), we developed a software tool based on recently developed principled visualisation algorithms such as Generative Topographic Mapping (GTM) and Hierarchical Generative Topographic Mapping (HGTM). The software also supports conventional visualisation techniques such as Principal Component Analysis, NeuroScale, PhiVis, and Locally Linear Embedding (LLE), and provides global and local regression facilities. It supports regression algorithms such as the Multilayer Perceptron (MLP), Radial Basis Function network (RBF), Generalised Linear Models (GLM), Mixture of Experts (MoE), and the newly developed Guided Mixture of Experts (GME). This user manual gives an overview of the purpose of the software tool, highlights some of the issues to take care of while creating a new model, and provides information about how to install and use the tool. The user manual does not require readers to be familiar with the algorithms it implements; basic computing skills are enough to operate the software.
Abstract:
FDI plays a key role in development, particularly in the resource-constrained transition economies of Central and Eastern Europe with relatively low savings rates. Gains from technology transfer play a critical role in motivating FDI, yet the potential for it may be hampered by a large technology gap between the source and host country. While the extent of this gap has traditionally been attributed to education, skills and capital intensity, recent literature has also emphasized the possible role of the institutional environment in this respect. Despite tremendous interest among policy-makers and academics in understanding the factors attracting FDI (Bevan and Estrin, 2000; Globerman and Shapiro, 2003), our knowledge about the effects of institutions on the location choice and ownership structure of foreign firms remains limited. This paper attempts to fill this gap in the literature by examining the link between institutions and foreign ownership structures. To the best of our knowledge, Javorcik (2004) is the only paper that uses firm-level data to analyse the role of institutional quality in an outward investor's entry mode in transition countries. Our paper extends Javorcik (2004) in a number of ways: (a) rather than a cross-section, we use panel data for the period 1997-2006; (b) rather than a binary variable, we use percentage foreign ownership as a continuous variable; (c) we consider multi-dimensional institutional variables, such as corruption, intellectual property rights protection and government stability, and we also use factor analysis to generate a composite index of institutional quality and examine how a stronger institutional environment could affect foreign ownership; (d) we explore how the distance between the institutional environments of source and host countries affects foreign ownership in a host country.
The firm-level data used includes both domestic and foreign firms for the period 1997-2006 and is drawn from ORBIS, a commercially available dataset provided by Bureau van Dijk. In order to examine the link between institutions and foreign ownership structures, we estimate four log-linear ownership equations/specifications augmented by institutional and other control variables. We find evidence that the decision of a foreign firm to either locate its subsidiary or acquire an existing domestic firm depends not only on factor cost differences but also on differences in institutional environment between the host and source countries.
Abstract:
This thesis addresses data assimilation, which typically refers to the estimation of the state of a physical system given a model and observations, and its application to short-term precipitation forecasting. A general introduction to data assimilation is given, from both a deterministic and a stochastic point of view. Data assimilation algorithms are reviewed, first in the static case (when no dynamics are involved), then in the dynamic case. A double experiment on two non-linear models, the Lorenz 63 and Lorenz 96 models, is run, and the comparative performance of the methods is discussed in terms of quality of the assimilation, robustness in the non-linear regime and computational time. Following the general review and analysis, data assimilation is discussed in the particular context of very short-term rainfall forecasting (nowcasting) using radar images. An extended Bayesian precipitation nowcasting model is introduced. The model is stochastic in nature and relies on the spatial decomposition of the rainfall field into rain "cells". Radar observations are assimilated using a Variational Bayesian method in which the true posterior distribution of the parameters is approximated by a more tractable distribution. The motion of the cells is captured by a 2D Gaussian process. The model is tested on two precipitation events, the first dominated by convective showers, the second by precipitation fronts. Several deterministic and probabilistic validation methods are applied, and the model is shown to retain reasonable prediction skill at up to 3 hours lead time. Extensions to the model are discussed.
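The static (no-dynamics) case reviewed in such work reduces to the standard best-linear-unbiased analysis update. A minimal scalar sketch, with all numbers illustrative:

```python
import numpy as np

def blue_analysis(xb, B, y, H, R):
    """Static analysis step x_a = x_b + K (y - H x_b) with the
    optimal gain K = B H^T (H B H^T + R)^{-1}."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb), K

# Scalar toy problem: background 10 with variance 4, observation 12
# with variance 1; the analysis falls between them, nearer the
# more accurate observation.
xb = np.array([10.0]); B = np.array([[4.0]])
y = np.array([12.0]);  H = np.eye(1); R = np.array([[1.0]])
xa, K = blue_analysis(xb, B, y, H, R)
print(xa)   # 10 + 0.8 * 2 = 11.6
```

Dynamic methods such as the Kalman filter apply the same update repeatedly, propagating `xb` and `B` with the model between observation times.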
Abstract:
In this paper, we discuss some practical implications of implementing adaptable network algorithms applied to non-stationary time series problems. Two real-world data sets, containing electricity load demands and foreign exchange market prices, are used to test several different methods, ranging from linear models with fixed parameters to non-linear models which adapt both parameters and model order on-line. Training with the extended Kalman filter, we demonstrate that the dynamic model-order increment procedure of the resource-allocating RBF network (RAN) is highly sensitive to the parameters of the novelty criterion. We investigate the use of system noise for increasing the plasticity of the Kalman filter training algorithm, and discuss the consequences for on-line model order selection. The results of our experiments show that there are advantages to be gained in tracking real-world non-stationary data through the use of more complex adaptive models.
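The RAN novelty criterion referred to above combines a distance test and an error test, allocating a new basis function only when both fire. A minimal sketch with hypothetical thresholds (the values of `eps` and `e_min` are illustrative, and it is precisely these parameters to which the model-order increment procedure is sensitive):

```python
import numpy as np

def is_novel(x, error, centres, eps, e_min):
    """RAN-style novelty test: allocate a new basis function only if
    the input is far from every existing centre AND the current
    prediction error is large; both thresholds must be exceeded."""
    if len(centres) == 0:
        return True
    nearest = min(np.linalg.norm(x - c) for c in centres)
    return nearest > eps and abs(error) > e_min

centres = [np.array([0.0]), np.array([1.0])]
print(is_novel(np.array([0.9]), 0.5, centres, eps=0.3, e_min=0.1))  # False: close to a centre
print(is_novel(np.array([2.0]), 0.5, centres, eps=0.3, e_min=0.1))  # True: far and poorly predicted
```

When the test fails, the existing network parameters are instead updated, e.g. by the extended Kalman filter as in the paper.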