925 results for MIXED LINEAR-MODELS
Abstract:
Common bean is a major dietary component in several countries, but its productivity is negatively affected by abiotic stresses. Dissecting candidate genes involved in abiotic stress tolerance is a paramount step toward improving common bean performance under such constraints. This thesis therefore presents a systematic analysis of the DEHYDRATION RESPONSIVE ELEMENT-BINDING (DREB) gene subfamily, which encompasses genes that regulate several processes during stress responses but for which limited information is available in common bean. First, a series of in silico analyses with sequences retrieved from the P. vulgaris genome on Phytozome supported the categorization of 54 putative PvDREB genes, distributed within six phylogenetic subgroups (A-1 to A-6) along the 11 chromosomes. Second, we cloned four novel PvDREB genes and determined the stresses that induce them: PvDREB1F and PvDREB5A are induced by dehydration, salinity and cold, while PvDREB2A and PvDREB6B are induced by dehydration and cold. We then searched for nucleotide polymorphisms along those genes through Sanger sequencing, which revealed a high number of single nucleotide polymorphisms within PvDREB6B when Mesoamerican and Andean genotypes were compared. The nomenclature of PvDREB6B is discussed in detail. Furthermore, we used the BARCBean6K_3 SNP platform to identify and genotype the SNP closest to each of the 54 PvDREB genes. We selected PvDREB6B for a broader study encompassing a collection of wild common bean accessions of Mesoamerican origin. The population structure of the wild beans was assessed using sequence polymorphisms of PvDREB6B. The genetic clusters were partially associated with variation in latitude, altitude, precipitation and temperature across the areas where these beans are distributed. With an emphasis on drought stress, an adapted tube-screening method under greenhouse conditions enabled the phenotyping of several drought-related traits in the wild collection.
Interestingly, our data revealed correlations between root depth, plant height and biomass and the environmental data of the accessions' collection sites. Correlation was also observed between the population structure determined through PvDREB6B and the environmental data. An association study combining data from the SNP array and DREB polymorphisms enabled the detection of SNPs associated with drought-related traits through a compressed mixed linear model (CMLM) analysis. This thesis highlighted important features of DREB genes in common bean, revealing candidates for further strategies aimed at improving abiotic stress tolerance, with emphasis on drought tolerance.
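The CMLM analysis referenced above (as implemented in tools such as GAPIT) additionally models kinship through random effects and compresses individuals into groups. As a minimal sketch of the underlying idea only, the toy scan below tests each SNP for association with a trait while correcting for a population-structure covariate via fixed effects; all sizes, effect values and variable names are illustrative assumptions, not the thesis's data or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_acc, n_snp = 200, 50              # accessions and SNPs (toy sizes)

# Simulated biallelic genotypes (0/1/2) and one structure covariate,
# standing in for a Q-matrix column or a genotype principal component
geno = rng.integers(0, 3, size=(n_acc, n_snp)).astype(float)
structure = rng.normal(size=n_acc)

# Trait driven by SNP 0 plus population structure plus noise
trait = 1.5 * geno[:, 0] + 2.0 * structure + rng.normal(size=n_acc)

def snp_t_stat(y, snp, covar):
    """t-statistic for the SNP effect in y ~ 1 + covar + snp."""
    X = np.column_stack([np.ones_like(y), covar, snp])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[-1, -1])
    return beta[-1] / se

t_stats = np.array([snp_t_stat(trait, geno[:, j], structure)
                    for j in range(n_snp)])
print(int(np.argmax(np.abs(t_stats))))      # index of the strongest signal
```

In a real scan the structure covariate would come from the genotype data (or a kinship matrix as a random effect), and p-values would be derived from the t-statistics with an appropriate multiple-testing correction.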
Abstract:
The objective of this study is to test the effect of the consumer’s variety-seeking behaviour on the distance the tourist is prepared to travel, that is, his/her willingness to travel further. The empirical application is carried out in Spain in a context with 26 destinations, by applying Mixed Logit Models. The results show that variety-seeking behaviour reduces the dissuasive effect of distance.
Abstract:
The paper presents an analytical review of the literature reflecting the results of national and international research aimed at studying the composition and dosage of components of self-compacting concrete, one of the most promising materials for modern composite structures. In addition, the results of numerical and experimental studies of the stress-strain state of composite structures (concrete-filled tubes) under various types of loading have been considered. The description and features of existing analytical methods for determining the bearing capacity of the considered structures under compression and bending have been given. The analysis of the deformation model of confined concrete within the composite structure, as well as of non-linear models of steel behaviour with their distinctive features, has been carried out. The main approaches to the finite element modeling of composite structures have been determined.
Abstract:
It is well known that meteorological conditions influence human comfort and health. Southern European countries, including Portugal, show the highest mortality rates during winter, but the effects of extreme cold temperatures in Portugal have never been estimated. The objective of this study was to estimate the effect of extreme cold temperatures on the risk of death in Lisbon and Oporto, with the aim of producing scientific evidence for the development of a real-time health warning system. Poisson regression models combined with distributed lag non-linear models were applied to assess the exposure-response relation and lag patterns of the association between minimum temperature and all-causes mortality, and between minimum temperature and mortality from circulatory and respiratory system diseases, from 1992 to 2012, stratified by age, for the period from November to March. The analysis was adjusted for overdispersion and population size and for the confounding effect of influenza epidemics, and controlled for long-term trend, seasonality and day of the week. Results showed that the effect of cold temperatures on mortality was not immediate, presenting a 1–2-day delay, reaching maximum increased risk of death after 6–7 days and lasting up to 20–28 days. The overall effect was generally higher and more persistent in Lisbon than in Oporto, particularly for circulatory and respiratory mortality and for the elderly. Exposure to cold temperatures is an important public health problem for a relevant part of the Portuguese population, particularly in Lisbon.
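The study combines Poisson regression with distributed lag non-linear models (cross-basis smooths of temperature and lag). As a much-simplified sketch of the core mechanics only, the toy example below fits a Poisson regression with plain lagged temperature terms by iteratively reweighted least squares on simulated data; the lag structure, coefficients and data are illustrative assumptions, not the study's model.

```python
import numpy as np

rng = np.random.default_rng(1)
days = 3000
temp = rng.normal(8.0, 4.0, size=days)       # toy daily minimum temperature

# Assumed data-generating model: log E[deaths_t] = 3.0 - 0.02*temp_t - 0.01*temp_(t-1)
X = np.column_stack([np.ones(days - 1),
                     temp[1:],               # lag 0
                     temp[:-1]])             # lag 1
beta_true = np.array([3.0, -0.02, -0.01])
deaths = rng.poisson(np.exp(X @ beta_true))

# Poisson regression fitted by iteratively reweighted least squares (IRLS)
beta = np.zeros(3)
beta[0] = np.log(deaths.mean())              # safe starting point
for _ in range(50):
    mu = np.exp(X @ beta)                    # fitted means under the log link
    step = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (deaths - mu))
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break

print(np.round(beta, 3))                     # close to beta_true
```

A real DLNM replaces the two lag columns with a flexible cross-basis (e.g. splines over both temperature and lag), which is what captures the delayed, non-linear exposure-response surfaces reported above.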
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
To account for the preponderance of zero counts and the correlation of observations within clusters, a class of zero-inflated Poisson mixed regression models can be used to accommodate the within-cluster dependence. In this paper, a score test for zero-inflation is developed for assessing correlated count data with excess zeros. The sampling distribution and the power of the test statistic are evaluated by simulation studies. The results show that the test statistic performs satisfactorily under a wide range of conditions. The test procedure is further illustrated using a data set on recurrent urinary tract infections. Copyright (c) 2005 John Wiley & Sons, Ltd.
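The paper's test targets correlated counts in ZIP mixed models. For intuition, the sketch below implements the simpler i.i.d. score test for zero inflation in a Poisson sample (van den Broek, 1995), not the correlated-data test developed in the paper; the simulated data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def zip_score_stat(y):
    """Score statistic for zero inflation in an i.i.d. Poisson sample
    (van den Broek, 1995); asymptotically chi-squared with 1 df."""
    y = np.asarray(y)
    n, ybar = len(y), y.mean()
    n0 = np.sum(y == 0)                      # observed zero count
    return (n0 * np.exp(ybar) - n) ** 2 / (n * (np.exp(ybar) - 1.0 - ybar))

pois = rng.poisson(2.0, size=500)            # plain Poisson counts
inflate = rng.random(500) < 0.3              # 30% structural zeros
zip_y = np.where(inflate, 0, rng.poisson(2.0, size=500))

# 5% critical value of chi-squared(1) is about 3.84
print(zip_score_stat(pois), zip_score_stat(zip_y))
```

The statistic compares the observed zero count against the count expected under a Poisson fit; the zero-inflated sample should produce a statistic far above the 3.84 cutoff, the plain Poisson sample usually one below it.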
Abstract:
Bone cell cultures were evaluated to determine if osteogenic cell populations at different skeletal sites in the horse are heterogeneous. Osteogenic cells were isolated from cortical and cancellous bone in vitro by an explant culture method. Subcultured cells were induced to differentiate into bone-forming osteoblasts. The osteoblast phenotype was confirmed by immunohistochemical testing for osteocalcin and substantiated by positive staining of cells for alkaline phosphatase and the matrix materials collagen and glycosaminoglycans. Bone nodules were stained by the von Kossa method and counted. The numbers of nodules produced from osteogenic cells harvested from different skeletal sites were compared with the use of a mixed linear model. On average, cortical bone sites yielded significantly greater numbers of nodules than did cancellous bone sites. Between cortical bone sites, there was no significant difference in nodule numbers. Among cancellous sites, the radial cancellous bone yielded significantly more nodules than did the tibial cancellous bone. Among appendicular skeletal sites, tibial metaphyseal bone yielded significantly fewer nodules than did all other long bone sites. This study detected evidence of heterogeneity of equine osteogenic cell populations at various skeletal sites. Further characterization of the dissimilarities is warranted to determine the potential role heterogeneity plays in differential rates of fracture healing between skeletal sites.
Abstract:
Background and Aims: Plants regulate their architecture strongly in response to density, and there is evidence that this involves changes in the duration of leaf extension. This questions the approximation, central in crop models, that development follows a fixed thermal time schedule. The aim of this research is to investigate, using maize as a model, how the kinetics of extension of grass leaves change with density, and to propose directions for inclusion of this regulation in plant models.
Methods: Periodic dissection of plants allowed the establishment of the kinetics of lamina and sheath extension for two contrasting sowing densities. The temperature of the growing zone was measured with thermocouples. Two-phase (exponential plus linear) models were fitted to the data, allowing analysis of the timing of the phase changes of extension, and the extension rate of sheaths and blades during both phases.
Key Results: The duration of lamina extension dictated the variation in lamina length between treatments. The lower phytomers were longer at high density, with delayed onset of sheath extension allowing more time for the lamina to extend. In the upper phytomers, which were shorter at high density, the laminae had a lower relative extension rate (RER) in the exponential phase and delayed onset of linear extension, and less time available for extension since early sheath extension was not delayed.
Conclusions: The relative timing of the onset of fast extension of the lamina with that of sheath development is the main determinant of the response of lamina length to density. Evidence is presented that the contrasting behaviour of lower and upper phytomers is related to differing regulation of sheath ontogeny before and after panicle initiation. A conceptual model is proposed to explain how the observed asynchrony between lamina and sheath development is regulated.
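Two-phase (exponential plus linear) extension models of the kind mentioned above can be fitted with standard non-linear least squares. The sketch below uses one possible parameterization, assumed here for illustration (the study's exact form may differ): extension is exponential up to a break point and then continues linearly at the rate reached there, with value and slope continuous at the break.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_phase(t, L0, r, t_star):
    """Exponential extension up to t_star, then linear extension at the
    rate reached at t_star (value and slope continuous at the break)."""
    L_star = L0 * np.exp(r * t_star)
    return np.where(t < t_star,
                    L0 * np.exp(r * t),
                    L_star + r * L_star * (t - t_star))

rng = np.random.default_rng(3)
tt = np.linspace(0.0, 30.0, 120)             # thermal time, toy units
L0, r, t_star = 5.0, 0.15, 12.0              # "true" parameters for the demo
length = two_phase(tt, L0, r, t_star) + rng.normal(0.0, 1.0, tt.size)

popt, _ = curve_fit(two_phase, tt, length, p0=(3.0, 0.1, 10.0))
print(np.round(popt, 2))                     # recovered (L0, r, t_star)
```

Fitting the break point t_star per phytomer and per density is what lets the timing of the phase change, not just the rates, be compared between treatments.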
Abstract:
Parkinson's disease (PD) is associated with disturbances in sentence processing, particularly for noncanonical sentences. The present study aimed to analyse sentence processing in PD patients and healthy control participants, using a word-by-word self-paced reading task and an auditory comprehension task. Both tasks consisted of subject relative (SR) and object relative (OR) sentences, with comprehension accuracy measured for each sentence type. For the self-paced reading task, reading times (RTs) were also recorded for the non-critical and critical processing regions of each sentence. Analysis of RTs using mixed linear model statistics revealed a delayed sensitivity to the critical processing region of OR sentences in the PD group. In addition, only the PD group demonstrated significantly poorer comprehension of OR sentences compared to SR sentences during an auditory comprehension task. These results may be consistent with slower lexical retrieval in PD, and its influence on the processing of noncanonical sentences. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time dependent variance for many financial time series. However, such models are essentially linear in form and we can ask whether a non-linear model for variance can improve results just as non-linear models (such as neural networks) for the mean have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic under-estimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models to provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
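For readers unfamiliar with the ARCH baseline discussed above, the sketch below simulates an ARCH(1) process and recovers its parameters by maximum likelihood. It illustrates only the classical linear conditional-variance model against which the paper's Mixture Density Network and Bayesian approaches are compared; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
T, a0_true, a1_true = 4000, 0.05, 0.4

# Simulate ARCH(1) returns: sigma_t^2 = a0 + a1 * r_(t-1)^2
r = np.zeros(T)
for t in range(1, T):
    sig2 = a0_true + a1_true * r[t - 1] ** 2
    r[t] = np.sqrt(sig2) * rng.normal()

def neg_loglik(params):
    a0, a1 = params
    s2 = a0 + a1 * r[:-1] ** 2               # conditional variance series
    return 0.5 * np.sum(np.log(s2) + r[1:] ** 2 / s2)

res = minimize(neg_loglik, x0=(0.1, 0.2),
               bounds=[(1e-6, None), (0.0, 0.99)])
print(np.round(res.x, 3))                    # estimates near (a0, a1)
```

The neural approaches in the paper replace the fixed linear map from past squared returns to variance with a learned non-linear one, at the cost of the bias issues the maximum likelihood discussion raises.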
Abstract:
Radial Basis Function networks with linear outputs are often used in regression problems because they can be substantially faster to train than Multi-layer Perceptrons. For classification problems, the use of linear outputs is less appropriate as the outputs are not guaranteed to represent probabilities. In this paper we show how RBFs with logistic and softmax outputs can be trained efficiently using algorithms derived from Generalised Linear Models. This approach is compared with standard non-linear optimisation algorithms on a number of datasets.
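As a minimal illustration of the approach described above (an RBF hidden layer with a logistic output trained by a GLM-style IRLS loop), the toy example below fits ridge-penalized logistic weights on fixed Gaussian basis functions. The centre selection, basis width and penalty are assumptions made for the sketch, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy two-class problem: two Gaussian blobs in 2-D
n = 200
X = np.vstack([rng.normal(-1.5, 1.0, size=(n, 2)),
               rng.normal(+1.5, 1.0, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Gaussian RBF design matrix: randomly chosen fixed centres plus a bias
centres = X[rng.choice(len(X), size=10, replace=False)]
width = 2.0
d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
Phi = np.column_stack([np.ones(len(X)), np.exp(-d2 / (2.0 * width ** 2))])

# Logistic output weights by penalized IRLS (Newton steps on the
# ridge-regularized cross-entropy, as in GLM fitting)
lam = 0.1
w = np.zeros(Phi.shape[1])
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-Phi @ w))       # current output probabilities
    H = Phi.T @ ((p * (1 - p))[:, None] * Phi) + lam * np.eye(len(w))
    w += np.linalg.solve(H, Phi.T @ (y - p) - lam * w)

acc = np.mean((Phi @ w > 0) == (y == 1))
print(round(float(acc), 3))
```

Because the basis functions are fixed, each IRLS step is a weighted linear solve, which is the source of the training-speed advantage over backpropagation through a Multi-layer Perceptron.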
Abstract:
The data available during the drug discovery process is vast in amount and diverse in nature. To gain useful information from such data, an effective visualisation tool is required. To provide better visualisation facilities to domain experts (screening scientists, biologists, chemists, etc.), we developed a software tool based on recently developed principled visualisation algorithms such as Generative Topographic Mapping (GTM) and Hierarchical Generative Topographic Mapping (HGTM). The software also supports conventional visualisation techniques such as Principal Component Analysis, NeuroScale, PhiVis, and Locally Linear Embedding (LLE). It also provides global and local regression facilities, supporting regression algorithms such as the Multilayer Perceptron (MLP), Radial Basis Function network (RBF), Generalised Linear Models (GLM), Mixture of Experts (MoE), and the newly developed Guided Mixture of Experts (GME). This user manual gives an overview of the purpose of the software tool, highlights some of the issues to take care of when creating a new model, and provides information about how to install and use the tool. The user manual does not require readers to be familiar with the algorithms it implements; basic computing skills are enough to operate the software.