964 results for variance component models


Relevance: 30.00%

Abstract:

In a series of seminal articles in 1974, 1975, and 1977, J. H. Gillespie challenged the notion that the "fittest" individuals are those that produce on average the highest number of offspring. He showed that in small populations, the variance in fecundity can determine fitness as much as mean fecundity. One likely reason why Gillespie's concept of within-generation bet hedging has been largely ignored is the general consensus that natural populations are of large size. As a consequence, essentially no work has investigated the role of fecundity variance in the evolutionarily stable state of life-history strategies. While typically large, natural populations also tend to be subdivided into local demes connected by migration. Here, we integrate Gillespie's measure of selection for within-generation bet hedging into the inclusive-fitness and game-theoretic measures of selection for structured populations. The resulting framework demonstrates that selection against high variance in offspring number is a potent force in large but structured populations. More generally, the results highlight that variance in offspring number will directly affect various life-history strategies, especially those involving kin interaction. The selective pressures on three key traits are directly investigated here, namely within-generation bet hedging, helping behaviors, and the evolutionarily stable dispersal rate. The evolutionary dynamics of all three traits are markedly affected by variance in offspring number, although to a different extent and under different demographic conditions.
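For reference, Gillespie's criterion can be stated compactly. The notation below is the standard form of his 1974 result rather than anything taken from this abstract; N is the number of individuals competing locally, and mu and sigma^2 are the mean and variance of offspring number.

    % Within-generation bet hedging (Gillespie 1974): in a deme of N competing
    % individuals, selection favors the strategy with the larger value of
    \[
      w_{\mathrm{eff}} \;\approx\; \mu - \frac{\sigma^{2}}{N},
    \]
    % so a lower-variance strategy can win despite a lower mean. The variance
    % penalty vanishes as N grows, which is why deme size (not total population
    % size) is the relevant scale in subdivided populations.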

Relevance: 30.00%

Abstract:

Knowledge of the relationship between radiation dose and image quality is a prerequisite for any optimization of diagnostic medical radiology. Image quality depends, on the one hand, on physical parameters such as contrast, resolution, and noise and, on the other hand, on the characteristics of the observer who assesses the image. While the role of contrast and resolution is precisely defined and recognized, the influence of image noise is not yet fully understood. Its measurement is often based on imaging uniform test objects, even though real images contain anatomical backgrounds whose statistical nature differs greatly from that of the test objects used to assess system noise. The goal of this study was to demonstrate the importance of variations in background anatomy by quantifying their effect on a series of detection tasks. Several types of mammographic backgrounds and signals were examined by psychophysical experiments in a two-alternative forced-choice detection task. Based on hypotheses about the strategy used by the human observers, their signal-to-noise ratio was determined. This variable was also computed for a mathematical model based on statistical decision theory. Comparing the theoretical model with the experimental results showed how the anatomical structure is perceived. The experiments showed that the observer's behavior was highly dependent upon both system noise and the anatomical background. The anatomy partly acts as a signal recognizable as such and partly as pure noise that disturbs the detection process. This dual nature of the anatomy is quantified. It is shown that its effect varies according to its amplitude and the profile of the object being detected. The contribution of the noisy part of the anatomy is, in some situations, much greater than that of the system noise. In these situations, reducing the system noise by increasing the dose will not improve task performance. This observation indicates that the tradeoff between dose and image quality might be optimized by accepting higher system noise, which could allow better resolution, more contrast, or a lower dose.
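The abstract does not report its formulas; as a point of reference, the detectability index d' is a common way to express an observer's signal-to-noise ratio in a two-alternative forced-choice task, and the sketch below (assuming scipy is available) converts 2AFC percent correct into d' for an unbiased observer.

    # Generic psychophysics relation, not the specific observer model used in the study:
    # for an unbiased 2AFC observer, d' = sqrt(2) * Phi^-1(Pc).
    from scipy.stats import norm

    def dprime_from_2afc(percent_correct: float) -> float:
        """Detectability index (SNR-like quantity) from 2AFC proportion correct."""
        return 2 ** 0.5 * norm.ppf(percent_correct)

    print(dprime_from_2afc(0.85))  # ~1.47 for 85% correct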

Relevance: 30.00%

Abstract:

Sleep spindles are approximately 1 s bursts of 10-16 Hz activity that occur during stage 2 sleep. Spindles are highly synchronous across the cortex and thalamus in animals, and across the scalp in humans, implying correspondingly widespread and synchronized cortical generators. However, prior studies have noted occasional dissociations of the magnetoencephalogram (MEG) from the EEG during spindles, although detailed studies of this phenomenon have been lacking. We systematically compared high-density MEG and EEG recordings during naturally occurring spindles in healthy humans. As expected, EEG was highly coherent across the scalp, with consistent topography across spindles. In contrast, the simultaneously recorded MEG was not synchronous, but varied strongly in amplitude and phase across locations and spindles. Overall, average coherence between pairs of EEG sensors was approximately 0.7, whereas MEG coherence was approximately 0.3 during spindles. Whereas 2 principal components explained approximately 50% of EEG spindle variance, >15 were required for MEG. Each PCA component for MEG typically involved several widely distributed locations, which were relatively coherent with each other. These results show that, in contrast to current models based on animal experiments, multiple asynchronous neural generators are active during normal human sleep spindles and are visible to MEG. It is possible that these multiple sources may overlap sufficiently in different EEG sensors to appear synchronous. Alternatively, EEG recordings may reflect diffusely distributed synchronous generators that are less visible to MEG. An intriguing possibility is that MEG preferentially records from the focal core thalamocortical system during spindles, and EEG from the distributed matrix system.
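As a point of reference for the two summary statistics reported (band-limited pairwise coherence and the number of principal components needed to explain the variance), here is a minimal sketch; the sensor array, sampling rate, and band limits are placeholders, not the recording parameters of the study.

    import numpy as np
    from scipy.signal import coherence

    fs = 600.0                               # sampling rate (Hz), assumed
    x = np.random.randn(12, 3000)            # sensors x samples, stand-in for MEG/EEG epochs

    # Mean magnitude-squared coherence over all sensor pairs in the 10-16 Hz spindle band
    pair_coh = []
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            f, c = coherence(x[i], x[j], fs=fs, nperseg=512)
            band = (f >= 10) & (f <= 16)
            pair_coh.append(c[band].mean())
    print("mean pairwise coherence:", np.mean(pair_coh))

    # Number of principal components needed to explain 50% of the variance
    evals = np.linalg.eigvalsh(np.cov(x))[::-1]
    explained = np.cumsum(evals) / evals.sum()
    print("components for 50% variance:", int(np.searchsorted(explained, 0.5)) + 1)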

Relevance: 30.00%

Abstract:

The expansion of a recovering population - whether re-introduced or spontaneously returning - is shaped by (i) biological (intrinsic) factors such as the land tenure system or dispersal, (ii) the distribution and availability of resources (e.g. prey), (iii) habitat and landscape features, and (iv) human attitudes and activities. In order to develop efficient conservation and recovery strategies, we need to understand all these factors, predict the potential distribution, and explore ways to reach it. An increased number of lynx in the north-western Swiss Alps in the nineties led to a new controversy about the return of this cat. When the large carnivores were given legal protection in many European countries, most organizations and individuals promoting their protection did not foresee the consequences. Management plans describing how to handle conflicts with large predators are needed to find a balance between "overabundance" and extinction. Wildlife and conservation biologists need to evaluate the various threats confronting populations so that adequate management decisions can be taken. I developed a GIS probability model for the lynx, based on habitat information and radio-telemetry data from the Swiss Jura Mountains, in order to predict the potential distribution of the lynx in this mountain range, which is presently only partly occupied by lynx. Three of the 18 variables tested for each square kilometre, describing land use, vegetation, and topography, qualified to predict the probability of lynx presence. The resulting map was evaluated with data from dispersing subadult lynx. Young lynx that were not able to establish home ranges in what was identified as good lynx habitat did not survive their first year of independence, whereas the only one that died in good lynx habitat was illegally killed. Radio-telemetry fixes are often used as input data to calibrate habitat models. Radio-telemetry is the only way to gather accurate and unbiased data on habitat use by elusive larger terrestrial mammals. However, it is time-consuming and expensive, and can therefore only be applied in limited areas. Habitat models extrapolated over large areas can in turn be problematic, as habitat characteristics and availability may change from one area to the other. I analysed the predictive power of Ecological Niche Factor Analysis (ENFA) in Switzerland with the lynx as focal species. According to my results, the optimal sampling strategy to predict species distribution in an Alpine area lacking available data would be to pool presence cells from contrasted regions (Jura Mountains, Alps), whereas in regions with a low ecological variance (Jura Mountains), only local presence cells should be used for the calibration of the model. Dispersal influences the dynamics and persistence of populations and the distribution and abundance of species, and it gives communities and ecosystems their characteristic texture in space and time. Between 1988 and 2001, the spatio-temporal behaviour of subadult Eurasian lynx in two re-introduced populations in Switzerland was studied, based on 39 juvenile lynx, of which 24 were radio-tagged, to understand the factors influencing dispersal. Subadults become independent from their mothers at the age of 8-11 months. No sex bias was detected in either the dispersal rate or the distance moved. Lynx are conservative dispersers compared to bear and wolf, and they settled within or close to known lynx occurrences.
Dispersal distances reached in the high-density lynx population - shorter than those reported in other Eurasian lynx studies - are limited by habitat restrictions hindering connections with neighbouring metapopulations. I postulated that high lynx density would lead to an expansion of the population and validated my predictions with data from the north-western Swiss Alps, where a strong increase in lynx abundance took place around 1995. The general hypothesis that high population density will foster the expansion of the population was not confirmed. This has consequences for the re-introduction and recovery of carnivores in a fragmented landscape. To establish a strong source population in one place might not be an optimal strategy. Rather, population nuclei should be founded in several neighbouring patches. Exchange between established neighbouring subpopulations will take place later on, as adult lynx show a higher propensity to cross barriers than subadults. To estimate the potential population size of the lynx in the Jura Mountains and to assess possible corridors between this population and adjacent areas, I adapted a habitat probability model for lynx distribution in the Jura Mountains with new environmental data and extrapolated it over the entire mountain range. The model predicts a breeding population of 74-101 individuals, or 51-79 individuals when continuous habitat patches < 50 km2 are disregarded. The Jura Mountains could one day be part of a metapopulation, as potential corridors exist to the adjoining areas (Alps, Vosges Mountains, and Black Forest). Monitoring of population size and spatial expansion, as well as genetic surveillance, must be continued in the Jura Mountains, as the status of the population is still critical. ENFA was used to predict the potential distribution of lynx in the Alps. The resulting model divided the Alps into 37 suitable habitat patches ranging from 50 to 18,711 km2, covering a total area of about 93,600 km2. Using the range of lynx densities found in field studies in Switzerland, the Alps could host a population of 961 to 1,827 residents. The results of the cost-distance analysis revealed that all patches were within the reach of dispersing lynx, as the connection costs were in the range of the dispersal costs of radio-tagged subadult lynx moving through unfavorable habitat. Thus, the whole of the Alps could one day be considered a metapopulation. But experience suggests that only a few dispersers will cross unsuitable areas and barriers. This low migration rate may seldom allow the spontaneous foundation of new populations in unsettled areas. As an alternative to natural dispersal, the artificial transfer of individuals across the barriers should be considered. Wildlife biologists can play a crucial role in developing adaptive management experiments that help managers learn by trial. The case of the lynx in Switzerland is a good example of a fruitful cooperation between wildlife biologists, managers, decision makers, and politicians in an adaptive management process. This cooperation resulted in a Lynx Management Plan, implemented in 2000 and updated in 2004, which gives the cantons directives on how to handle lynx-related problems. The plan has already been put into practice, for example with regard to the translocation of lynx into unsettled areas.
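The abstract does not name the statistical method behind the GIS probability model, so the sketch below is only a stand-in for the general workflow: per-square-kilometre predictors (land use, vegetation, topography) mapped to a probability of lynx presence via logistic regression, with entirely synthetic data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_cells = 5000                                  # 1 km2 grid cells
    X = rng.normal(size=(n_cells, 3))               # e.g. land use, vegetation, topography scores
    logit = 1.2 * X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2]
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # synthetic presence/absence per cell

    model = LogisticRegression().fit(X, y)
    p_presence = model.predict_proba(X)[:, 1]       # values for a probability-of-presence map
    print("cells with p > 0.5:", int((p_presence > 0.5).sum()))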

Relevance: 30.00%

Abstract:

This work consists of three essays investigating the ability of structural macroeconomic models to price zero-coupon U.S. government bonds. 1. A small-scale 3-factor DSGE model implying a constant term premium is able to provide a reasonable fit for the term structure only at the expense of the persistence parameters of the structural shocks. A test of the structural model against one that has constant but unrestricted prices-of-risk parameters shows that the exogenous-prices-of-risk model is only weakly preferred. We provide an MLE-based variance-covariance matrix for the Metropolis proposal density that improves convergence speeds in MCMC chains. 2. A prices-of-risk specification that is affine in observable macro-variables is excessively flexible and provides term-structure fit without significantly altering the structural parameters. The exogenous component of the SDF separates the macro part of the model from the term structure, and the good term-structure fit is driven by an extremely volatile SDF and an inexplicable implied average short rate. We conclude that the no-arbitrage restrictions do not suffice to temper the SDF, so more restrictions are needed. We introduce a penalty-function methodology that proves useful in showing that affine prices-of-risk specifications are able to reconcile stable macro-dynamics with a good term-structure fit and a plausible SDF. 3. The level factor is reproduced most importantly by the preference shock, to which it is strongly and positively related, but technology and monetary shocks, with negative loadings, also contribute to its replication. The slope factor is related only to the monetary policy shocks and is poorly explained. We find that there are gains in in- and out-of-sample forecasts of consumption and inflation if term-structure information is used in a time-varying hybrid prices-of-risk setting. In-sample yield forecasts are better in models with non-stationary shocks for the period 1982-1988. After this period, time-varying market-price-of-risk models provide better in-sample forecasts. For the period 2005-2008, out-of-sample forecasts of consumption and inflation are better if term-structure information is incorporated in the DSGE model, but yields are better forecasted by a pure macro DSGE model.
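Essay 1 mentions an MLE-based variance-covariance matrix for the Metropolis proposal density. The sketch below illustrates that general idea (a random-walk proposal whose covariance is the scaled inverse Hessian of the log posterior at the mode) on a toy two-parameter target; the DSGE likelihood itself is not reproduced, and every function and value here is a placeholder.

    import numpy as np

    def log_post(theta):                       # toy log-posterior standing in for the DSGE posterior
        S = np.array([[1.0, 0.8], [0.8, 1.0]])
        return -0.5 * theta @ np.linalg.solve(S, theta)

    def numerical_hessian(f, x, h=1e-4):       # central-difference Hessian
        n = len(x); H = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
                H[i, j] = (f(x+ei+ej) - f(x+ei-ej) - f(x-ei+ej) + f(x-ei-ej)) / (4 * h * h)
        return H

    mode = np.zeros(2)                         # posterior mode (found by ML estimation in practice)
    prop_cov = np.linalg.inv(-numerical_hessian(log_post, mode)) * 2.38**2 / 2

    rng = np.random.default_rng(0)
    theta, lp, accepted = mode.copy(), log_post(mode), 0
    for _ in range(5000):
        cand = rng.multivariate_normal(theta, prop_cov)
        lp_cand = log_post(cand)
        if np.log(rng.uniform()) < lp_cand - lp:   # Metropolis acceptance step
            theta, lp, accepted = cand, lp_cand, accepted + 1
    print("acceptance rate:", accepted / 5000)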

Relevance: 30.00%

Abstract:

The objective of this work was to assess the degree of multicollinearity and to identify the variables involved in linear dependence relations in additive-dominant models. Data of birth weight (n=141,567), yearling weight (n=58,124), and scrotal circumference (n=20,371) of Montana Tropical composite cattle were used. Diagnosis of multicollinearity was based on the variance inflation factor (VIF) and on the evaluation of the condition indices and eigenvalues of the correlation matrix among explanatory variables. The first model studied (RM) included the fixed effect of dam age class at calving and the covariates associated with the direct and maternal additive and non-additive effects. The second model (R) included all the effects of the RM model except the maternal additive effects. Multicollinearity was detected in both models for all traits considered, with VIF values of 1.03-70.20 for RM and 1.03-60.70 for R. Collinearity increased with the number of variables in the model and with the decrease in the number of observations, and it was classified as weak, with condition index values between 10.00 and 26.77. In general, the variables associated with additive and non-additive effects were involved in multicollinearity, partly due to the natural connection between these covariables as fractions of the biological types in the breed composition.
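For reference, the two diagnostics named here can be computed directly from the correlation matrix of the explanatory variables: the VIFs are the diagonal of its inverse, and the condition indices are square roots of eigenvalue ratios. The design matrix below is a placeholder with one deliberately collinear column.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))
    X[:, 3] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=1000)   # induce near-collinearity

    R = np.corrcoef(X, rowvar=False)          # correlation matrix of explanatory variables
    vif = np.diag(np.linalg.inv(R))           # VIF_j = j-th diagonal element of R^-1
    eigvals = np.linalg.eigvalsh(R)
    cond_idx = np.sqrt(eigvals.max() / eigvals)

    print("VIF:", np.round(vif, 2))
    print("condition indices:", np.round(np.sort(cond_idx), 2))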

Relevance: 30.00%

Abstract:

The objective of this work was to compare random regression models for the estimation of genetic parameters for Guzerat milk production, using orthogonal Legendre polynomials. Records (20,524) of test-day milk yield (TDMY) from 2,816 first-lactation Guzerat cows were used. TDMY records, grouped into 10 monthly classes, were analyzed considering the additive genetic, permanent environmental, and residual effects as random, whereas the contemporary group, calving age (linear and quadratic effects), and the mean lactation curve were treated as fixed effects. Trajectories for the additive genetic and permanent environmental effects were modeled by means of a covariance function employing orthogonal Legendre polynomials ranging from the second to the fifth order. Residual variances were considered in one, four, six, or ten variance classes. The best model had six residual variance classes. The heritability estimates for the TDMY records varied from 0.19 to 0.32. According to the main criteria employed for model comparison, the random regression model with a second-order Legendre polynomial for the additive genetic effect and a fifth-order polynomial for the permanent environmental effect is adequate. The model with a second-order Legendre polynomial for the additive genetic effect and a fourth-order polynomial for the permanent environmental effect could also be employed in these analyses.
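As an illustration of the covariables such random regression models use, the sketch below evaluates normalized Legendre polynomials on 10 monthly test-day classes mapped to [-1, 1]; the normalization factor sqrt((2k+1)/2) is the one commonly used in these models, and the chosen order is just an example.

    import numpy as np
    from numpy.polynomial import legendre

    classes = np.arange(1, 11)                                                # 10 monthly classes
    x = 2 * (classes - classes.min()) / (classes.max() - classes.min()) - 1   # map to [-1, 1]

    order = 2                                                                 # e.g. additive genetic effect
    Phi = legendre.legvander(x, order)                                        # columns P0(x) .. P_order(x)
    Phi = Phi * np.sqrt((2 * np.arange(order + 1) + 1) / 2)                   # normalized polynomials
    print(np.round(Phi, 3))                                                   # 10 x 3 covariable matrix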

Relevance: 30.00%

Abstract:

Characterizing the risks posed by nanomaterials is extraordinarily complex because these materials can have a wide range of sizes, shapes, chemical compositions and surface modifications, all of which may affect toxicity. There is an urgent need for a testing strategy that can rapidly and efficiently provide a screening approach for evaluating the potential hazard of nanomaterials and inform the prioritization of additional toxicological testing where necessary. Predictive toxicity models could form an integral component of such an approach by predicting which nanomaterials, as a result of their physico-chemical characteristics, have potentially hazardous properties. Strategies for directing research towards predictive models and the ancillary benefits of such research are presented here.

Relevance: 30.00%

Abstract:

Falls are common in the elderly and potentially result in injury and disability. Thus, preventing falls as early as possible in older adults is a public health priority, yet there is no specific marker that is predictive of the first fall onset. We hypothesized that gait features should be the most relevant variables for predicting the first fall. Clinical baseline characteristics (e.g., gender, cognitive function) were assessed in 259 home-dwelling people aged 66 to 75 who had never fallen. Likewise, the global kinetic behavior of gait was recorded from 22 variables in 1,036 walking tests with an accelerometric gait analysis system. Afterward, monthly telephone monitoring recorded the date of the first fall over 24 months. A principal component analysis was used to assess the relationship between gait variables and fall status in four groups: non-fallers, fallers from 0 to 6 months, fallers from 6 to 12 months, and fallers from 12 to 24 months. The association of significant principal components (PC) with an increased risk of a first fall was then evaluated using the area under the receiver operating characteristic (ROC) curve. No effect of clinical confounding variables was shown as a function of group. An eigenvalue decomposition of the correlation matrix identified a large PC1 (termed "Global kinetics of gait pattern"), which accounted for 36.7% of total variance. Principal component loadings also revealed a PC2 (12.6% of total variance), related to "Global gait regularity." Subsequent ANOVAs showed that only PC1 discriminated fall status during the first 6 months, while PC2 discriminated first fall onset between 6 and 12 months. After one year, no PC was associated with falls. These results were bolstered by the ROC analyses, showing good predictive models of the first fall during the first six months or from 6 to 12 months. Overall, these findings suggest that performing a standardized walking test at least once a year is essential for fall prevention.
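A minimal sketch of the reported pipeline (principal components of the gait variables, then the area under the ROC curve for discriminating future fallers) is given below; the 259 x 22 data matrix and the fall labels are synthetic placeholders, not the study data.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    gait = rng.normal(size=(259, 22))              # participants x gait variables (placeholder)
    fell_6m = rng.binomial(1, 0.15, size=259)      # stand-in first-fall status within 6 months

    z = (gait - gait.mean(axis=0)) / gait.std(axis=0)
    scores = PCA(n_components=2).fit_transform(z)  # PC1 ("global kinetics"), PC2 ("regularity")

    print("AUC of PC1 for falls within 6 months:", roc_auc_score(fell_6m, scores[:, 0]))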

Relevance: 30.00%

Abstract:

The present study tests the relationships between three frequently used personality models, evaluated by the Temperament and Character Inventory-Revised (TCI-R), the Neuroticism Extraversion Openness Five Factor Inventory-Revised (NEO-FFI-R), and the Zuckerman-Kuhlman Personality Questionnaire-50-Cross-Cultural (ZKPQ-50-CC). The results were obtained with a sample of 928 volunteer subjects from the general population, aged between 17 and 28 years. Frequency distributions and alpha reliabilities for the three instruments were acceptable. Correlational and factorial analyses showed that several scales in the three instruments share an appreciable amount of common variance. Five factors emerged from a principal component analysis. The first factor comprised A (Agreeableness), Co (Cooperativeness), and Agg-Host (Aggressiveness-Hostility), with secondary loadings of C (Conscientiousness) and SD (Self-Directedness) from other factors. The second factor was composed of N (Neuroticism), N-Anx (Neuroticism-Anxiety), HA (Harm Avoidance), and SD (Self-Directedness). The third factor comprised Sy (Sociability), E (Extraversion), RD (Reward Dependence), ImpSS (Impulsive Sensation Seeking), and NS (Novelty Seeking). The fourth factor comprised Ps (Persistence), Act (Activity), and C, whereas the fifth and last factor was composed of O (Openness) and ST (Self-Transcendence). Confirmatory factor analyses indicate that the scales in each model are highly interrelated and define the specified latent dimension well. Similarities and differences between these three instruments are further discussed.
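A bare-bones version of the joint analysis described (principal components of the correlation matrix of all scale scores, retaining five components) might look like the sketch below; the scale-score matrix is a random placeholder rather than real questionnaire data, and the scale count is arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    scales = rng.normal(size=(928, 15))            # subjects x scale scores (TCI-R, NEO-FFI-R, ZKPQ)
    R = np.corrcoef(scales, rowvar=False)

    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1][:5]          # five components with the largest eigenvalues
    loadings = eigvecs[:, order] * np.sqrt(eigvals[order])   # component loadings
    print("variance explained:", np.round(eigvals[order] / eigvals.sum(), 3))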

Relevance: 30.00%

Abstract:

Signal transduction systems mediate the response and adaptation of organisms to environmental changes. In prokaryotes, this signal transduction is often carried out by two-component systems (TCS). These TCS are phosphotransfer protein cascades, and in their prototypical form they are composed of a sensor kinase (SK) that senses the environmental signals and a response regulator (RR) that regulates the cellular response. This basic motif can be modified by the addition of a third protein that interacts either with the SK or the RR in a way that could change the dynamic response of the TCS module. In this work we aim at understanding the effect of such an additional protein (which we call the "third component") on the functional properties of a prototypical TCS. To do so, we build mathematical models of TCS with alternative designs for their interaction with that third component. These mathematical models are analyzed in order to identify the differences in dynamic behavior inherent to each design, with respect to functionally relevant properties such as sensitivity to changes in either the parameter values or the molecular concentrations, temporal responsiveness, the possibility of multiple steady states, or stochastic fluctuations in the system. The differences are then correlated with the physiological requirements that impinge on the functioning of the TCS. This analysis sheds light on both the dynamic behavior of synthetically designed TCS and the conditions under which natural selection might favor each of the designs. We find that a third component that modulates SK activity increases the parameter space where a bistable response of the TCS module to signals is possible if the SK is monofunctional, but decreases it when the SK is bifunctional. The presence of a third component that modulates RR activity decreases the parameter space where a bistable response of the TCS module to signals is possible.
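For concreteness, a minimal ODE sketch of the prototypical TCS motif (signal-dependent SK autophosphorylation, phosphotransfer to the RR, and dephosphorylation of RR-P) is given below; the rate constants and total concentrations are arbitrary illustrative values, not parameters from the study, and the third-component designs are not modeled.

    import numpy as np
    from scipy.integrate import solve_ivp

    SK_tot, RR_tot = 1.0, 5.0                     # total kinase and response regulator (a.u.)
    k_auto, k_transfer, k_dephos = 0.5, 2.0, 0.2  # illustrative rate constants

    def tcs(t, y, signal):
        SKp, RRp = y
        dSKp = k_auto * signal * (SK_tot - SKp) - k_transfer * SKp * (RR_tot - RRp)
        dRRp = k_transfer * SKp * (RR_tot - RRp) - k_dephos * RRp
        return [dSKp, dRRp]

    sol = solve_ivp(tcs, (0, 100), [0.0, 0.0], args=(1.0,))
    print("steady-state phosphorylated RR:", round(float(sol.y[1, -1]), 3))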

Relevance: 30.00%

Abstract:

The Spanish Barley Breeding Program is carried out by four public research organizations located in the most representative barley-growing regions of Spain. The aim of this study is to evaluate the program retrospectively, with regard to: i) the progress achieved in grain yield, and ii) the extent and impact of genotype-by-environment interaction for grain yield. Grain yields and flowering dates of 349 advanced lines in generations F8, F9, and F10, plus checks, tested in 163 trials over 11 years, were analyzed. The locations are in the provinces of Albacete, Lleida, Valladolid, and Zaragoza. The data are highly unbalanced because the lines stayed in the program for a maximum of three years. Progress was estimated using relative grain yield and mixed models (REML) to homogenize the results among years and locations. There was evident progress in the program over the period studied, with increasing relative yields in each generation and with advanced lines surpassing the checks in the last two generations, although the rate of progress was uneven across locations. The genetic gain was greater from F8 to F9 than from F9 to F10. The largest non-purely environmental component of variance was genotype-by-location-by-year, meaning that the genotype-by-location pattern was highly unpredictable. The relationship between yield and flowering time was weak overall in the locations under study at this advanced stage of the program. The program can be continued with the same structure, although measures should be taken to explore the causes of slower progress at certain locations.
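The variance components discussed here come from a multi-environment-trial mixed model; a generic form of such a model (not necessarily the exact one fitted in this study) is:

    % Yield of genotype i at location j in year k, observation l:
    \[
      y_{ijkl} = \mu + g_i + \ell_j + t_k + (\ell t)_{jk} + (g\ell)_{ij} + (gt)_{ik} + (g\ell t)_{ijk} + e_{ijkl},
    \]
    % with random genetic terms having variances \sigma^2_g, \sigma^2_{g\ell},
    % \sigma^2_{gt}, \sigma^2_{g\ell t}. The finding that \sigma^2_{g\ell t}
    % (genotype-by-location-by-year) is the largest non-purely environmental
    % component is what makes the genotype-by-location pattern unpredictable.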

Relevance: 30.00%

Abstract:

Applications made for mobile devices are nowadays in wide use. Mobile applications typically offer their user a fixed, predefined functionality and cannot adapt to their changing usage environment. If an application were aware of its environment and of changes in it, it could offer the user features suited to the situation. However, distributed context-aware applications require a considerably more complex architecture than traditional applications in order to function. This work presents a software architecture intended for distributed, context-aware applications. The work is based on an architecture for mobile applications developed in the CAPNET research project at the University of Oulu. The purpose of this work is to provide solutions to the shortcomings that emerged during the development and testing of the CAPNET architecture. For example, the specification of the architecture's components should be refined, and the components should be divided into horizontal layers according to their properties and platform dependence. The work reviews existing technologies that support the development of distributed and context-aware systems, and their suitability for the CAPNET architecture is analysed. The CAPNET architecture is presented, and a new architecture and a layered grouping of the components are proposed. In the proposal, the components of the architecture and the structure of the system are specified and modelled with UML. The result of the work is an architecture specification that divides the components of the current architecture into layers, with the component interfaces defined clearly and precisely. The work also gives the project group a good starting point for designing and implementing the new architecture.

Relevance: 30.00%

Abstract:

This work studied external simulation models for a disc filter in an integrated simulation environment, with the aim of improving an existing mechanistic disc filter model. The model was built for APMS, a dynamic simulator developed for the needs of the paper industry; alongside the original mechanistic model, an external add-on model was created that exploits the disc filter manufacturer's measurement data. The availability of equipment data to filter users was improved by creating disc filter equipment data definitions on a server located on the Internet. The filter manufacturer can serve its customers by uploading equipment data to the server and linking the data to the simulation model. This is made possible by an integrated simulation environment used over the Internet, which is intended to comprehensively combine simulation and process design. The designer is offered tools for dynamic simulation, balance simulation, and flowsheet drawing with process equipment data readily available. These tools are to be implemented in a project called Galleria, which creates a process model and equipment data server on the Internet. Through the Galleria user interface, a process designer can use different simulation programs and the ready-made models created for them, and obtain up-to-date equipment data. The external disc filter model computes the filtrate flows and consistencies for the dirty, clear, and superclear filtrates. The input parameters of the model are the rotation speed of the discs, the consistency of the incoming feed, the freeness, and a control parameter that sets the ratio between the dirty and clear filtrates. The freeness indicates which pulp is being processed: the higher the freeness, the better the pulp filters and, generally, the cleaner the filtrates. The model parameters were tuned with regression analysis and with the help of feedback from the manufacturer. The user can choose whether to use the external or the original model. The original model must first be initialized by giving it nominal operating points for the flows and consistencies at a given rotation speed; the equations of the external model can be used for this initialization if the original model performs better than the external one. The external model can also be used without the simulation program, directly from the Galleria server. This allows the user to examine disc filter parameters and view filtration results from their own workstation anywhere, as long as an Internet connection is available. As a result of the work, the availability of disc filter equipment data to users improved, and the limitations and shortcomings of the original simulation model were reduced.
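As a loose illustration of such an "external" model, the sketch below fits a regression to a few hypothetical manufacturer measurements and predicts the filtrate flow from rotation speed, feed consistency, and freeness, with a single split parameter dividing it between dirty and clear filtrate. The superclear stream and the actual APMS/Galleria equations are omitted; the linear form and all numbers are invented placeholders.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # columns: rotation speed (rpm), feed consistency (%), freeness (ml CSF); target: total flow
    measurements = np.array([[1.0, 1.2, 550, 80.0],
                             [1.5, 1.0, 600, 110.0],
                             [2.0, 1.1, 650, 135.0],
                             [2.5, 0.9, 700, 160.0]])
    X, total_flow = measurements[:, :3], measurements[:, 3]
    model = LinearRegression().fit(X, total_flow)     # "parameters tuned by regression analysis"

    def external_filter_model(speed, consistency, freeness, split=0.4):
        """Return (dirty, clear) filtrate flows; `split` is the dirty/clear control parameter."""
        total = float(model.predict([[speed, consistency, freeness]])[0])
        return split * total, (1.0 - split) * total

    print(external_filter_model(1.8, 1.0, 620))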

Relevance: 30.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists of learning, from a subset of realizations, the relationship between proxy and exact curves.
In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
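A minimal sketch of the kind of functional error model described is given below: principal components of the discretized response curves play the role of FPCA, a linear regression maps proxy-curve scores to exact-curve scores on the training subset where both are known, and the fitted map then corrects all remaining proxy responses. The curves, dimensions, and the proxy's bias are synthetic placeholders.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n_real, n_train, n_t = 500, 40, 100            # realizations, training subset, time steps
    t = np.linspace(0, 1, n_t)
    exact = np.array([np.sin(2*np.pi*(t - rng.uniform(0, 0.2))) * rng.uniform(0.8, 1.2)
                      for _ in range(n_real)])     # stand-in for exact flow responses
    proxy = exact + 0.3*np.cos(2*np.pi*t) + rng.normal(0, 0.05, exact.shape)  # biased, noisy proxy

    pca_p, pca_e = PCA(5), PCA(5)                  # "FPCA" via PCA of the discretized curves
    sp = pca_p.fit_transform(proxy)                # proxy scores for all realizations
    se = pca_e.fit_transform(exact[:n_train])      # exact scores for the training subset only

    err_model = LinearRegression().fit(sp[:n_train], se)          # error model in score space
    exact_pred = pca_e.inverse_transform(err_model.predict(sp))   # corrected curves, all realizations

    print("RMSE proxy vs exact:    ", np.sqrt(np.mean((proxy - exact) ** 2)).round(3))
    print("RMSE corrected vs exact:", np.sqrt(np.mean((exact_pred - exact) ** 2)).round(3))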