953 results for Hierarchical Linear Modelling
Abstract:
This analysis was stimulated by the real data-analysis problem of household expenditure data. The full dataset contains expenditure data for a sample of 1224 households. The expenditure is broken down at 2 hierarchical levels: 9 major levels (e.g. housing, food, utilities, etc.) and 92 minor levels. There are also 5 factors and 5 covariates at the household level. Not surprisingly, there are a small number of zeros at the major level, but many zeros at the minor level. The question is how best to model the zeros. Clearly, models that try to add a small amount to the zero terms are not appropriate in general, as at least some of the zeros are clearly structural, e.g. alcohol/tobacco for households that are teetotal. The key question then is how to build suitable conditional models. For example, is the sub-composition of spending excluding alcohol/tobacco similar for teetotal and non-teetotal households? In other words, we are looking for sub-compositional independence. Also, what determines whether a household is teetotal? Can we assume that it is independent of the composition? In general, whether a household is teetotal will clearly depend on the household-level variables, so we need to be able to model this dependence. The other tricky question is that, with zeros on more than one component, we need to be able to model dependence and independence of zeros on the different components. Lastly, while some zeros are structural, others may not be; for example, for expenditure on durables, it may be chance as to whether a particular household spends money on durables within the sample period. This would clearly be distinguishable if we had longitudinal data, but may still be distinguishable by looking at the distribution, on the assumption that random zeros will usually arise where any non-zero expenditure is not small. While this analysis is based on economic data, the ideas carry over to many other situations, including geological data, where minerals may be missing for structural reasons (similar to alcohol) or missing because they occur only in random regions which may be missed in a sample (similar to the durables).
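A minimal sketch of the sub-compositional comparison posed above, assuming invented spending data and category positions: the composition excluding alcohol/tobacco is re-closed and compared between teetotal and non-teetotal households.

```python
# Hedged sketch: close the spending composition excluding alcohol/tobacco
# and compare teetotal vs non-teetotal households. Data, group sizes and
# the column index of alcohol/tobacco are all invented.
import numpy as np

rng = np.random.default_rng(9)
spend = rng.gamma(2.0, 50.0, size=(200, 9))     # 9 major expenditure levels
teetotal = rng.random(200) < 0.3
spend[teetotal, 3] = 0.0                        # column 3 = alcohol/tobacco

sub = np.delete(spend, 3, axis=1)               # drop alcohol/tobacco
sub = sub / sub.sum(axis=1, keepdims=True)      # re-close the sub-composition

# Sub-compositional independence would imply similar mean sub-compositions:
print("teetotal mean:    ", np.round(sub[teetotal].mean(axis=0), 3))
print("non-teetotal mean:", np.round(sub[~teetotal].mean(axis=0), 3))
```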
Abstract:
An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, and provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of the related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression as an alternative to stepwise selection of predictors, and methods for the identification of interactions through a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance the application of these methods to ecological modeling.
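As a small illustration of the ridge-regression alternative to stepwise predictor selection mentioned above, the sketch below fits a plain logistic GLM and a ridge-penalised one to synthetic data; all data and parameter choices are invented.

```python
# Hedged sketch: instead of dropping predictors stepwise, ridge regression
# shrinks all coefficients towards zero. Synthetic presence/absence data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
X = rng.normal(size=(300, 10))                 # ten candidate predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 300) > 0).astype(int)

glm = LogisticRegression(penalty=None, max_iter=1000).fit(X, y)
ridge = LogisticRegression(penalty="l2", C=0.1, max_iter=1000).fit(X, y)

print("max |coef|, GLM:  ", round(np.abs(glm.coef_).max(), 2))
print("max |coef|, ridge:", round(np.abs(ridge.coef_).max(), 2))
```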
Abstract:
Context There are no evidence syntheses available to guide clinicians on when to titrate antihypertensive medication after initiation. Objective To model the blood pressure (BP) response after initiating antihypertensive medication. Data sources Electronic databases, including Medline, Embase, the Cochrane Register and reference lists, up to December 2009. Study selection Trials that initiated antihypertensive medication as single therapy in hypertensive patients who were either drug naive or had a placebo washout from previous drugs. Data extraction Office BP measurements at a minimum of 2-weekly intervals for a minimum of 4 weeks. An asymptotic model of BP response was assumed, and non-linear mixed-effects modelling was used to calculate model parameters. Results Eighteen trials that recruited 4168 patients met the inclusion criteria. The time to reach 50% of the maximum estimated BP-lowering effect was about 1 week (systolic 0.91 weeks, 95% CI 0.74 to 1.10; diastolic 0.95, 0.75 to 1.15). Models incorporating drug class as a source of variability did not improve the fit of the data. Incorporating the presence of a titration schedule improved model fit for both systolic and diastolic pressure. Titration increased both the predicted maximum effect and the time taken to reach 50% of the maximum (systolic 1.2 vs 0.7 weeks; diastolic 1.4 vs 0.7 weeks). Conclusions Estimates of the maximum efficacy of antihypertensive agents can be made early after starting therapy. This knowledge will guide clinicians in deciding whether a newly started antihypertensive agent is likely to be effective at controlling BP.
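The abstract names an asymptotic model of BP response but not its exact parameterisation; a common form with the same behaviour (effect rising to a plateau, with a half-effect time t50) is drop(t) = Emax·(1 − exp(−ln 2 · t/t50)). A minimal sketch with invented data:

```python
# Hedged sketch: fit an asymptotic BP-response curve by nonlinear least
# squares. The parameterisation and the 2-weekly "measurements" are
# illustrative assumptions, not the paper's fitted model.
import numpy as np
from scipy.optimize import curve_fit

def bp_drop(t, emax, t50):
    """Asymptotic BP-lowering effect (mmHg) at t weeks after starting therapy."""
    return emax * (1.0 - np.exp(-np.log(2.0) * t / t50))

t_weeks = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)
drop_mmhg = np.array([0.0, 9.5, 12.8, 14.1, 14.6, 14.9, 15.0])

(emax, t50), _ = curve_fit(bp_drop, t_weeks, drop_mmhg, p0=(15.0, 1.0))
print(f"Emax ~ {emax:.1f} mmHg, time to 50% of max effect ~ {t50:.2f} weeks")
```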
Abstract:
Soil penetration resistance is an important property that affects root growth and elongation and water movement in the soil. Since no-till systems tend to increase organic matter in the soil, the purpose of this study was to evaluate how efficiently soil penetration resistance can be estimated by a proposed model based on moisture content, bulk density and organic matter content in an Oxisol containing 665, 221 and 114 g kg⁻¹ of clay, silt and sand, respectively, under annual no-till cropping in Londrina, Paraná State, Brazil. Penetration resistance was evaluated at random locations continuously from May 2008 to February 2011, using an impact penetrometer, for a total of 960 replications. For the measurements, soil was sampled at depths of 0 to 20 cm to determine gravimetric moisture (G), bulk density (D) and organic matter content (M). The penetration resistance (PR) curve was fitted using two non-linear models (PR = a·D^b·G^c and PR′ = a·D^b·G^c·M^d), where a, b, c and d are coefficients of the fitted model. The model that included M was the more efficient of the two for estimating PR, explaining 91% of PR variability, compared with 82% for the other model.
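A sketch of fitting the two stated power-law models with nonlinear least squares; the data here are synthetic stand-ins for the 960 field measurements:

```python
# Hedged sketch: fit PR = a*D**b*G**c and PR' = a*D**b*G**c*M**d and compare
# their R^2, as in the abstract. All data and coefficients are invented.
import numpy as np
from scipy.optimize import curve_fit

def pr_model(X, a, b, c):
    D, G, M = X
    return a * D**b * G**c

def pr_model_om(X, a, b, c, d):
    D, G, M = X
    return a * D**b * G**c * M**d

rng = np.random.default_rng(0)
D = rng.uniform(1.0, 1.4, 200)      # bulk density, Mg m^-3
G = rng.uniform(0.20, 0.35, 200)    # gravimetric moisture, kg kg^-1
M = rng.uniform(30.0, 60.0, 200)    # organic matter, g kg^-1
PR = 0.05 * D**6 * G**-1.5 * M**0.4 * rng.lognormal(0, 0.1, 200)

for f, p0 in [(pr_model, (0.1, 4.0, -1.0)), (pr_model_om, (0.1, 4.0, -1.0, 0.5))]:
    popt, _ = curve_fit(f, (D, G, M), PR, p0=p0, maxfev=10000)
    resid = PR - f((D, G, M), *popt)
    r2 = 1 - np.sum(resid**2) / np.sum((PR - PR.mean())**2)
    print(f.__name__, "R^2 =", round(r2, 3))
```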
Abstract:
Aims: To assess the potential distribution of an obligate seeder and active pyrophyte, Cistus salviifolius, a vulnerable species in the Swiss Red List; to derive scenarios by changing the fire return interval; and to discuss the results from a conservation perspective. A more general aim is to assess the impact of fire as a natural factor influencing the vegetation of the southern slopes of the Alps. Location: Alps, southern Switzerland. Methods: Presence-absence data to fit the model were obtained from the most recent field mapping of C. salviifolius. The quantitative environmental predictors used in this study include topographic, climatic and disturbance (fire) predictors. Models were fitted by logistic regression and evaluated by jackknife and bootstrap approaches. Changes in fire regime were simulated by increasing the return interval of fire (simulating longer periods without fire). Two scenarios were considered: no fire in the past 15 years, or no fire in the past 35 years. Results: Rock cover, slope, topographic position, potential evapotranspiration and time elapsed since the last fire were selected in the final model. The Nagelkerke R² of the model for C. salviifolius was 0.57 and the jackknife area under the curve (AUC) was 0.89. The bootstrap evaluation revealed model robustness. Increasing the fire return interval to 15 or 35 years caused the modelled C. salviifolius population to decline by 30% and 40%, respectively. Main conclusions: Although fire plays a significant role, topography and rock cover appear to be the most important predictors, suggesting that the distribution of C. salviifolius in the southern Swiss Alps is closely related to the availability of supposedly competition-free sites, such as emerging bedrock, ridge locations or steep slopes. Fire more likely plays a secondary role, allowing C. salviifolius to extend its occurrence temporarily by increasing germination rates and reducing competition from surrounding vegetation. To maintain a viable dormant seed bank for C. salviifolius, conservation managers should consider carrying out vegetation clearing and managing wildfire propagation to reduce competition and ensure sufficient recruitment for this species.
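A minimal sketch of the modelling and evaluation steps named above (logistic regression plus a bootstrap check), with invented predictors and presence-absence data; the AUC stands in for the paper's jackknife/bootstrap evaluation:

```python
# Hedged sketch: logistic-regression habitat model with a bootstrap AUC
# check. Predictor columns are illustrative; the paper's final model used
# rock cover, slope, topographic position, potential evapotranspiration
# and time since last fire.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 400
X = np.column_stack([
    rng.uniform(0, 100, n),   # rock cover (%)
    rng.uniform(0, 45, n),    # slope (degrees)
    rng.uniform(0, 60, n),    # years since last fire
])
logit = -3 + 0.03 * X[:, 0] + 0.05 * X[:, 1] - 0.04 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)

# Bootstrap evaluation: refit on resamples, score on the full data.
aucs = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
print("AUC (full fit):", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 2))
print("bootstrap AUC mean:", round(np.mean(aucs), 2))
```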
Abstract:
Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced-complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales.
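A toy sketch of the fixed-lid planar water-surface approximation credited above for the RC model's depth estimates: depth at each grid cell is a uniformly sloping water surface minus local bed elevation. All numbers are illustrative:

```python
# Hedged sketch: in a very low-slope sand-bed river, flow depth can be
# approximated as (planar water surface) - (bed elevation), clipping dry
# cells to zero. Reach length, slopes and elevations are invented.
import numpy as np

nx, ny = 200, 50
x = np.linspace(0, 30_000, nx)                 # streamwise distance (m)
bed = 20 - 1e-4 * x[:, None] + np.random.default_rng(2).normal(0, 0.5, (nx, ny))

water_slope = 3e-5                             # typical large-river slope
ws_upstream = 22.0                             # water-surface elevation at x=0 (m)
water_surface = ws_upstream - water_slope * x[:, None]   # planar "fixed lid"

depth = np.clip(water_surface - bed, 0.0, None)  # dry cells get zero depth
print("mean wetted depth (m):", round(depth[depth > 0].mean(), 2))
```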
Abstract:
One of the most important issues in molecular biology is to understand the regulatory mechanisms that control gene expression. Gene expression is often regulated by proteins, called transcription factors, which bind to short (5 to 20 base pairs), degenerate segments of DNA. Experimental efforts towards understanding the sequence specificity of transcription factors are laborious and expensive, but can be substantially accelerated with the use of computational predictions. This thesis describes the use of algorithms and resources for transcription factor binding site analysis in quantitative modelling, where probabilistic models are built to represent the binding properties of a transcription factor and can be used to find new functional binding sites in genomes. Initially, an open-access database (HTPSELEX) was created, holding high-quality binding sequences for two eukaryotic families of transcription factors, namely CTF/NF1 and LEF1/TCF. The binding sequences were elucidated using a recently described experimental procedure called HTP-SELEX, which allows the generation of a large number (>1000) of binding sites using mass sequencing technology. For each HTP-SELEX experiment we also provide accurate primary experimental information about the protein material used, details of the wet-lab protocol, an archive of sequencing trace files, and assembled clone sequences of binding sequences. The database also offers reasonably large SELEX libraries obtained with conventional low-throughput protocols. The database is available at http://wwwisrec.isb-sib.ch/htpselex/ and ftp://ftp.isrec.isb-sib.ch/pub/databases/htpselex. The Expectation-Maximisation (EM) algorithm is one of the most frequently used methods for estimating probabilistic models of the sequence specificity of transcription factors. We present computer simulations to estimate the precision of EM-estimated models as a function of data set parameters (such as the length of the initial sequences, the number of initial sequences, and the percentage of non-binding sequences). We observed a remarkable robustness of the EM algorithm with regard to the length of the training sequences and the degree of contamination. The HTPSELEX database and the benchmark results for the EM algorithm formed part of the foundation for the subsequent project, in which a statistical framework called a hidden Markov model was developed to represent the sequence specificity of the transcription factors CTF/NF1 and LEF1/TCF using the HTP-SELEX experiment data. The hidden Markov model framework is capable of both predicting and classifying CTF/NF1 and LEF1/TCF binding sites. A covariance analysis of the binding sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism. We then tested the LEF1/TCF model by computing binding scores for a set of LEF1/TCF binding sequences for which relative affinities were determined experimentally using non-linear regression. The predicted and experimentally determined binding affinities correlated well.
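A minimal sketch of EM estimation of a binding-site model, in the spirit of the simulations described: a "one occurrence per sequence" (OOPS) motif model in which the E-step computes a posterior over site positions and the M-step re-estimates the position weight matrix (PWM). Sequences, motif width and initialisation are invented:

```python
# Hedged sketch: OOPS-style EM for a fixed-width motif. Each sequence is
# assumed to carry exactly one site of width W at an unknown position.
import numpy as np

BASES = "ACGT"
W = 4                                    # motif width (illustrative)
seqs = ["GGTACGTTAC", "TTTACGTGGG", "CCTACGTCCA", "AATACGTAAT"]

def one_hot(seq):
    return np.array([[b == c for c in BASES] for b in seq], float)

X = [one_hot(s) for s in seqs]
rng = np.random.default_rng(3)
pwm = rng.dirichlet(np.ones(4), size=W)  # random initial PWM, shape (W, 4)
bg = np.full(4, 0.25)                    # uniform background model

for _ in range(50):
    counts = np.zeros((W, 4))
    for x in X:
        L = len(x)
        # Likelihood ratio of the motif starting at each position j.
        lik = np.array([
            np.prod(np.sum(pwm * x[j:j + W], axis=1)) /
            np.prod(np.sum(bg * x[j:j + W], axis=1))
            for j in range(L - W + 1)
        ])
        post = lik / lik.sum()           # E-step: posterior over positions
        for j, p in enumerate(post):     # M-step: accumulate expected counts
            counts += p * x[j:j + W]
    pwm = (counts + 0.1) / (counts + 0.1).sum(axis=1, keepdims=True)

print(np.round(pwm, 2))                  # should concentrate on the TACG core
```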
Abstract:
Electrical Impedance Tomography (EIT) is an imaging method which enables a volume conductivity map of a subject to be produced from multiple impedance measurements. It has the potential to become a portable non-invasive imaging technique of particular use in imaging brain function. Accurate numerical forward models may be used to improve image reconstruction but, until now, have employed an assumption of isotropic tissue conductivity. This may be expected to introduce inaccuracy, as body tissues, especially those such as white matter and the skull in head imaging, are highly anisotropic. The purpose of this study was, for the first time, to develop a method for incorporating anisotropy into a forward numerical model for EIT of the head and to assess the resulting improvement in image quality in the case of linear reconstruction for one example of the human head. A realistic Finite Element Model (FEM) of an adult human head with segments for the scalp, skull, CSF and brain was produced from a structural MRI. Anisotropy of the brain was estimated from a diffusion tensor MRI of the same subject, and anisotropy of the skull was approximated from the structural information. A method for incorporating anisotropy into the forward model and using it in image reconstruction was produced. The improvement in reconstructed image quality was assessed in computer simulation by producing forward data and then performing linear reconstruction using a sensitivity matrix approach. The mean boundary data difference between anisotropic and isotropic forward models for a reference conductivity was 50%. Use of the correct anisotropic FEM in image reconstruction, as opposed to an isotropic one, corrected an error of 24 mm in imaging a 10% conductivity decrease located in the hippocampus, improved localisation for conductivity changes deep in the brain and due to epilepsy by 4-17 mm, and, overall, led to a substantial improvement in image quality. This suggests that incorporating anisotropy into numerical models used for image reconstruction is likely to improve EIT image quality.
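A toy sketch of the sensitivity matrix approach to linear reconstruction named above: boundary-voltage changes dv relate to conductivity changes dσ through a Jacobian J from the forward model, and dσ is recovered with Tikhonov regularisation. The J here is a random stand-in, not a real head model:

```python
# Hedged sketch: minimum-norm Tikhonov reconstruction
#   dsigma = J^T (J J^T + lambda I)^(-1) dv
# with an invented Jacobian, element count and regularisation heuristic.
import numpy as np

rng = np.random.default_rng(4)
n_meas, n_elem = 258, 5000            # measurements, FEM elements (illustrative)
J = rng.normal(size=(n_meas, n_elem)) # sensitivity (Jacobian) matrix stand-in

dsigma_true = np.zeros(n_elem)
dsigma_true[2400:2420] = -0.10        # a local 10% conductivity decrease
dv = J @ dsigma_true + rng.normal(0, 1e-3, n_meas)

lam = 1e-2 * np.trace(J @ J.T) / n_meas        # heuristic regularisation weight
dsigma = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(n_meas), dv)

print("peak recovered change at element:", int(np.argmin(dsigma)))
```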
Abstract:
Conservation biology is commonly associated with the protection of small, endangered populations. Nevertheless, large or potentially large populations may also need management to prevent the negative effects of overpopulation. As there are both qualitative and quantitative differences between protecting small populations and controlling large ones, distinct methods and models are needed. The aim of this work was to develop predictive models of large-population dynamics, as well as software tools to estimate the parameters of these models and to test management scenarios. The Alpine ibex (Capra ibex ibex), which has expanded spectacularly since its reintroduction in Switzerland at the beginning of the 20th century, was used as a paradigm species. This task was achieved in three steps. First, a local population dynamics model was developed specifically for the ibex: the underlying age- and sex-structured model is based on a Leslie matrix approach with the addition of density-dependence, environmental stochasticity and regulation culling. This model was implemented in a management-support software named SIM-Ibex, allowing the maintenance of census data, automated parameter estimation, and the tuning and simulation of culling strategies. However, population dynamics are driven not only by demographic factors but also by dispersal and the colonisation of new areas, so habitat suitability and dispersal obstacles also had to be modelled. For this, a software package named Biomapper was developed. Its central module is based on the Ecological Niche Factor Analysis (ENFA), whose principle is to compute marginality and specialisation factors of the ecological niche from a set of environmental predictors and species presence data. All Biomapper modules are linked to Geographic Information Systems (GIS); they cover all operations of data importation, predictor preparation, ENFA and habitat suitability map computation, result validation and further processing; a module also allows the mapping of dispersal barriers and corridors. The application domain of ENFA was then explored by means of a simulated species distribution. It was compared with a commonly used habitat suitability method, the Generalised Linear Model (GLM), and proved better suited for spreading or cryptic species. Demographic and landscape information was finally merged into a global model. To cope with landscape realism and the technical constraints of large-population modelling, a cellular automaton approach was chosen: the study area is modelled by a lattice of hexagonal cells, each characterised by a few fixed properties (a carrying capacity and six impermeability rates quantifying exchanges between adjacent cells) and one variable, population density. The latter varies according to local reproduction/survival and dispersal dynamics, modified by density-dependence and stochasticity. A software named HexaSpace was developed to perform two functions: 1° calibrating the automaton on the basis of local population dynamics models (e.g. computed by SIM-Ibex) and a habitat suitability map (e.g. computed by Biomapper); 2° running simulations. It allows the study of the spread of an invading species across a complex landscape made of variously suitable areas and dispersal barriers. This model was applied to the history of the ibex reintroduction in the Bernese Alps (Switzerland). SIM-Ibex is now used by wildlife managers and government inspectors to prepare and verify culling plans. Biomapper has been applied to several species (both plants and animals) around the world. Likewise, whilst HexaSpace was originally designed for terrestrial animal species, it could easily be extended to model plant propagation or the dispersal of flying animals. As these programs were designed to build a complex, realistic model from raw data, and as they benefit from an intuitive user interface, they lend themselves to many applications in conservation biology. Moreover, theoretical questions in the fields of population and landscape ecology might also be addressed by these approaches.
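A minimal sketch of the kind of model SIM-Ibex is described as implementing: an age-structured Leslie-matrix projection with simple density dependence, environmental stochasticity and a culling term. All rates are invented, not the fitted ibex parameters:

```python
# Hedged sketch: Leslie-matrix projection with density dependence,
# environmental noise and a fixed 5% regulation cull. Four age classes
# and all vital rates are illustrative.
import numpy as np

fert = np.array([0.0, 0.4, 0.8, 0.8])      # per-class fecundity
surv = np.array([0.6, 0.85, 0.9])          # survival to the next age class
K = 500.0                                   # carrying capacity

L = np.zeros((4, 4))
L[0, :] = fert
L[np.arange(1, 4), np.arange(3)] = surv     # sub-diagonal survival terms

rng = np.random.default_rng(5)
n = np.array([100.0, 60.0, 40.0, 30.0])
for year in range(25):
    density = min(n.sum() / K, 1.0)
    eps = rng.normal(1.0, 0.1)                # environmental stochasticity
    n = (L @ n) * eps * (1.0 - 0.5 * density) # density-dependent growth
    n *= 0.95                                 # 5% regulation culling
print("population after 25 years:", int(n.sum()))
```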
Abstract:
Palinspastic reconstructions offer an ideal framework for geological, geographical, oceanographic and climatological studies. As historians of the Earth, "reconstructers" try to decipher the past. Ever since learning that continents move, geologists have been trying to retrace the distribution of the continents through the ages. If Wegener's view of continental motion was revolutionary at the beginning of the 20th century, we have known since the early 1960s that continents do not drift aimlessly amid the oceans but belong to a larger ensemble combining continental and oceanic crust: the tectonic plates. Unfortunately, mainly for technical and historical reasons, this idea has yet to receive sufficient echo within the reconstruction community. However, we are intimately convinced that, by applying specific methods and principles, we can move beyond the traditional "Wegenerian" point of view to, at last, reach real plate tectonics. The main aim of this study is to defend this point of view by presenting, in full detail, our methods and tools. Starting with the paleomagnetic and paleogeographic data classically used in reconstruction studies, we developed a methodology that places the plates and their kinematics at the centre of the problem. Using assemblies of continents (referred to as "key assemblies") as anchors distributed over the whole scope of our study (ranging from Eocene to Cambrian times), we develop geodynamic scenarios leading from one to the next, from the past to the present. In between, lithospheric plates are progressively reconstructed by adding or removing oceanic material (symbolised by synthetic isochrons) to the major continents. Except during collisions, plates are moved as single rigid entities. The only evolving elements are the plate boundaries, which are preserved, follow a consistent geodynamic evolution through time and always form an interconnected network in space. This "dynamic plate boundaries" approach integrates plate buoyancy factors, ocean spreading rates, subsidence patterns, stratigraphic and paleobiogeographic data, as well as major tectonic and magmatic events. It offers good control on plate kinematics and provides severe constraints for the model. This multi-source approach requires efficient data management. Prior to this study, the critical mass of necessary data had become a scarcely surmountable obstacle. GIS (Geographic Information Systems) and geodatabases are informatics tools specifically devoted to storing, analysing and managing spatially referenced data and their attributes. By developing the PaleoDyn database in ArcGIS we converted this mass of scattered data from the geological record into valuable geodynamic information easily accessible for the creation of reconstructions. At the same time, purpose-built tools both facilitated the reconstruction work (task automation) and enhanced the model by greatly increasing the kinematic control of plate motions through plate velocity models. Based on the 340 newly defined terranes, we developed a set of 35 reconstructions, each associated with its own velocity model. Using this unique dataset we can now tackle major issues of modern geology, such as global sea-level variations and climate change. We began by studying one of the major unsolved issues of modern plate tectonics: the mechanisms driving plate motions. We observed that, throughout Earth's history, plate rotation poles (describing plate motions across the Earth's surface) tend to follow a roughly linear distribution along a band going from the northern Pacific through northern South America, the central Atlantic, northern Africa and central Asia up to Japan. Essentially, this signifies that plates tend to escape this median plane. Barring an unidentified methodological bias, we interpret this as a possible secular influence of the Moon on plate motions. The oceanic realm is the cornerstone of our model, and we took particular care to reconstruct it in detail. In this model, the oceanic crust is preserved from one reconstruction to the next. The crustal material is symbolised by synthetic isochrons whose ages are known. We also reconstruct the margins (active or passive), mid-ocean ridges and intra-oceanic subduction zones. Using this detailed oceanic dataset, we developed unique 3-D bathymetric models offering far better precision than previously available.
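As a small illustration of the plate-kinematic machinery behind such velocity models, the sketch below rotates a point about an Euler pole using Rodrigues' rotation formula; the pole, point and angle are hypothetical:

```python
# Hedged sketch: rigid-plate motion on a sphere is a rotation about an
# Euler pole. Rotate a (lat, lon) point by `angle_deg` degrees about a
# given pole; all coordinates here are invented.
import numpy as np

def to_xyz(lat, lon):
    lat, lon = np.radians([lat, lon])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def rotate(point, pole, angle_deg):
    k, v = to_xyz(*pole), to_xyz(*point)
    a = np.radians(angle_deg)
    # Rodrigues' formula: v' = v cos a + (k x v) sin a + k (k.v)(1 - cos a)
    v_rot = (v * np.cos(a) + np.cross(k, v) * np.sin(a)
             + k * np.dot(k, v) * (1 - np.cos(a)))
    return np.degrees([np.arcsin(v_rot[2]), np.arctan2(v_rot[1], v_rot[0])])

# Rotate a point 10 degrees about a pole in the central Atlantic (hypothetical).
print(rotate(point=(10.0, -30.0), pole=(60.0, -20.0), angle_deg=10.0))
```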
Abstract:
The present study deals with two different servo systems. In the first, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. In the second, an electromagnetic linear motor, suppression of mechanical vibration and position tracking of a reference model are studied using a neural network and an adaptive backstepping controller, respectively. The research methods are described below. Electro-Hydraulic Servo Systems (EHSS) are commonly used in industry. These kinds of systems are nonlinear in nature and their dynamic equations have several unknown parameters. System identification is a prerequisite for the analysis of a dynamic system. Differential Evolution (DE) is one of the most promising novel evolutionary algorithms for solving global optimization problems. In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits on the variables, to find the best parameters of a servo-hydraulic system with a flexible load. DE offers fast convergence and accurate solutions regardless of the initial parameter values. The control of hydraulic servo systems has been the focus of intense research over the past decades. These systems are nonlinear in nature and generally difficult to control, since changing system parameters while keeping the same gains will cause overshoot or even loss of stability. The highly non-linear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study is concerned with a second-order reference model for the positioning control of a flexible-load servo-hydraulic system using fuzzy gain scheduling. In the present research, acceleration feedback was used to compensate for the lack of damping in the hydraulic system. For comparison, a P controller with feed-forward acceleration and different gains in extension and retraction was used. The design procedure for the controller and the experimental results are discussed. The results suggest that the fuzzy gain-scheduling controller decreases the position-reference tracking error. The second part of the research concerned a Permanent Magnet Linear Synchronous Motor (PMLSM). Here, a recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed using a hybrid recurrent neural network. The differential evolution strategy and the Kalman filter are used to avoid the local minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a non-linear simulation model built in Matlab Simulink and then implemented on a practical test rig. The proposed method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to drive the flexible load to the desired position reference as fast as possible and without excessive oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed by the controller are estimated using a Kalman filter.
The proposed controller is implemented and tested in a linear motor test drive and responses are presented.
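A sketch of DE-based parameter identification as described above, using scipy's differential_evolution to recover the parameters of a simple second-order model from a noisy step response; the plant model and data are illustrative stand-ins for the servo-hydraulic system:

```python
# Hedged sketch: identify (wn, zeta) of a second-order plant by minimising
# the squared error between simulated and "measured" step responses with
# Differential Evolution. Plant, bounds and noise level are invented.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import differential_evolution

t = np.linspace(0, 2, 200)

def step_response(params, t):
    wn, zeta = params
    # x'' + 2*zeta*wn*x' + wn^2*x = wn^2 * u, with unit step input u
    def rhs(s, t):
        x, v = s
        return [v, wn**2 * (1.0 - x) - 2 * zeta * wn * v]
    return odeint(rhs, [0.0, 0.0], t)[:, 0]

true = (8.0, 0.3)
measured = step_response(true, t) + np.random.default_rng(6).normal(0, 0.01, t.size)

def cost(params):
    return np.sum((step_response(params, t) - measured) ** 2)

result = differential_evolution(cost, bounds=[(1.0, 20.0), (0.05, 1.0)], seed=0)
print("identified (wn, zeta):", np.round(result.x, 2))
```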
Abstract:
This study was carried out to develop functions that could explain the growth of Oxalis latifolia, both in the early stages and throughout the season, contributing to the improvement of its cultural control. Bulbs of the Cornwall form of O. latifolia were buried at 1 and 8 cm in March 1999 and 2000. Destructive samples were taken at fixed times, and at each time the corresponding BBCH scale codes as well as the absolute numbers of growing and adult leaves were recorded. Using the absolute number of adult leaves (transformed to percentages), a three-parameter Gaussian curve explaining growth during the season (R² = 0.9355) was developed. The BBCH scale permitted the fit of two regression lines that were accurately adjusted for each burial depth (R² = 0.9969 and R² = 0.9930 for 1 and 8 cm, respectively). The best moment for an early defoliation in northern Spain could be calculated with these regression lines, and was found to be the second week of May. In addition, it was observed that a burial depth of 8 cm does not affect the growth pattern of the weed, but does affect the number of leaves produced, which decreases to less than half of the number produced at 1 cm.
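A minimal sketch of fitting a three-parameter Gaussian growth curve of the kind described, with invented weekly counts:

```python
# Hedged sketch: fit pct(t) = a * exp(-((t - t0)/b)^2) to the percentage of
# adult leaves across the season. Data points are illustrative, not the
# paper's measurements.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, t0, b):
    """a = peak %, t0 = week of peak, b = spread (weeks)."""
    return a * np.exp(-((t - t0) / b) ** 2)

weeks = np.array([4, 8, 12, 16, 20, 24, 28], dtype=float)
adult_pct = np.array([5, 30, 75, 100, 70, 35, 10], dtype=float)

(a, t0, b), _ = curve_fit(gaussian, weeks, adult_pct, p0=(100, 16, 6))
print(f"peak {a:.0f}% at week {t0:.1f}, spread {b:.1f} weeks")
```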
Abstract:
The current challenge in a context of major environmental change is to anticipate the responses of species to future landscape and climate scenarios. In the Mediterranean basin, climate change is one of the most powerful driving forces of fire dynamics, with fire frequency and impact having increased markedly in recent years. Species distribution modelling plays a fundamental role in this challenge, but better integration of available ecological knowledge is needed to adequately guide conservation efforts. Here, we quantified changes in habitat suitability of an early-succession bird in Catalonia, the Dartford Warbler (Sylvia undata), globally evaluated as Near Threatened in the IUCN Red List. We assessed potential changes in species distributions between 2000 and 2050 under different fire management and climate change scenarios and described landscape dynamics using a spatially explicit fire-succession model that simulates fire impacts in the landscape and post-fire regeneration (the MEDFIRE model). Dartford Warbler occurrence data were acquired at two spatial scales from (1) the Atlas of European Breeding Birds (EBCC) and (2) the Catalan Breeding Bird Atlas (CBBA). Habitat suitability was modelled using five widely used modelling techniques in an ensemble forecasting framework. Our results indicated considerable habitat suitability losses (between 47% and 57% in baseline scenarios), modulated to a large extent by fire regime changes derived from fire management policies and climate change. This result highlights the need to take the spatial interaction between climate change, fire-mediated landscape dynamics and fire management policies into account when anticipating habitat suitability changes for early-succession bird species. We conclude that fire management programs need to be integrated into conservation plans to effectively preserve sparsely forested and early-succession habitats and their associated species in the face of global environmental change.
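A minimal sketch of ensemble forecasting for habitat suitability: several model types are fitted to the same presence-absence data and their predicted suitabilities averaged. The abstract does not name the five techniques used, so three common stand-ins appear here, with synthetic data:

```python
# Hedged sketch: an unweighted ensemble of suitability predictions from
# three model types. Predictors, response and model choices are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 4))                     # e.g. climate + fire history
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 0.5, 500) > 0).astype(int)

models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(n_estimators=200, random_state=0),
          GradientBoostingClassifier(random_state=0)]
probs = [m.fit(X, y).predict_proba(X)[:, 1] for m in models]
ensemble = np.mean(probs, axis=0)                 # unweighted ensemble mean
print("mean predicted suitability:", round(ensemble.mean(), 2))
```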
Abstract:
This paper deals with a phenomenologically motivated magneto-viscoelastic coupled finite-strain framework for simulating the curing process of polymers under the application of a coupled magneto-mechanical load. Magneto-sensitive polymers are prepared by mixing micron-sized ferromagnetic particles into uncured polymers. Application of a magnetic field during the curing process causes the particles to align and form chain-like structures, lending an overall anisotropy to the material. Polymer curing is a complex viscoelastic process in which a transformation from fluid to solid occurs over time. During curing, volume shrinkage also occurs due to the packing of polymer chains by chemical reactions. Such reactions impart a continuous change of magneto-mechanical properties that can be modelled by an appropriate constitutive relation in which the temporal evolution of the material parameters is considered. To model the shrinkage during curing, a magnetic-induction-dependent approach is proposed, based on a multiplicative decomposition of the deformation gradient into a mechanical part and a magnetic-induction-dependent volume shrinkage part. The proposed model obeys the relevant laws of thermodynamics. Numerical examples, based on a generalised Mooney-Rivlin energy function, are presented to demonstrate the capability of the model in the case of a magneto-viscoelastically coupled load.
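A toy sketch of the multiplicative split described above, F = F_mech · F_shrink, with the shrinkage part taken as purely volumetric; the shrinkage law s(B) is an invented placeholder, not the paper's constitutive relation:

```python
# Hedged sketch: split the deformation gradient F into a mechanical part and
# a magnetic-induction-dependent volumetric shrinkage part. The law s(B)
# below is a placeholder assumption.
import numpy as np

def shrinkage_part(B, s_max=0.05, B_sat=1.0):
    """Volumetric shrinkage F_s = (1 - s(B))^(1/3) * I (placeholder law)."""
    s = s_max * min(B / B_sat, 1.0)       # shrinkage grows with induction B
    return (1.0 - s) ** (1.0 / 3.0) * np.eye(3)

F = np.diag([1.1, 0.95, 0.97])            # total deformation gradient (example)
F_s = shrinkage_part(B=0.6)
F_m = F @ np.linalg.inv(F_s)              # recover the mechanical part

print("det F   =", round(np.linalg.det(F), 4))
print("det F_s =", round(np.linalg.det(F_s), 4))   # the cure-induced volume loss
```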
Abstract:
Many three-dimensional (3-D) structures in rock, which formed during the deformation of the Earth's crust and lithosphere, are controlled by a difference in mechanical strength between rock units and are often the result of a geometrical instability. Such structures are, for example, folds, pinch-and-swell structures (due to necking) or cuspate-lobate structures (mullions). These structures occur from the centimetre to the kilometre scale, and the related deformation processes control the formation of, for example, fold-and-thrust belts and extensional sedimentary basins, or the deformation of the basement-cover interface. The 2-D deformation processes causing these structures are relatively well studied; however, several processes during large-strain 3-D deformation are still incompletely understood. One of these 3-D processes is the lateral propagation of such structures, such as fold and cusp propagation in a direction orthogonal to the shortening direction, or neck propagation in a direction orthogonal to the extension direction. We are especially interested in fold nappes, which are recumbent folds with amplitudes usually exceeding 10 km that have presumably been formed by ductile shearing. They often exhibit a constant sense of shearing and a non-linear increase of shear strain towards their overturned limb. The fold axis of the Morcles fold nappe in western Switzerland plunges to the ENE, whereas the fold axis of the more easterly Doldenhorn nappe plunges to the WSW. These opposite plunge directions characterise the Rawil depression (Wildstrubel depression). The Morcles nappe is mainly the result of layer-parallel contraction and shearing. During the compression the massive limestones were more competent than the surrounding marls and shales, which led to the buckling characteristics of the Morcles nappe, especially in the north-dipping normal limb. The Doldenhorn nappe exhibits only a minor overturned fold limb. There are still no 3-D numerical studies which investigate the fundamental dynamics of the formation of the large-scale 3-D structure including the Morcles and Doldenhorn nappes and the related Rawil depression. We study the 3-D evolution of geometrical instabilities and fold nappe formation with numerical simulations based on the finite element method (FEM). Simulating geometrical instabilities caused by sharp variations of mechanical strength between rock units requires a numerical algorithm that can accurately resolve material interfaces for large differences in material properties (e.g. between limestone and shale) and for large deformations. Our FE algorithm therefore combines a numerical contour-line technique and a deformable Lagrangian mesh with re-meshing. With this combined method it is possible to follow the initial material contours accurately with the FE mesh and to resolve the geometrical instabilities. The algorithm can simulate 3-D deformation for a visco-elastic rheology, with the viscous rheology described by a power-law flow law. The code is used to study 3-D fold nappe formation, the lateral propagation of folding, and the lateral propagation of cusps due to an initial half-graben geometry. The small initial geometrical perturbations for folding and necking are followed exactly by the FE mesh, whereas the initial large perturbation describing a half graben is defined by a contour line intersecting the finite elements. Furthermore, the 3-D algorithm is applied to 3-D viscous necking during slab detachment. The results from various simulations are compared with 2-D results and a 1-D analytical solution.
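A minimal sketch of a power-law flow law of the type mentioned, in which effective viscosity varies with the second invariant of the strain rate as η_eff = η₀ (ε̇_II/ε̇₀)^(1/n − 1); all values are illustrative:

```python
# Hedged sketch: power-law effective viscosity. For stress exponent n > 1,
# viscosity decreases as strain rate increases (strain-rate softening).
# Reference values eta0, e0 and n are invented, not the simulation inputs.
import numpy as np

def effective_viscosity(eII, eta0=1e22, e0=1e-15, n=3.0):
    """Power-law effective viscosity (Pa s) for strain-rate invariant eII (1/s)."""
    return eta0 * (eII / e0) ** (1.0 / n - 1.0)

for eII in (1e-16, 1e-15, 1e-14):
    print(f"eII = {eII:.0e} 1/s -> eta_eff = {effective_viscosity(eII):.2e} Pa s")
```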