942 results for Data Driven Modeling
Abstract:
A critical issue in brain energy metabolism is whether lactate produced within the brain by astrocytes is taken up and metabolized by neurons upon activation. Although there is ample evidence that neurons can efficiently use lactate as an energy substrate, at least in vitro, few experimental data exist to indicate that this is indeed the case in vivo. To address this question, we used a modeling approach to determine which mechanisms are necessary to explain typical brain lactate kinetics observed upon activation. On the basis of a previously validated model that takes into account the compartmentalization of energy metabolism, we developed a mathematical model of brain lactate kinetics, which was applied to published data describing the changes in extracellular lactate levels upon activation. Results show that the initial dip in the extracellular lactate concentration observed at the onset of stimulation can only be satisfactorily explained by a rapid uptake within an intraparenchymal cellular compartment. In contrast, neither blood flow increase nor extracellular pH variation can be a major cause of the initial dip, whereas tissue lactate diffusion only tends to reduce its amplitude. The kinetic properties of monocarboxylate transporter isoforms strongly suggest that neurons represent the most likely compartment for activation-induced lactate uptake and that neuronal lactate utilization early after activation onset is responsible for the initial dip in brain lactate levels observed in both animals and humans.
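A minimal sketch of the compartmental idea described above, assuming a simple two-pool (extracellular/neuronal) structure; all rate constants below are hypothetical placeholders, not values from the paper. Stimulation transiently boosts neuronal uptake faster than astrocytic release, which is enough to reproduce an initial dip.

```python
# Minimal two-pool model (extracellular, neuronal) of activation-driven
# lactate kinetics. All rate constants are hypothetical placeholders chosen
# so that stimulated neuronal uptake outpaces astrocytic release at onset.
import numpy as np
from scipy.integrate import solve_ivp

def lactate_kinetics(t, y, stim_on=5.0, stim_off=25.0):
    lac_ecs, lac_neu = y                          # extracellular and neuronal pools
    stim = 1.0 if stim_on <= t <= stim_off else 0.0
    j_release = 0.20 + 0.10 * stim                # astrocytic release into the ECS
    j_uptake = (0.15 + 0.60 * stim) * lac_ecs     # MCT-mediated neuronal uptake
    j_washout = 0.05 * lac_ecs                    # loss to the capillary bed
    j_metab = 0.30 * lac_neu                      # neuronal lactate consumption
    return [j_release - j_uptake - j_washout, j_uptake - j_metab]

sol = solve_ivp(lactate_kinetics, (0.0, 60.0), [1.0, 0.5],
                dense_output=True, max_step=0.5)
t = np.linspace(0.0, 60.0, 600)
print(f"minimum ECS lactate: {sol.sol(t)[0].min():.2f}")  # the initial dip
```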
Abstract:
In order to contribute to the debate about southern glacial refugia used by temperate species and more northern refugia used by boreal or cold-temperate species, we examined the phylogeography of a widespread snake species (Vipera berus) inhabiting Europe up to the Arctic Circle. The analysis of mitochondrial DNA (mtDNA) sequence variation in 1043 bp of the cytochrome b gene and in 918 bp of the noncoding control region was performed with phylogenetic approaches. Our results suggest that both the duplicated control region and cytochrome b evolve at a similar rate in this species. Phylogenetic analysis showed that V. berus is divided into three major mitochondrial lineages, probably resulting from three refugial areas: an Italian, a Balkan, and a Northern one (from France to Russia), the last located in Eastern Europe near the Carpathian Mountains. In addition, the Northern clade presents an important substructure, suggesting two sequential colonization events in Europe. First, the continent was colonized from the three main refugial areas mentioned above during the Lower-Mid Pleistocene. Second, recolonization of most of Europe most likely originated from several refugia located outside the Mediterranean peninsulas (the Carpathian region, east of the Carpathians, France, and possibly Hungary) during the Mid-Late Pleistocene, while populations within the Italian and Balkan Peninsulas fluctuated only slightly in distribution range, with larger lowland populations during glacial times and refugial mountain populations during interglacials, as at present. The phylogeographical structure revealed in our study suggests complex recolonization dynamics of the European continent by V. berus, characterized by latitudinal as well as altitudinal range shifts, driven by both climatic changes and competition with related species.
Abstract:
This work extends previously developed research on the use of local model predictive control in differential-drive mobile robots. Experimental results are presented as a way to improve the methodology by considering aspects such as trajectory accuracy and time performance; in this sense, the cost function and the prediction horizon are important design choices. The aim of the present work is to test the control method by measuring trajectory-tracking accuracy and time performance. Moreover, strategies for integration with the perception system and path planning are briefly introduced: monocular image data can be used to plan safe trajectories using goal-attraction potential fields.
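A hedged sketch of the kind of receding-horizon tracking cost such a controller minimizes; the weights, horizon length, kinematic model, and brute-force search below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a receding-horizon tracking cost for a differential-drive robot:
# penalize predicted position error over a horizon of N steps plus control
# effort. Weights, horizon, and the brute-force search are assumptions.
import numpy as np

def predict(state, v, w, dt=0.1):
    """Unicycle kinematics; state = (x, y, theta)."""
    x, y, th = state
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

def horizon_cost(state, controls, reference, Q=1.0, R=0.05):
    """controls: [(v, w), ...]; reference: target (x, y) for each step."""
    cost = 0.0
    for (v, w), ref in zip(controls, reference):
        state = predict(state, v, w)
        cost += Q * np.sum((state[:2] - ref) ** 2) + R * (v**2 + w**2)
    return cost

# Brute-force stand-in for the optimizer over constant-control candidates.
N = 5
reference = [np.array([0.1 * (k + 1), 0.0]) for k in range(N)]
candidates = [[(v, w)] * N for v in (0.5, 1.0) for w in (-0.2, 0.0, 0.2)]
best = min(candidates, key=lambda u: horizon_cost(np.zeros(3), u, reference))
print("best first control (v, w):", best[0])
```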
Abstract:
Self-potential (SP) data are of interest to vadose zone hydrology because of their direct sensitivity to water flow and ionic transport. There is unfortunately little consensus in the literature about how best to model SP data under partially saturated conditions, and different approaches (often supported by a single laboratory data set) have been proposed. We argue that this lack of agreement can largely be traced to electrode effects that have not been properly taken into account. We considered a series of drainage and imbibition experiments in which previously proposed approaches to remove electrode effects were found unlikely to provide adequate corrections. Instead, we explicitly modeled the electrode effects together with classical SP contributions using a flow and transport model. The simulated data agreed overall with the observed SP signals and made it possible to decompose the different signal contributions and analyze them separately. After reviewing other published experimental data, we suggest that most of them include electrode effects that have not been properly accounted for. Our results suggest that previously presented SP theory works well when considering the modeling uncertainties presently associated with electrode effects. Additional work is warranted not only to develop suitable electrodes for laboratory experiments, but also to ensure that the electrode effects that appear inevitable in longer-term experiments are predictable, so that they can be incorporated into the modeling framework.
Abstract:
BACKGROUND: Metals are known endocrine disruptors and have been linked to cardiometabolic diseases via multiple potential mechanisms, yet few human studies have both the exposure variability and the biologically relevant phenotype data available. We sought to examine the distribution of metals exposure and potential associations with cardiometabolic risk factors in the "Modeling the Epidemiologic Transition Study" (METS), a prospective cohort study designed to assess energy balance and change in body weight, diabetes, and cardiovascular disease risk in five countries at different stages of social and economic development. METHODS: Young adults (25-45 years) of African descent were enrolled (N = 500 from each site) in Ghana, South Africa, Seychelles, Jamaica, and the U.S.A. We randomly selected 150 baseline blood samples (N = 30 from each site) to determine concentrations of selected metals (arsenic, cadmium, lead, mercury) in a subset of participants and to examine associations with cardiometabolic risk factors. RESULTS: Median (interquartile range) metal concentrations (μg/L) were: arsenic 8.5 (7.7); cadmium 0.01 (0.8); lead 16.6 (16.1); and mercury 1.5 (5.0). There were significant differences in metals concentrations by site location, paid employment status, education, marital status, smoking, alcohol use, and fish intake. After adjusting for these covariates plus age and sex, arsenic (OR 4.1, 95% C.I. 1.2, 14.6) and lead (OR 4.0, 95% C.I. 1.6, 9.6) above the median values were significantly associated with elevated fasting glucose. These associations increased when models were further adjusted for percent body fat: arsenic (OR 5.6, 95% C.I. 1.5, 21.2) and lead (OR 5.0, 95% C.I. 2.0, 12.7). Cadmium and mercury were also associated with increased odds of elevated fasting glucose, but the associations were not statistically significant. Arsenic was significantly associated with increased odds of low HDL cholesterol both with (OR 8.0, 95% C.I. 1.8, 35.0) and without (OR 5.9, 95% C.I. 1.5, 23.1) adjustment for percent body fat. CONCLUSIONS: While not consistent across all cardiometabolic disease markers, these results suggest potentially important associations between metals exposure and cardiometabolic risk. Future studies will examine these associations in the larger cohort over time.
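As a hedged illustration of the adjusted odds ratios reported above, the sketch below fits a logistic regression of a binary outcome (elevated fasting glucose) on a binary above-median exposure plus covariates; the data are simulated placeholders and the covariate set is deliberately reduced, so this is not the METS analysis itself.

```python
# Sketch of an adjusted odds-ratio estimate: logistic regression of elevated
# fasting glucose on above-median lead, adjusting for age and sex. All data
# below are simulated placeholders, not METS data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 150
lead_high = rng.integers(0, 2, n)                   # above-median lead (0/1)
age = rng.uniform(25, 45, n)
sex = rng.integers(0, 2, n)
logit_p = -1.0 + 1.4 * lead_high + 0.01 * (age - 35) + 0.1 * sex
glucose_high = rng.random(n) < 1 / (1 + np.exp(-logit_p))

X = sm.add_constant(np.column_stack([lead_high, age, sex]))
fit = sm.Logit(glucose_high.astype(float), X).fit(disp=0)
print(np.exp(fit.params[1]))                        # adjusted OR for lead
```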
Abstract:
This report documents the activities performed to evaluate, develop, and enhance the Iowa Department of Transportation (DOT) pavement condition information as part of its pavement management system operation. The study covers all of the Iowa DOT's interstate and primary National Highway System (NHS) and non-NHS routes. A new pavement condition rating system that provides a consistent, unified approach to rating pavements in Iowa is proposed. The proposed 100-point system is based on five individual indices derived from specific distress data and pavement properties, and an overall pavement condition index, PCI-2, that combines the individual indices using weighting factors. The indices cover cracking, ride, rutting, faulting, and friction. The Cracking Index is formed by combining cracking data (transverse, longitudinal, wheel-path, and alligator cracking indices). The ride, rutting, and faulting indices utilize the International Roughness Index (IRI), rut depth, and fault height, respectively.
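A minimal sketch of how such a composite index can be computed as a weighted combination of the five individual indices; the weights and index values below are hypothetical placeholders, not the Iowa DOT's figures.

```python
# Sketch of an overall 0-100 condition index (like the proposed PCI-2) that
# combines individual indices with weighting factors. The weights below are
# hypothetical placeholders, not the Iowa DOT values.
indices = {"cracking": 72.0, "ride": 85.0, "rutting": 90.0,
           "faulting": 95.0, "friction": 80.0}
weights = {"cracking": 0.40, "ride": 0.25, "rutting": 0.15,
           "faulting": 0.10, "friction": 0.10}   # must sum to 1.0

pci2 = sum(weights[k] * indices[k] for k in indices)
print(f"PCI-2 = {pci2:.1f}")   # weighted average on the same 0-100 scale
```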
Abstract:
Hydrologic analysis is a critical part of transportation design because it helps ensure that hydraulic structures can accommodate the flow regimes they are likely to see. This analysis is currently conducted using computer simulations of water flow patterns, and continuing developments in elevation survey techniques yield ever higher resolution surveys. Current survey techniques now resolve many natural and anthropogenic features that were previously impractical to map and thus require new methods for dealing with depressions and flow discontinuities. A method for depressional analysis is proposed that exploits the fact that most anthropogenically constructed embankments are more symmetrical and more steeply sloped than natural depressions. An enforcement method is then applied to drain those depressions identified as ones that should be drained. This procedure was evaluated on a small watershed in central Iowa, Walnut Creek of the South Skunk River (HUC12 # 070801050901), and was found to accurately identify 88 of 92 drained depressions and place enforcements within two pixels, although the method often tries to drain prairie pothole depressions that are bisected by anthropogenic features.
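One way the symmetry-and-slope heuristic could be screened on a barrier cross-section, as a sketch only; the thresholds, the per-cell slope measure, and the synthetic profile are assumptions, not the paper's algorithm.

```python
# Screening sketch for the symmetry-and-slope heuristic: flag a depression
# barrier as likely anthropogenic (e.g., a road embankment) when its cross-
# section is steep and nearly mirror-symmetric about the crest. Thresholds,
# the per-cell slope measure, and the profile are illustrative assumptions.
import numpy as np

def likely_embankment(profile, slope_thresh=0.08, sym_thresh=0.8):
    """profile: elevation samples (one per raster cell) across the barrier."""
    profile = np.asarray(profile, dtype=float)
    crest = int(np.argmax(profile))
    left = profile[:crest + 1][::-1]               # walk downhill from the crest
    right = profile[crest:]
    m = min(len(left), len(right))
    if m < 3:
        return False
    symmetry = np.corrcoef(left[:m], right[:m])[0, 1]
    max_slope = np.max(np.abs(np.diff(profile)))   # steepest rise per cell
    return bool(max_slope > slope_thresh and symmetry > sym_thresh)

print(likely_embankment([310.0, 310.4, 311.2, 312.1, 311.1, 310.5, 310.1]))
```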
Abstract:
In work-zone configurations where lane drops are present, merging of traffic at the taper presents an operational concern. In addition, as flow through the work zone is reduced, the relative traffic safety of the work zone is also reduced. Improving work-zone flow through merge points depends on the behavior of individual drivers. By better understanding driver behavior, traffic control plans, work-zone policies, and countermeasures can be better targeted to reinforce desirable lane-closure merging behavior, leading to both improved safety and work-zone capacity. The researchers collected data for two work-zone scenarios that included lane drops, one on an Interstate and the other on an urban arterial roadway. The researchers then modeled and calibrated these scenarios in VISSIM using real-world speeds, travel times, queue lengths, and merging behaviors (percentage of vehicles merging upstream and near the merge point). Once the models were built and calibrated, the researchers modeled various countermeasure strategies in the two work zones. The models were then used to test and evaluate how various merging strategies affect safety and operations at the merge areas in these two work zones.
Abstract:
Surface geological mapping, laboratory measurements of rock properties, and seismic reflection data are integrated through three-dimensional seismic modeling to determine the likely cause of upper crustal reflections and to elucidate the deep structure of the Penninic Alps in eastern Switzerland. Results indicate that the principal upper crustal reflections recorded at the south end of Swiss seismic line NFP20-EAST can be explained by the subsurface geometry of stacked basement nappes. In addition, modeling results provide improvements to structural maps based solely on surface trends and suggest the presence of previously unrecognized rock units in the subsurface. Construction of the initial model is based upon extrapolation of plunging surface structures; velocities and densities are established by laboratory measurements of corresponding rock units. Iterative modification produces a best-fit model that refines the definition of the subsurface geometry of major structures. We conclude that most reflections from the upper 20 km can be ascribed to the presence of sedimentary cover rocks (especially carbonates) and ophiolites juxtaposed against crystalline basement nappes. Thus, in this area, reflections appear to be principally due to first-order lithologic contrasts. This study also demonstrates not only the importance of three-dimensional effects (sideswipe) in interpreting seismic data, but also that these effects can be treated quantitatively through three-dimensional modeling.
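To make the "first-order lithologic contrasts" point concrete, the sketch below evaluates the standard normal-incidence reflection coefficient for two contrasts of the kind invoked above; the density and velocity values are generic textbook figures, not the paper's laboratory measurements.

```python
# Illustrative normal-incidence reflection coefficients for lithologic
# contrasts (values are generic textbook figures, not the paper's data).
def reflection_coefficient(rho1, v1, rho2, v2):
    """R = (Z2 - Z1) / (Z2 + Z1), with acoustic impedance Z = rho * v."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# Carbonate cover (~2700 kg/m^3, ~6000 m/s) over crystalline basement
# (~2750 kg/m^3, ~6200 m/s) gives a weak reflection; an ophiolite contrast
# (~3000 kg/m^3, ~6700 m/s) is markedly stronger.
print(reflection_coefficient(2700, 6000, 2750, 6200))
print(reflection_coefficient(2700, 6000, 3000, 6700))
```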
Abstract:
The velocity of a liquid slug falling in a capillary tube is lower than predicted for Poiseuille flow due to the presence of menisci, whose shapes are determined by the complex interplay of capillary, viscous, and gravitational forces. Because of the menisci, a capillary pressure proportional to surface curvature acts on the slug, and streamlines are bent close to the interface, resulting in enhanced viscous dissipation at the wedges. To determine the origin of the drag-force increase relative to Poiseuille flow, we compute the force resultant acting on the slug by integrating the Navier-Stokes equations over the liquid volume. Invoking relationships from differential geometry, we demonstrate that the additional drag is due to viscous forces only and that no capillary drag of hydrodynamic origin exists (i.e., due to hydrodynamic deformation of the interface). Requiring that the force resultant be zero, we derive scaling laws for the steady velocity in the limit of small capillary numbers by estimating the leading-order viscous dissipation in the different regions of the slug (i.e., the unperturbed Poiseuille-like bulk, the static menisci close to the tube axis, and the dynamic regions close to the contact lines). Considering both partial and complete wetting, we find that the relationship between dimensionless velocity and weight is, in general, nonlinear. Whereas the relationship obtained for complete-wetting conditions agrees with the experimental data of Bico and Quere [J. Bico and D. Quere, J. Colloid Interface Sci. 243, 262 (2001)], the scaling law under partial-wetting conditions is validated by numerical simulations performed with the Volume of Fluid method. The simulated steady velocities agree with the behavior predicted by the theoretical scaling laws in the presence and absence of static contact angle hysteresis. The numerical simulations suggest that wedge-flow dissipation alone cannot account for the entire additional drag and that the non-Poiseuille dissipation in the static menisci (not considered in previous studies) must be taken into account for large contact angles.
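The zero-force-resultant condition can be written schematically as below; the notation (slug length L, tube radius R, steady velocity U) is assumed for illustration, and the meniscus term is left generic because the paper derives its form separately for each wetting regime.

```latex
\[
  \underbrace{\rho g\, \pi R^{2} L}_{\text{slug weight}}
  \;=\;
  \underbrace{8 \pi \mu L U}_{\text{Poiseuille-like bulk drag}}
  \;+\;
  \underbrace{F_{\mathrm{men}}(\mathrm{Ca})}_{\text{meniscus/contact-line drag}},
  \qquad
  \mathrm{Ca} = \frac{\mu U}{\gamma}.
\]
```

Since the meniscus contribution is nonlinear in Ca at small capillary number, the resulting dimensionless velocity-weight relationship is in general nonlinear, consistent with the abstract.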
Abstract:
For patients with chronic lung diseases, such as chronic obstructive pulmonary disease (COPD), exacerbations are life-threatening events causing acute respiratory distress that can even lead to hospitalization and death. Although a great deal of effort has been put into research on exacerbations and potential treatment options, the exact underlying mechanisms are yet to be deciphered and no therapy that effectively targets the excessive inflammation is available. In this study, we report that interleukin-1β (IL-1β) and interleukin-17A (IL-17A) are key mediators of neutrophilic inflammation in influenza-induced exacerbations of chronic lung inflammation. Using a mouse model of disease, our data show a role for IL-1β in mediating lung dysfunction and in driving neutrophilic inflammation throughout the course of viral infection. We further report a role for IL-17A as a mediator of IL-1β-induced neutrophilia at early time points during influenza-induced exacerbations. Blocking of IL-17A or IL-1 resulted in a significant abrogation of neutrophil recruitment to the airways in the initial phase of infection or at the peak of viral replication, respectively. Therefore, IL-17A and IL-1β are potential targets for therapeutic treatment of viral exacerbations of chronic lung inflammation.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies

The thesis is devoted to the analysis, modeling, and visualization of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? Most machine learning algorithms are universal, adaptive, nonlinear, robust, and efficient modeling tools. They can solve classification, regression, and probability density modeling problems in high-dimensional spaces composed of geographical coordinates and additional relevant spatially referenced variables ("geo-features"). They are well suited for implementation as predictive engines in decision support systems, for environmental data mining tasks ranging from pattern recognition to modeling, prediction, and automatic mapping. Their efficiency is competitive with that of geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest to the geo- and environmental sciences are presented in detail, from theoretical description to software implementation: the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organizing (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF), and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression, and density estimation.

Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both with the traditional geostatistical approach, experimental variography, and with machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps detect spatial patterns describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently active topic: automatic mapping of geospatial data. The general regression neural network is proposed as an efficient model for this task. Its performance is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools, and how-to examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland, and Switzerland, and in fundamental and applied research projects. The case studies cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil, and water pollution by radionuclides and heavy metals; classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualization were also developed, with care taken to make the software user-friendly and easy to use.
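A minimal sketch of the GRNN idea mentioned above, implemented as Nadaraya-Watson kernel regression with a Gaussian kernel; the bandwidth, the synthetic coordinates, and the test function are illustrative assumptions, not material from the thesis.

```python
# Minimal GRNN-style predictor: a Gaussian-kernel-weighted average of the
# training targets, whose single free parameter is the bandwidth sigma.
# The data below are synthetic placeholders.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Prediction = kernel-weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma**2))            # kernel weights per query point
    return (w @ y_train) / w.sum(axis=1)          # normalized weighted mean

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))             # e.g. easting/northing
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Xq = np.array([[5.0, 5.0]])
print(grnn_predict(X, y, Xq, sigma=0.8))          # interpolated value at query
```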
Abstract:
Connectivity among populations plays a crucial role in maintaining genetic variation at a local scale, especially in small populations affected strongly by genetic drift. The negative consequences of population disconnection on allelic richness and gene diversity (heterozygosity) are well recognized and empirically established. It is not well recognized, however, that a sudden drop in local effective population size induced by such disconnection produces a temporary disequilibrium in allelic frequency distributions that is akin to the genetic signature of a demographic bottleneck. To document this effect, we used individual-based simulations and empirical data on allelic richness and gene diversity in six pairs of isolated versus well-connected (core) populations of European tree frogs. In our simulations, population disconnection depressed allelic richness more than heterozygosity and thus resulted in a temporary excess in gene diversity relative to mutation drift equilibrium (i.e., signature of a genetic bottleneck). We observed a similar excess in gene diversity in isolated populations of tree frogs. Our results show that population disconnection can create a genetic bottleneck in the absence of demographic collapse.
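An individual-based sketch of the mechanism described above, assuming a neutral Wright-Fisher model with illustrative population sizes, allele counts, and generation numbers (not the paper's simulation code): after disconnection, allelic richness drops faster than gene diversity.

```python
# Individual-based sketch: after a sudden drop in effective population size,
# allelic richness is lost faster than gene diversity (expected
# heterozygosity), mimicking a bottleneck signature. All sizes illustrative.
import numpy as np

rng = np.random.default_rng(1)

def drift(gene_copies, n_gen, N):
    """Neutral Wright-Fisher resampling of 2N gene copies per generation."""
    for _ in range(n_gen):
        gene_copies = rng.choice(gene_copies, size=2 * N, replace=True)
    return gene_copies

def summarize(gene_copies):
    freqs = np.bincount(gene_copies) / len(gene_copies)
    richness = int((freqs > 0).sum())          # number of alleles still present
    gene_div = 1.0 - float(np.sum(freqs**2))   # expected heterozygosity
    return richness, gene_div

connected = rng.integers(0, 20, size=2 * 500)  # 20 alleles, N = 500
isolated = drift(connected, n_gen=30, N=25)    # disconnection: N falls to 25
print(summarize(connected), summarize(isolated))
```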
Abstract:
US Geological Survey (USGS)-based elevation data are the most commonly used data source for highway hydraulic analysis; however, given their vertical accuracy, USGS data may be too "coarse" to adequately describe surface profiles of watershed areas or drainage patterns. Additionally, hydraulic design requires delineation of much smaller drainage areas (watersheds) than other hydrologic applications, such as environmental, ecological, and water resource management. This study investigated whether higher resolution LIDAR-based surface models would provide better delineation of watersheds and drainage patterns than surface models created from standard USGS-based elevation data. Differences in runoff values were the metric used to compare the data sets. The two data sets were compared for a pilot study area along the Iowa 1 corridor between Iowa City and Mount Vernon. Given the limited breadth of the analysis corridor, areas of particular emphasis were the location of drainage area boundaries and flow patterns parallel to and intersecting the road cross section. Traditional highway hydrology does not appear to be significantly impacted, or benefited, by the increased terrain detail that LIDAR provided for the study area. In fact, hydrologic outputs, such as streams and watersheds, may be too sensitive to the increased horizontal resolution and/or errors in the data set. However, a true comparison of LIDAR and USGS-based data sets of equal size and encompassing entire drainage areas could not be performed in this study. Differences may also arise in areas with much steeper slopes or significant changes in terrain. LIDAR may provide valuable detail in areas of modified terrain, such as roads. Better representation of channel and terrain detail in the vicinity of the roadway may be useful in modeling problem drainage areas and evaluating structural surety during and after significant storm events. Furthermore, LIDAR may be used to verify the intended/expected drainage patterns at newly constructed highways. LIDAR will likely provide the greatest benefit for highway projects in flood plains and areas with relatively flat terrain, where slight changes in terrain may have a significant impact on drainage patterns.
Abstract:
Based on previous work (Hemelrijk 1998; Puga-González, Hildenbrandt & Hemelrijk 2009), we have developed an agent-based model and software, called A-KinGDom, which allows us to simulate the emergence of social structure in a group of non-human primates. The model includes dominance and affiliative interactions and incorporates two main innovations (preliminary dominance interactions and a kinship factor), which allow us to define four different attack and affiliative strategies. In accordance with these strategies, we compared the data obtained under four simulation conditions with the results obtained in a previous study (Dolado & Beltran 2012) involving empirical observations of a captive group of mangabeys (Cercocebus torquatus).
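A toy sketch in the spirit of such DomWorld-style dominance models (this is not A-KinGDom's code; the update rule, step size, and kin-bias probability are illustrative assumptions).

```python
# Toy dominance-interaction loop: winners gain and losers lose dominance in
# proportion to how unexpected the outcome was; a kinship factor makes agents
# less likely to attack relatives. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
N_AGENTS, N_STEPS, STEP = 8, 2000, 0.1
dom = np.ones(N_AGENTS)                                  # initial dominance values
kin = rng.integers(0, 2, size=(N_AGENTS, N_AGENTS))      # 1 = related (hypothetical)

for _ in range(N_STEPS):
    i, j = rng.choice(N_AGENTS, size=2, replace=False)
    if kin[i, j] and rng.random() < 0.5:
        continue                                         # kin bias: attack kin less often
    p_win = dom[i] / (dom[i] + dom[j])                   # relative dominance sets the odds
    won = rng.random() < p_win
    delta = STEP * ((1.0 if won else 0.0) - p_win)       # self-reinforcing update
    dom[i] += delta
    dom[j] -= delta
    np.clip(dom, 0.05, None, out=dom)                    # keep scores positive

print(np.sort(dom))                                      # a steep hierarchy emerges
```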