942 results for Climatic data simulation
Abstract:
A previous study sponsored by the Smart Work Zone Deployment Initiative, “Feasibility of Visualization and Simulation Applications to Improve Work Zone Safety and Mobility,” demonstrated the feasibility of combining readily available, inexpensive software programs, such as SketchUp and Google Earth, with standard two-dimensional civil engineering design programs, such as MicroStation, to create animations of construction work zones. The animations reflect changes in work zone configurations as the project progresses, representing an opportunity to visually present complex information to drivers, construction workers, agency personnel, and the general public. The purpose of this study is to continue the work from the previous study to determine the added value and resource demands created by including more complex data, specifically traffic volume, movement, and vehicle type. This report describes the changes that were made to the simulation, including incorporating additional data and converting the simulation from a desktop application to a web application.
Abstract:
The proportion of the population living in or around cities is higher than ever. Urban sprawl and car dependence have taken over the pedestrian-friendly compact city. Environmental problems such as air pollution, land waste and noise, along with health problems, are the result of this still-continuing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to gain a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming the geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed. A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scale laws are used to characterise urban clusters. In a final section, population evolution is modelled using an approach close to the well-established gravity model. The work covers a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
Abstract:
Mathematical models have great potential to support land use planning, with the goal of improving water and land quality. Before using a model, however, it must be demonstrated that the model can correctly simulate the hydrological and erosive processes of a given site. The SWAT model (Soil and Water Assessment Tool) was developed in the United States to evaluate the effects of conservation agriculture on hydrological processes and water quality at the watershed scale. This model was initially proposed for use without calibration, which would eliminate the need for measured hydro-sedimentological data. In this study, the SWAT model was evaluated in a small rural watershed (1.19 km²) located on the basalt slopes of the state of Rio Grande do Sul in southern Brazil, where farmers have been using cover crops associated with minimum tillage to control soil erosion. Values simulated by the model were compared with measured hydro-sedimentological data. Results for surface and total runoff on a daily basis were considered unsatisfactory (Nash-Sutcliffe efficiency coefficient, NSE < 0.5). However, simulation results on monthly and annual scales were significantly better. With regard to the erosion process, the simulated sediment yields for all years of the study were unsatisfactory in comparison with the observed values on a daily and monthly basis (NSE < -6), and the model overestimated the annual sediment yield by more than 100%.
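The Nash-Sutcliffe efficiency used as the evaluation criterion above is straightforward to compute from paired observed and simulated series. A minimal sketch, using illustrative runoff values rather than data from this watershed:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    NSE = 1 is a perfect fit; NSE < 0.5 is commonly rated unsatisfactory."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Illustrative daily-runoff values (mm), not data from the watershed study.
obs = np.array([0.0, 1.2, 5.4, 3.1, 0.8, 0.2, 0.0])
sim = np.array([0.1, 0.9, 3.0, 4.2, 1.5, 0.4, 0.1])
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```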
Abstract:
Toxicokinetic modeling is a useful tool to describe or predict the behavior of a chemical agent in the human or animal organism. A general model based on four compartments was developed in a previous study in order to quantify the effect of human variability on a wide range of biological exposure indicators. The aim of this study was to adapt this existing general toxicokinetic model to three organic solvents, namely methyl ethyl ketone, 1-methoxy-2-propanol and 1,1,1-trichloroethane, and to take sex differences into account. In a previous human volunteer study, we assessed the impact of sex on different biomarkers of exposure corresponding to the three organic solvents mentioned above. Results from that study suggested that not only physiological differences between men and women but also differences due to sex hormone levels could influence the toxicokinetics of these solvents. In fact, the use of hormonal contraceptives had an effect on the urinary levels of several biomarkers, suggesting that exogenous sex hormones could influence CYP2E1 enzyme activity. These experimental data were used to calibrate the toxicokinetic models developed in this study. Our results showed that it was possible to use an existing general toxicokinetic model for other compounds. Indeed, most of the simulation results showed good agreement with the experimental data obtained for the studied solvents, with the percentage of model predictions lying within the 95% confidence interval varying from 44.4% to 90%. The results pointed out that, for the same exposure conditions, men and women can show important differences in the urinary levels of biological indicators of exposure. Moreover, when running the models under simulated industrial working conditions, these differences could be even more pronounced. In conclusion, a general and simple toxicokinetic model, adapted for three well-known organic solvents, allowed us to show that metabolic parameters can have an important impact on the urinary levels of the corresponding biomarkers. These observations give evidence of an interindividual variability, an aspect that should have its place in the approaches used for setting occupational exposure limits.
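Compartmental toxicokinetic models of this kind reduce to a small system of ordinary differential equations. The sketch below is a deliberately simplified, hypothetical first-order chain (uptake during an 8-h shift, metabolism, urinary excretion of a metabolite), not the study's four-compartment model or its calibrated parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical first-order toxicokinetic sketch: uptake during an 8-h exposure
# window, metabolic conversion, and appearance of a urinary biomarker.
k_upt = 1.0   # 1/h, hypothetical uptake rate constant during exposure
k_met = 0.3   # 1/h, hypothetical metabolic (e.g. CYP2E1-mediated) rate constant
k_exc = 0.15  # 1/h, hypothetical urinary excretion rate constant of the metabolite

def rhs(t, y):
    parent, metabolite, urine = y
    intake = k_upt if t <= 8.0 else 0.0        # exposure only during the shift
    d_parent = intake - k_met * parent
    d_metabolite = k_met * parent - k_exc * metabolite
    d_urine = k_exc * metabolite               # cumulative amount excreted
    return [d_parent, d_metabolite, d_urine]

sol = solve_ivp(rhs, (0.0, 24.0), [0.0, 0.0, 0.0], dense_output=True)
for ti, (p, m, u) in zip(np.linspace(0, 24, 7), sol.sol(np.linspace(0, 24, 7)).T):
    print(f"t={ti:5.1f} h  parent={p:.3f}  metabolite={m:.3f}  urine={u:.3f}")
```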
Abstract:
A Monte Carlo procedure to simulate the penetration and energy loss of low-energy electron beams through solids is presented. Elastic collisions are described by using the method of partial waves for the screened Coulomb field of the nucleus. The atomic charge density is approximated by an analytical expression with parameters determined from the Dirac-Hartree-Fock-Slater self-consistent density obtained under Wigner-Seitz boundary conditions in order to account for solid-state effects; exchange effects are also accounted for by an energy-dependent local correction. Elastic differential cross sections are then easily computed by combining the WKB and Born approximations to evaluate the phase shifts. Inelastic collisions are treated on the basis of a generalized oscillator strength model which gives inelastic mean free paths and stopping powers in good agreement with experimental data. This scattering model is accurate in the energy range from a few hundred eV up to about 50 keV. The reliability of the simulation method is analyzed by comparing simulation results and experimental data from backscattering and transmission measurements.
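The transport scheme itself, stripped of the physics models above, amounts to sampling exponential free paths and alternating elastic deflections with inelastic energy losses until the electron is transmitted, backscattered or absorbed. The sketch below illustrates only that bookkeeping; the mean free paths, the screened-Rutherford-like angular sampling and the fixed energy loss are placeholders, not the partial-wave and generalized-oscillator-strength models of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_electron(E0_eV=5000.0, thickness_nm=50.0,
                      mfp_el_nm=5.0, mfp_inel_nm=10.0,
                      loss_eV=30.0, screening=0.05):
    """Follow one electron until it is transmitted, backscattered or absorbed."""
    E, z, cos_t = E0_eV, 0.0, 1.0
    while E > 50.0:
        mfp_total = 1.0 / (1.0 / mfp_el_nm + 1.0 / mfp_inel_nm)
        z += rng.exponential(mfp_total) * cos_t            # exponential free path
        if z < 0.0:
            return "backscattered"
        if z > thickness_nm:
            return "transmitted"
        if rng.random() < mfp_inel_nm / (mfp_el_nm + mfp_inel_nm):
            # elastic event: screened-Rutherford-like polar deflection (placeholder)
            r = rng.random()
            cos_d = 1.0 - 2.0 * screening * r / (1.0 + screening - r)
            sin_prod = np.sqrt(max(0.0, 1 - cos_t**2) * max(0.0, 1 - cos_d**2))
            cos_t = cos_t * cos_d + sin_prod * np.cos(2 * np.pi * rng.random())
        else:
            E -= loss_eV                                   # inelastic event: fixed loss
    return "absorbed"

fates = [simulate_electron() for _ in range(2000)]
for fate in ("transmitted", "backscattered", "absorbed"):
    print(fate, fates.count(fate) / len(fates))
```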
Abstract:
The likelihood of significant exposure to drugs in infants through breast milk is poorly defined, given the difficulties of conducting pharmacokinetics (PK) studies. Using fluoxetine (FX) as an example, we conducted a proof-of-principle study applying population PK (popPK) modeling and simulation to estimate drug exposure in infants through breast milk. We simulated data for 1,000 mother-infant pairs, assuming conservatively that the FX clearance in an infant is 20% of the allometrically adjusted value in adults. The model-generated estimate of the milk-to-plasma ratio for FX (mean: 0.59) was consistent with those reported in other studies. The median infant-to-mother ratio of FX steady-state plasma concentrations predicted by the simulation was 8.5%. Although the disposition of the active metabolite, norfluoxetine, could not be modeled, popPK-informed simulation may be valid for other drugs, particularly those without active metabolites, thereby providing a practical alternative to conventional PK studies for exposure risk assessment in this population.
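The simulation logic described above can be sketched as a simple steady-state Monte Carlo over mother-infant pairs. The 20% clearance assumption comes from the abstract; the dose, body weights, milk intake, milk-to-plasma ratio and variability used below are placeholders, not the study's fluoxetine popPK model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000  # simulated mother-infant pairs

# Placeholder parameter values for a hedged steady-state sketch.
dose_mg_per_day = 20.0
cl_mother = rng.lognormal(mean=np.log(40.0), sigma=0.3, size=n)   # L/h, hypothetical
mp_ratio  = rng.lognormal(mean=np.log(0.6), sigma=0.2, size=n)    # milk-to-plasma ratio
wt_mother, wt_infant = 70.0, 5.0                                  # kg
milk_intake_L_per_day = 0.15 * wt_infant

c_mother = dose_mg_per_day / (cl_mother * 24.0)                   # mg/L at steady state
infant_dose = c_mother * mp_ratio * milk_intake_L_per_day         # mg/day via milk
cl_infant = 0.2 * cl_mother * (wt_infant / wt_mother) ** 0.75     # 20% of allometric value
c_infant = infant_dose / (cl_infant * 24.0)

ratio_pct = 100.0 * c_infant / c_mother
print(f"median infant-to-mother concentration ratio: {np.median(ratio_pct):.1f}%")
```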
Abstract:
Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. Our comparative analysis involves using methods including generalized least squares, spatial filters, wavelet revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures and using statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but its Type I error rates were poorly controlled, so it did not show the improvements in performance under model selection seen with the methods above. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
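The core of such a simulation comparison can be reproduced in a few lines: generate synthetic data with spatially autocorrelated errors, then compare ordinary and generalized least squares estimates of a known regression coefficient. A minimal sketch, assuming an exponential spatial covariance and handing the covariance matrix to GLS directly; the paper's full method set and model-selection procedure are not reproduced:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic dataset with spatially autocorrelated errors (illustrative settings).
n = 200
coords = rng.uniform(0, 10, size=(n, 2))
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = np.exp(-d / 2.0)                      # exponential spatial covariance
errors = rng.multivariate_normal(np.zeros(n), cov)

x = rng.normal(size=n)
y = 1.0 + 0.5 * x + errors                  # true slope = 0.5

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
gls = sm.GLS(y, X, sigma=cov).fit()         # GLS given the (here known) covariance
print(f"OLS slope {ols.params[1]:.3f} (SE {ols.bse[1]:.3f})")
print(f"GLS slope {gls.params[1]:.3f} (SE {gls.bse[1]:.3f})")
```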
Abstract:
Both the structural and dynamical properties of 7Li at 470 and 843 K are studied by molecular dynamics simulation and the results are compared with the available experimental data. Two effective interatomic potentials are used, i.e., a potential derived from the Ashcroft pseudopotential [Phys. Lett. 23, 48 (1966)] and a recently proposed potential deduced from the neutral pseudoatom method [J. Phys.: Condens. Matter 5, 4283 (1993)]. Although the shapes of the two potential functions are very different, the majority of the properties calculated from them are very similar. The differences among the results obtained with the two interaction models are carefully discussed.
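A standard structural diagnostic in such comparisons is the radial distribution function g(r) computed from the simulated configurations. The generic sketch below computes g(r) for a periodic box of points; the random configuration is only a stand-in for MD snapshots generated with the effective Li-Li potentials:

```python
import numpy as np

rng = np.random.default_rng(3)

def radial_distribution(positions, box, nbins=50, rmax=None):
    """g(r) for a periodic cubic box, using the minimum-image convention."""
    n = len(positions)
    rmax = rmax or box / 2.0
    edges = np.linspace(0.0, rmax, nbins + 1)
    hist = np.zeros(nbins)
    for i in range(n - 1):
        dr = positions[i + 1:] - positions[i]
        dr -= box * np.round(dr / box)               # minimum-image convention
        r = np.linalg.norm(dr, axis=1)
        hist += np.histogram(r[r < rmax], bins=edges)[0]
    rho = n / box**3
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell * n / 2.0                    # pairs expected for an ideal gas
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist / ideal

pos = rng.uniform(0.0, 10.0, size=(500, 3))          # random test configuration
r, g = radial_distribution(pos, box=10.0)
print(f"g(r) at the largest bin ~ {g[-1]:.2f} (should be near 1 for random points)")
```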
Abstract:
The Office of Special Investigations at the Iowa Department of Transportation (DOT) collects falling weight deflectometer (FWD) data on a regular basis to evaluate pavement structural conditions. The primary objective of this study was to develop a fully automated software system for rapid processing of the FWD data, along with a user manual. The software system automatically reads the FWD raw data collected by the JILS-20 type FWD machine that the Iowa DOT owns, and processes and analyzes the collected data with the rapid prediction algorithms developed during the phase I study. This system smoothly integrates the FWD data analysis algorithms and the computer program used to collect the pavement deflection data. It can be used to assess pavement condition, estimate remaining pavement life, and eventually help the Iowa DOT pavement management team assess pavement rehabilitation strategies. This report describes the developed software in detail and can also be used as a user manual for conducting simulation studies and detailed analyses.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed yield remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
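The statistical core of this procedure, the non-parametric link between the two conductivities, can be sketched with a bivariate kernel density estimate from which log hydraulic conductivity is drawn conditionally on an ERT-derived electrical conductivity value. The data and the linear relation below are synthetic, and the sequential spatial conditioning of the actual algorithm is omitted:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

# Synthetic collocated borehole data: electrical conductivity and log hydraulic
# conductivity linked by an assumed (illustrative) noisy linear relation.
sigma_e = rng.normal(20.0, 5.0, size=300)                    # mS/m, synthetic
log_k = -5.0 + 0.08 * sigma_e + rng.normal(0.0, 0.3, 300)    # synthetic relation

kde = gaussian_kde(np.vstack([sigma_e, log_k]))              # bivariate kernel density

def sample_logk_given_sigma(sigma_obs, n=1000, grid=np.linspace(-8.0, -2.0, 400)):
    """Sample log-K from the conditional density p(log K | sigma_e = sigma_obs)."""
    dens = kde(np.vstack([np.full_like(grid, sigma_obs), grid]))
    dens /= dens.sum()
    return rng.choice(grid, size=n, p=dens)

draws = sample_logk_given_sigma(25.0)
print(f"conditional mean log K at sigma_e = 25 mS/m: {draws.mean():.2f}")
```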
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but was not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate levels, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was carried out with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, data partitioning was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top method-complexity approach was adopted and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multi-Gaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme-value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Within the classification methods, probabilistic neural networks (PNN) proved to be better adapted for modeling high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions.
In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient indoor radon decision making.
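Of the exploratory tools named above, k-nearest-neighbor interpolation is the simplest to illustrate. The sketch below performs inverse-distance-weighted KNN prediction on synthetic measurement locations and radon-like values; the real analysis additionally involves declustering, variography, GRNN and the simulation methods discussed above:

```python
import numpy as np

rng = np.random.default_rng(5)

def knn_interpolate(xy_known, values, xy_query, k=5):
    """Inverse-distance-weighted prediction from the k closest measurements."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]                       # k closest measurements
    w = 1.0 / (np.take_along_axis(d, idx, axis=1) + 1e-9)    # inverse-distance weights
    return np.sum(w * values[idx], axis=1) / np.sum(w, axis=1)

xy = rng.uniform(0, 100, size=(500, 2))                      # measurement locations (km)
radon = rng.lognormal(mean=np.log(80), sigma=0.8, size=500)  # Bq/m3, synthetic values
grid = np.array([[25.0, 40.0], [60.0, 75.0]])                # two query locations
print(knn_interpolate(xy, radon, grid, k=10))
```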
Abstract:
The present study proposes a modification of one of the most frequently applied effect size procedures in single-case data analysis: the percent of nonoverlapping data. In contrast to other techniques, the calculation and interpretation of this procedure are straightforward, and it can easily be complemented by visual inspection of the graphed data. Although the percent of nonoverlapping data has been found to perform reasonably well in N = 1 data, the magnitude of the effect estimates it yields can be distorted by trend and autocorrelation. Therefore, the data correction procedure focuses on removing the baseline trend from the data prior to estimating the change in behavior produced by the intervention. A simulation study is carried out in order to compare the original and the modified procedures under several experimental conditions. The results suggest that the new proposal is unaffected by trend and autocorrelation and can be used in the case of unstable baselines and sequentially related measurements.
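The logic of the modified procedure, as described above, can be sketched in a few lines: fit the baseline (phase A) trend, remove its projection from both phases, and then compute the percent of nonoverlapping data for the intervention phase. The ordinary-least-squares detrending and the toy data below are illustrative assumptions; the paper's exact correction may differ:

```python
import numpy as np

def pnd_detrended(baseline, treatment, expected_increase=True):
    """Percent of nonoverlapping data after removing the baseline trend."""
    tA = np.arange(len(baseline))
    slope, intercept = np.polyfit(tA, baseline, 1)       # OLS trend of phase A
    tB = np.arange(len(baseline), len(baseline) + len(treatment))
    base_resid = baseline - (intercept + slope * tA)     # detrended baseline
    treat_resid = treatment - (intercept + slope * tB)   # projected trend removed
    if expected_increase:
        return 100.0 * np.mean(treat_resid > base_resid.max())
    return 100.0 * np.mean(treat_resid < base_resid.min())

A = np.array([3.0, 4.0, 4.5, 5.0, 6.0])          # baseline with an upward trend
B = np.array([7.0, 8.0, 8.5, 9.0, 10.0, 11.0])   # intervention phase
print(f"detrended PND = {pnd_detrended(A, B):.1f}%")
```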
Abstract:
In order to contribute to the debate about southern glacial refugia used by temperate species and more northern refugia used by boreal or cold-temperate species, we examined the phylogeography of a widespread snake species (Vipera berus) inhabiting Europe up to the Arctic Circle. The analysis of the mitochondrial DNA (mtDNA) sequence variation in 1043 bp of the cytochrome b gene and in 918 bp of the noncoding control region was performed with phylogenetic approaches. Our results suggest that both the duplicated control region and cytochrome b evolve at a similar rate in this species. Phylogenetic analysis showed that V. berus is divided into three major mitochondrial lineages, probably resulting from an Italian, a Balkan and a Northern (from France to Russia) refugial area in Eastern Europe, near the Carpathian Mountains. In addition, the Northern clade presents an important substructure, suggesting two sequential colonization events in Europe. First, the continent was colonized from the three main refugial areas mentioned above during the Lower-Mid Pleistocene. Second, recolonization of most of Europe most likely originated from several refugia located outside of the Mediterranean peninsulas (Carpathian region, east of the Carpathians, France and possibly Hungary) during the Mid-Late Pleistocene, while populations within the Italian and Balkan Peninsulas fluctuated only slightly in distribution range, with larger lowland populations during glacial times and with refugial mountain populations during interglacials, as in the present time. The phylogeographical structure revealed in our study suggests complex recolonization dynamics of the European continent by V. berus, characterized by latitudinal as well as altitudinal range shifts, driven by both climatic changes and competition with related species.
Abstract:
According to Jenkyns (2010), oceanic anoxic events (OAEs) record profound changes in the climatic and paleoceanographic state of the planet and represent major disturbances in the global carbon cycle. One of the most studied OAEs on a worldwide scale is the Cenomanian-Turonian OAE 2, which is characterized by a pronounced positive excursion in carbon-isotope records and an important accumulation of organic-rich sediments. The section at Gongzha (Tibet) and the sections at Barranca and Axaxacualco (Mexico) are located in remote parts of the Tethys and show δ13C records that are well correlated with those of classical Tethyan sections. These sections, however, do not exhibit organic-rich sediments. Phosphorus Mass Accumulation Rates (PMAR) in Tibet show a pattern similar to that observed in the Tethys by Mort et al. (2007), which suggests enhanced P regeneration during the OAE 2 time interval, though there is no evidence for anoxic conditions in Tibet. P appears here to have been mainly driven by detrital influx and sea-level fluctuations. The sections at Barranca and Axaxacualco show that the Mexican carbonate platform persisted during this anoxic event, which allowed the evolution of a platform fauna otherwise not present in Tethyan sections. The persistence of this carbonate platform close to the Caribbean Igneous Plateau, which is thought to have released bio-limiting metals, is explained by local uplift, which delayed the drowning of the platform, and by a specific oceanic circulation that permitted the preservation of oligotrophic conditions in the area. The Coniacian-Santonian OAE (OAE 3) appears to have been more dependent on local conditions than OAE 2. The black shales associated with OAE 3 appear to have been restricted to shallow-water settings and epicontinental seas in areas located around the Atlantic Ocean. The sections at Olazagutia (Spain) and Ten Mile Creek - Arbor Park (USA), two potential Global Boundary Stratotype Sections and Points (GSSP) sites, are devoid of organic-rich sediments and lack a positive δ13C excursion around the C-S boundary. The Gabal Ekma section (Sinai, Egypt) exhibits accumulations of organic-rich sediments, in addition to phosphorite bone-bed layers, which may have been linked to an epicontinental upwelling zone and/or storm inputs. Our data suggest that OAE 3 is rarely expressed by truly anoxic conditions and seems to have been linked to local conditions rather than global paleoenvironmental change. The evidence for detrital P being the likely cause of P fluctuations during the OAEs studied here does not negate the idea that anoxia was the principal driver of these fluctuations in the western Tethys. However, an explanation is required as to why the P accumulation signatures are mirrored in both oxic and anoxic sedimentary successions. 'Eustatic/climatic' and 'productivity/anoxic' models may both have operated simultaneously in different parts of the world, depending on local conditions, both producing similar trends in P accumulation.
Abstract:
The objective of this work was to adapt the CROPGRO model, which is part of the DSSAT system, for simulating cowpea (Vigna unguiculata) growth and development under the soil and climate conditions of the Baixo Parnaíba region, Piauí State, Brazil. In CROPGRO, only the input parameters that define crop species, cultivar, and ecotype were changed in order to characterize the cowpea crop. Soil and climate files were created for the considered site. Field experiments without water deficit were used to calibrate the model. In these experiments, dry matter (DM), leaf area index (LAI), yield components and grain yield of cowpea (cv. BR 14 Mulato) were evaluated. The results showed a good fit for the DM and LAI estimates. The mean values of R² and mean absolute error (MAE) were, respectively, 0.95 and 264.9 kg ha-1 for DM, and 0.97 and 0.22 for LAI. The difference between observed and simulated values of plant phenology varied from 0 to 3 days. The model also showed good performance in simulating yield components, except for 100-grain weight, for which the error ranged from 20.9% to 34.3%. Considering the mean crop-yield values over the two years, the model presented an error of 5.6%.
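The two goodness-of-fit statistics reported above are easy to reproduce from paired observed and simulated values. A minimal sketch with illustrative dry-matter figures (not the trial data), computing R² here as 1 - SS_res/SS_tot, one common convention:

```python
import numpy as np

def r_squared(obs, sim):
    """Coefficient of determination of simulated vs. observed values."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mean_absolute_error(obs, sim):
    """Mean absolute difference between observed and simulated values."""
    return float(np.mean(np.abs(np.asarray(obs, float) - np.asarray(sim, float))))

# Illustrative dry-matter values (kg ha-1), not the trial data.
obs = [500.0, 1200.0, 2300.0, 3400.0, 4100.0]
sim = [450.0, 1350.0, 2100.0, 3600.0, 4000.0]
print(f"R2 = {r_squared(obs, sim):.2f}, MAE = {mean_absolute_error(obs, sim):.1f} kg ha-1")
```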