959 results for Feature scale simulation
Abstract:
The present work arose within the DFG project "Late Pleistocene, Holocene and recent geomorphodynamics in endorheic basins of the Mongolian Gobi". The study area is located in southern Mongolia, in the northern part of the Gobi Desert. Alongside parts of the Sahara (Heintzenberg, 2009), for example the Bodélé Depression of northern Chad (e.g. Washington et al., 2006a; Todd et al., 2006; Warren et al., 2007), Central Asia is regarded as a principal source region for particles entering the global atmospheric circulation (Goudie, 2009). Attention focuses in particular on the endorheic basins and their sediment deposits. The deflation-exposed surfaces of the lake basins are the main source of particles that spread as dust and sand. With regard to geomorphological landscape evolution, the relationship between basin sediments and slope deposits was simulated numerically, thereby taking up a model published by Grunert and Lehmkuhl (2004) that builds on ideas of Pye (1995). The present investigations model dispersal mechanisms at the regional scale, starting from a larger number of individual point locations. These locations are representative of the individual geomorphological system units that may contribute to the budget of aeolian geomorphodynamics. The surface cover formed by the characteristic stone pavement of the Gobi region was examined, as were, among other properties, the grain-size distributions of the surface sediments. Furthermore, a ten-year time series (January 1998 to December 2007) of meteorological data served as the basis for analysing the conditions for aeolian geomorphodynamics. The data come from 32 Mongolian state weather stations in the region, and parts of them were used for the simulations. In addition, atmospheric soundings with measurement kites were carried out to investigate atmospheric stability and its diurnal variability. The field findings, the results of the laboratory analyses and the meteorological data set served as input parameters for the modelling. Emission rates of the individual sites and the particle distribution in the 3D wind field were modelled in order to simulate the connectivity between basin sediments and slope deposits. In cases of high mechanical turbulence in the near-surface air layer (with correspondingly high wind friction velocity), neutral stability was generally observed, and the simulations of particle emission, dispersion and deposition were therefore computed under neutral stability conditions. Particle emission was calculated on the basis of a highly simplified emission model following existing studies (Laurent et al., 2006; Darmenova et al., 2009; Shao and Dong, 2006; Alfaro, 2008). Both the 3D wind field calculations and the various dispersion scenarios for aeolian sediments were realised with the commercial program LASAT® (Lagrangian Simulation of Aerosol Transport). It is based on a Lagrangian algorithm that computes the dispersion of individual particles in the wind field with statistical probability. Via sedimentation parameters, a dispersion model of the basin sediments with respect to the piedmont surfaces and mountain slopes can thus be generated. A further part of the investigations deals with the geochemical composition of the surface sediments.
This proxy was intended to help trace the simulated dispersion directions of particles from the different source regions. In the case of the Mongolian Gobi, the minerals and chemical elements in the sediments proved to be largely homogeneous. Laser ablation of individual sand grains revealed only very slight differences between source regions. The spectra of the minerals and of the elements examined point to granitic compositions. The alkali granites that are widespread in the study area (Jahn et al., 2009) proved to be mainly responsible for sediment production there. In addition to these mineral and element determinations, the light-mineral fraction was examined with respect to the characteristics of its quartz. For this purpose, the quartz content, the crystallinity and the electron spin resonance signal of the E'1 centre in oxygen vacancies of the SiO2 lattice were determined. The analyses were carried out following the methodological proposal of Sun et al. (2007) and are in principle well suited to provenance studies. However, no significant attribution to the individual source areas could be achieved with this proxy either.
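To illustrate the Lagrangian dispersion principle that underlies programs such as LASAT, here is a minimal Python sketch with hypothetical parameter values (not the program's actual algorithm, nor the setup used in this work): each particle is advected by a mean wind plus a random turbulent velocity and is deposited on reaching the ground.

```python
import numpy as np

# Minimal Lagrangian particle-dispersion sketch (illustrative only, not LASAT).
# Each particle is advected by a mean wind, perturbed by a random turbulent
# velocity, and settles with a constant deposition velocity. All parameter
# values are hypothetical.
rng = np.random.default_rng(42)

n_particles = 1000
dt = 1.0                               # time step [s]
n_steps = 3600                         # one hour of transport
u_mean = np.array([5.0, 1.0, 0.0])     # mean wind vector [m/s]
sigma_turb = 0.8                       # turbulent velocity std. dev. [m/s]
v_settling = 0.01                      # gravitational settling velocity [m/s]

# All particles start at the basin source, 1 m above ground.
pos = np.zeros((n_particles, 3))
pos[:, 2] = 1.0
deposited = np.zeros(n_particles, dtype=bool)

for _ in range(n_steps):
    active = ~deposited
    turb = rng.normal(0.0, sigma_turb, size=(active.sum(), 3))
    vel = u_mean + turb
    vel[:, 2] -= v_settling
    pos[active] += vel * dt
    # A particle that reaches the ground is deposited and stops moving.
    deposited |= pos[:, 2] <= 0.0

print(f"deposited: {deposited.sum()} of {n_particles}")
print("mean travel distance of deposited particles [m]:",
      np.linalg.norm(pos[deposited, :2], axis=1).mean().round(1))
```

In a real run the wind vector and turbulence statistics would come from the simulated 3D wind field and the stability regime described above; here they are fixed constants for brevity.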
Abstract:
The purpose of this thesis is the atomic-scale simulation of the crystal-chemical and physical (phonon, energetic) properties of some strategically important minerals for structural ceramics, biomedical and petrological applications. These properties affect the thermodynamic stability and govern mineral-environment interface phenomena, with important economic, (bio)technological, petrological and environmental implications. The minerals of interest belong to the family of phyllosilicates (talc, pyrophyllite and muscovite) and apatite (OHAp), chosen for their importance in industrial and biomedical applications (structural ceramics) and in petrophysics. In this thesis work we have applied quantum mechanical methods, formulas and knowledge to the resolution of mineralogical problems ("Quantum Mineralogy"). The chosen theoretical approach is Density Functional Theory (DFT) with the hybrid functional B3LYP, along with periodic boundary conditions that limit the portion of the mineral under analysis to the crystallographic cell. The crystalline orbitals were simulated by linear combination of Gaussian-type functions (GTO). The dispersive forces, which are important for the structural determination of phyllosilicates and not properly considered in pure DFT methods, have been included by means of a semi-empirical correction. The phonon and mechanical properties were also calculated. The equation of state, both in athermal conditions and over a wide temperature range, has been obtained by means of variations in the cell volume and the quasi-harmonic approximation. Some thermochemical properties of the minerals (isochoric and isobaric heat capacities) were calculated because of their considerable applicative importance. For the first time, three-dimensional charts of these properties at different pressures and temperatures were provided. Hydroxylapatite has been studied from the standpoint of its structural and phonon properties because of its biotechnological role; biological apatite represents the inorganic phase of vertebrate hard tissues. Numerous carbonated (hydroxyl)apatite structures were modelled by QM to cover the broadest possible spectrum of biological structural variations and to support bioceramics applications.
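For reference, the quasi-harmonic approximation invoked above is conventionally written as a volume-dependent Helmholtz free energy summed over phonon modes; this is the standard textbook form, not a formula quoted from the thesis:

```latex
F(V,T) = U_0(V)
       + \sum_{\mathbf{k},i} \frac{\hbar\,\omega_{\mathbf{k}i}(V)}{2}
       + k_B T \sum_{\mathbf{k},i}
         \ln\!\left[1 - e^{-\hbar\,\omega_{\mathbf{k}i}(V)/k_B T}\right],
\qquad
P = -\left(\frac{\partial F}{\partial V}\right)_T,
\qquad
C_V = -T\left(\frac{\partial^2 F}{\partial T^2}\right)_V .
```

Minimizing F over V at each T yields the thermal equation of state, and the isobaric heat capacity follows from the isochoric one via the standard relation C_P = C_V + T V α² K_T (thermal expansion α, isothermal bulk modulus K_T).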
Abstract:
The thesis is concerned with local trigonometric regression methods. The aim was to develop a method for the extraction of cyclical components in time series. The main results of the thesis are the following. First, a generalization of the filter proposed by Christiano and Fitzgerald is furnished for the smoothing of ARIMA(p,d,q) processes. Second, a local trigonometric filter is built and its statistical properties are derived. Third, the convergence properties of the trigonometric estimators are discussed, together with the problem of choosing the order of the model. A large-scale simulation experiment has been designed in order to assess the performance of the proposed models and methods. The results show that local trigonometric regression may be a useful tool for periodic time series analysis.
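As a rough illustration of what a local trigonometric fit involves (a generic sketch, not the estimator developed in the thesis), one can fit sine and cosine terms by kernel-weighted least squares around each time point; the frequency and bandwidth below are hypothetical tuning choices.

```python
import numpy as np

def local_trig_fit(t, y, t0, freq, bandwidth):
    """Weighted least-squares fit of a + b*cos + c*sin around t0.

    Generic sketch of a local trigonometric fit; `freq` (cycles per time
    unit) and `bandwidth` are hypothetical tuning parameters.
    """
    w = np.exp(-0.5 * ((t - t0) / bandwidth) ** 2)   # Gaussian kernel weights
    X = np.column_stack([
        np.ones_like(t),
        np.cos(2 * np.pi * freq * t),
        np.sin(2 * np.pi * freq * t),
    ])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return X[np.searchsorted(t, t0)] @ coef          # fitted value at t0

# Toy example: a noisy cycle with period 12 (e.g. monthly data).
t = np.arange(240, dtype=float)
y = 2.0 * np.sin(2 * np.pi * t / 12) \
    + np.random.default_rng(0).normal(0.0, 0.5, t.size)
cycle = np.array([local_trig_fit(t, y, ti, freq=1 / 12, bandwidth=24.0)
                  for ti in t])
```

The bandwidth controls how local the fit is, while the number of harmonics in the design matrix corresponds to the model-order choice discussed in the thesis.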
Abstract:
When considering data from many trials, it is likely that some of them present a markedly different intervention effect or exert an undue influence on the summary results. We develop a forward search algorithm for identifying outlying and influential studies in meta-analysis models. The forward search algorithm starts by fitting the hypothesized model to a small subset of likely outlier-free studies and proceeds by adding studies to this set one by one, at each step choosing the study closest to the model fitted to the current set. As each study is added, plots of estimated parameters and measures of fit are monitored, and outliers are identified by sharp changes in these forward plots. We apply the proposed outlier detection method to two real data sets: a meta-analysis of 26 studies that examines the effect of writing-to-learn interventions on academic achievement adjusting for three possible effect modifiers, and a meta-analysis of 70 studies that compares a fluoride toothpaste treatment to placebo for preventing dental caries in children. A simple simulated example is used to illustrate the steps of the proposed methodology, and a small-scale simulation study is conducted to evaluate the performance of the proposed method.
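To make the procedure concrete, here is a schematic Python sketch of the forward search loop under a simple fixed-effect pooling rule (the initialization and distance measure below are illustrative choices, not necessarily those of the paper):

```python
import numpy as np

def forward_search(effects, variances, m0=5):
    """Generic forward-search sketch for meta-analysis outlier detection.

    effects, variances: per-study effect estimates and their variances.
    m0: size of the initial (assumed outlier-free) subset, a hypothetical default.
    Returns the order in which studies enter and the pooled estimate at each step.
    """
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    n = effects.size

    def pooled(idx):
        w = 1.0 / variances[idx]          # inverse-variance (fixed-effect) weights
        return np.sum(w * effects[idx]) / np.sum(w)

    # One simple initialization: the m0 studies closest to the median effect.
    in_set = list(np.argsort(np.abs(effects - np.median(effects)))[:m0])
    path = [pooled(np.array(in_set))]

    while len(in_set) < n:
        out = [i for i in range(n) if i not in in_set]
        mu = path[-1]
        # Add the outside study with the smallest standardized distance to mu.
        dist = [abs(effects[i] - mu) / np.sqrt(variances[i]) for i in out]
        in_set.append(out[int(np.argmin(dist))])
        path.append(pooled(np.array(in_set)))
    # Sharp jumps late in `path` are the signature used to flag outliers.
    return in_set, path
```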
Abstract:
A 6-month-long, bench-scale simulation of an industrial wastewater stabilization pond (WSP) system was conducted to evaluate responses to several potential performance-enhancing treatments. The industrial WSP system consists of an anaerobic primary (1ry) WSP treating high-strength wastewater, followed by facultative secondary (2ry) and aerobic tertiary (3ry) WSPs in series treating lower-strength wastewater. The 1ry WSP was simulated with four glass aquaria fed with wastewater from the actual WSP system. The treatments examined were phosphorus supplementation (PHOS), phosphorus supplementation with pH control (PHOS+ALK), and phosphorus supplementation with pH control and effluent recycle (PHOS+ALK+RCY). The supplementary phosphorus treatment alone did not yield any significant change versus the CONTROL 1ry model pond. The average carbon-to-phosphorus ratio of the feed wastewater received from the WSP system was already 100:0.019 (i.e., 2,100 mg/l : 0.4 mg/l). The pH-control treatments (PHOS+ALK and PHOS+ALK+RCY) produced significant results, with 9 to 12 percent more total organic carbon (TOC) removal, 43 percent more volatile organic acid (VOA) generation, 78 percent more 2-ethoxyethanol and 14 percent more bis(2-chloroethyl)ether removal, and 100- to 10,000-fold increases in bacterial enzyme activity and heterotrophic bacterial numbers. Recycling a 10-percent portion of the effluent yielded less variability for certain physicochemical parameters in the PHOS+ALK+RCY 1ry model pond, but overall there was no statistically detectable improvement in performance versus no recycle. The 2ry and 3ry WSPs were also simulated in the laboratory to monitor the effect and fate of increased phosphorus loadings, as might occur if supplemental phosphorus were added to the 1ry WSP. Noticeable increases in algal growth were observed at feed phosphorus concentrations of 0.5 mg/l; however, there were no significant changes in the monitored physicochemical parameters. The effluent phosphorus concentrations from both the 2ry and 3ry model ponds did increase notably when feed phosphorus concentrations were increased from 0.5 to 1.0 mg/l.
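For clarity, the quoted C:P ratio is just the measured feed concentrations normalized to carbon = 100:

```latex
\mathrm{C:P} \;=\; 2100\ \mathrm{mg/l} : 0.4\ \mathrm{mg/l}
\;=\; 100 : \Bigl(\tfrac{0.4}{2100}\times 100\Bigr)
\;\approx\; 100 : 0.019
```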
Abstract:
Holocene climate variability is investigated in the North Pacific and North Atlantic realms, using alkenone-derived sea-surface temperature (SST) records as well as a millennial-scale simulation with a coupled atmosphere-ocean general circulation model (AOGCM). The alkenone SST data indicate a temperature increase over almost the entire North Pacific from 7 cal kyr BP to the present. A dipole pattern, with continuous cooling in the northeastern Atlantic and warming in the eastern Mediterranean Sea and the northern Red Sea, is detected in the North Atlantic realm. Similarly, SST variations are opposite in sign between the northeastern Pacific and the northeastern Atlantic. A 2300-year-long AOGCM climate simulation reveals a similar SST seesaw between the northeastern Pacific and the northeastern Atlantic on centennial time scales. Our analysis of the alkenone SST data and the model results suggests fundamental inter-oceanic teleconnections during the Holocene.
Abstract:
The present state of the preparation of an experiment on floating liquid zones to be performed in the first Spacelab flight is presented. In this experiment, a liquid bridge is to be placed between two parallel coaxial disks (in the Fluid Physics Module) and subjected to very precise disturbances in order to check the theoretical predictions about its stability limits and its behavior under mechanical inputs: stretching of the zone, filling or removal of liquid, axial vibration, rotation, misalignment, etc. Several aspects of the research are introduced: 1) relevance of the study; 2) theoretical predictions of the liquid behavior regarding the floating-zone stability limits and the expected response to vibrational and rotational disturbances; 3) ground-support experiments using the Plateau technique or small-scale simulation; 4) instrumental aspects of the experimentation: utilization of the Fluid Physics Module and post-flight data analysis; 5) the research program for future flights.
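For reference, the classical stability limit that such experiments probe is the Plateau-Rayleigh limit for a cylindrical liquid bridge between coaxial disks (a standard result, not a prediction specific to this paper):

```latex
L_{\max} = 2\pi R
\qquad\Longleftrightarrow\qquad
\Lambda \equiv \frac{L}{2R} < \pi \ \ \text{(stable)}
```

That is, a cylindrical bridge of radius R becomes unstable once its length L exceeds its circumference; rotation, axial accelerations and unequal disk radii each shift this limit, which is what the in-flight disturbances are designed to test.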
Abstract:
Systems biology is based on computational modelling and simulation of large networks of interacting components. Models may be intended to capture processes, mechanisms, components and interactions at different levels of fidelity. Input data are often large and geographically dispersed, and may require the computation to be moved to the data, not vice versa. In addition, complex system-level problems require collaboration across institutions and disciplines. Grid computing can offer robust, scalable solutions for distributed data, compute and expertise. We illustrate some of the range of computational and data requirements in systems biology with three case studies: one requiring large computation but small data (orthologue mapping in comparative genomics), a second involving complex terabyte data (the Visible Cell project) and a third that is both computationally and data-intensive (simulations at multiple temporal and spatial scales). Authentication, authorisation and audit systems are currently not easily scalable and may present bottlenecks for distributed collaboration, particularly where outcomes may be commercialised. Challenges remain in providing lightweight standards to facilitate the penetration of robust, scalable grid-type computing into diverse user communities to meet the evolving demands of systems biology.
Abstract:
The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios, 2) flexibility, for testing new protocols or applications in diverse settings, and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real-time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata from real applications in the emulation system to reproduce the realistic traffic conditions. On the other hand, the emulation system benefits from receiving the continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
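The time-dilation idea behind SVEET can be illustrated schematically; the following Python sketch shows the general mechanism under assumed semantics (the class name and factor are hypothetical, and this is not SVEET's actual implementation):

```python
import time

class DilatedClock:
    """Sketch of time-dilated synchronization (illustrative, not SVEET's API).

    With a time-dilation factor (TDF) of k, k seconds of wall-clock time on
    the real host correspond to 1 second of virtual (simulation) time,
    letting a simulator that runs k times slower than real time stay
    synchronized with real network entities.
    """

    def __init__(self, tdf: float):
        self.tdf = tdf                       # e.g. tdf=10: simulator may run 10x slower
        self.start_wall = time.monotonic()

    def virtual_now(self) -> float:
        """Virtual seconds elapsed since the experiment started."""
        return (time.monotonic() - self.start_wall) / self.tdf

    def sleep_virtual(self, dt: float) -> None:
        """Block a real application for `dt` virtual seconds."""
        time.sleep(dt * self.tdf)

# Usage sketch: a real host waits 0.1 virtual seconds between packet sends,
# which is 1 wall-clock second at TDF = 10, time enough for the
# discrete-event simulator to process the corresponding virtual interval.
clock = DilatedClock(tdf=10.0)
clock.sleep_virtual(0.1)
print(f"virtual time elapsed: {clock.virtual_now():.2f} s")
```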
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2015.
Abstract:
Forests have a prominent role in carbon storage and sequestration. Anthropogenic forcing has the potential to accelerate climate change and alter the distribution of forests. How forests redistribute spatially and temporally in response to climate change can alter their carbon sequestration potential. The driving question for this research was: how does plant migration driven by climate change impact vegetation distribution and carbon sequestration potential over continental scales? Large-scale simulation of the equilibrium response of vegetation and carbon to future climate change has shown relatively modest net gains in sequestration potential, but studies of the transient response have been limited to the sub-continent or landscape scale. The transient response depends on fine-scale processes such as competition, disturbance, landscape characteristics, dispersal, and other factors, which makes it computationally prohibitive at large domain sizes. To address this, this research used an advanced mechanistic model (the Ecosystem Demography model, ED) that is individual-based but pseudo-spatial, reducing computational intensity while retaining the fine-scale processes that drive the transient response. First, the model was validated against remote sensing data for current plant functional type distribution in northern North America with a current climatology, and then a future climatology was used to predict the potential equilibrium redistribution of vegetation and carbon under future climate change. Next, to enable transient calculations, a method was developed to simulate the spatially explicit process of dispersal in pseudo-spatial modeling frameworks. Finally, the new dispersal sub-model was implemented in the mechanistic ecosystem model, and a model experiment was designed and completed to estimate the transient response of vegetation and carbon to climate change. The potential equilibrium forest response to future climate change was found to be large, with large gross changes in the distribution of plant functional types and comparatively smaller changes in net carbon sequestration potential for the region. However, the transient response was found to be on the order of centuries, and to depend strongly on disturbance rates and dispersal distances. Future work should explore the impact of species-specific disturbance and dispersal rates, landscape fragmentation, and other processes that influence migration rates and have been simulated at the sub-continent scale, but now at continental scales, and explore a range of alternative future climate scenarios as they continue to be developed.
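One common way to represent dispersal in a spatially implicit framework, offered here as a generic sketch rather than the sub-model developed in this dissertation, is to redistribute seed output among representative sites with a distance-weighted exponential kernel (all names and parameter values hypothetical):

```python
import numpy as np

def dispersal_matrix(coords, mean_dist):
    """Generic exponential dispersal kernel between representative sites.

    coords: (n, 2) array of site locations [km].
    mean_dist: mean dispersal distance [km] (hypothetical parameter).
    Row i gives the fraction of site i's seed output arriving at each site.
    """
    coords = np.asarray(coords, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = np.exp(-d / mean_dist)
    return K / K.sum(axis=1, keepdims=True)   # normalize each source row

# Toy usage: three sites on a 100 km transect; seeds produced only at site 0.
coords = [(0.0, 0.0), (50.0, 0.0), (100.0, 0.0)]
seeds = np.array([1000.0, 0.0, 0.0])
arriving = seeds @ dispersal_matrix(coords, mean_dist=25.0)
print(arriving.round(1))   # seed rain at each site after one dispersal step
```

Because the kernel couples sites only through pairwise distances, it can be grafted onto a pseudo-spatial model without tracking every individual's explicit location, which is the computational point made above.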
Abstract:
Simulated-annealing-based conditional simulations provide a flexible means of quantitatively integrating diverse types of subsurface data. Although such techniques are being increasingly used in hydrocarbon reservoir characterization studies, their potential in environmental, engineering and hydrological investigations is still largely unexploited. Here, we introduce a novel simulated annealing (SA) algorithm geared towards the integration of high-resolution geophysical and hydrological data which, compared to more conventional approaches, provides significant advancements in the way that large-scale structural information in the geophysical data is accounted for. Model perturbations in the annealing procedure are made by drawing from a probability distribution for the target parameter conditioned to the geophysical data. This is the only place where geophysical information is utilized in our algorithm, which is in marked contrast to other approaches where model perturbations are made through the swapping of values in the simulation grid and agreement with soft data is enforced through a correlation coefficient constraint. Another major feature of our algorithm is the way in which available geostatistical information is utilized. Instead of constraining realizations to match a parametric target covariance model over a wide range of spatial lags, we constrain the realizations only at smaller lags where the available geophysical data cannot provide enough information. Thus we allow the larger-scale subsurface features resolved by the geophysical data to have much more direct control over the output realizations. Further, since the only component of the SA objective function required in our approach is a covariance constraint at small lags, our method has improved convergence and computational efficiency over more traditional methods. Here, we present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on a synthetic data set, and then applied to data collected at the Boise Hydrogeophysical Research Site.
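To fix ideas, the following heavily simplified 1-D Python sketch illustrates the two features emphasized above, perturbations drawn from a distribution conditioned on collocated geophysical data and an objective restricted to small-lag covariances (all distributions and parameters are hypothetical, and this is not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_cov(x, max_lag):
    """Empirical autocovariance of a 1-D field for lags 0..max_lag."""
    xc = x - x.mean()
    return np.array([np.mean(xc[:len(xc) - h] * xc[h:])
                     for h in range(max_lag + 1)])

# Hypothetical setup: a target small-lag covariance model and a
# geophysically conditioned proposal (local mean from geophysics).
n, max_lag = 200, 5
target_cov = 1.0 * np.exp(-np.arange(max_lag + 1) / 3.0)   # assumed model
geo_mean = np.sin(np.linspace(0, 4 * np.pi, n))            # stand-in geophysical trend
geo_sd = 0.5

x = geo_mean + rng.normal(0.0, geo_sd, n)    # initial realization
obj = np.sum((empirical_cov(x, max_lag) - target_cov) ** 2)
T = 1.0
for _ in range(20000):
    i = rng.integers(n)
    old = x[i]
    x[i] = rng.normal(geo_mean[i], geo_sd)   # draw conditioned on geophysics
    new_obj = np.sum((empirical_cov(x, max_lag) - target_cov) ** 2)
    # Metropolis acceptance; temperature decays geometrically.
    if new_obj > obj and rng.random() >= np.exp((obj - new_obj) / T):
        x[i] = old                           # reject the perturbation
    else:
        obj = new_obj
    T *= 0.9997
```

Because the objective involves only lags 0 to 5, the larger-scale structure of the realization is free to follow the geophysical trend through the proposal distribution, mirroring the division of labour described in the abstract.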
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity logged at collocated wells and to surface resistivity measurements, which are available throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. Then a stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities is applied. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure yields remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
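The kernel-density step can be sketched generically in Python (hypothetical data, an assumed log-linear petrophysical relation, and scipy's gaussian_kde as a stand-in; not the authors' code): estimate the joint density of collocated electrical and hydraulic conductivities, then draw hydraulic conductivity from the conditional slice at the locally observed electrical value.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

# Hypothetical collocated log-conductivity training data from boreholes:
# electrical (sigma) and hydraulic (K) values at the same depths.
log_sigma = rng.normal(-3.0, 0.5, 300)
log_K = 0.8 * log_sigma + rng.normal(0.0, 0.3, 300)   # assumed in situ relation

# Non-parametric joint density p(log_sigma, log_K).
kde = gaussian_kde(np.vstack([log_sigma, log_K]))

def draw_K_given_sigma(sigma_obs, n_grid=200):
    """Sample log_K from p(log_K | log_sigma = sigma_obs) on a grid."""
    k_grid = np.linspace(log_K.min() - 1.0, log_K.max() + 1.0, n_grid)
    pts = np.vstack([np.full(n_grid, sigma_obs), k_grid])
    cond = kde(pts)
    cond /= cond.sum()                        # normalize the conditional slice
    return rng.choice(k_grid, p=cond)

# At a non-sampled location where ERT gives log_sigma = -2.5, simulate K:
print(draw_K_given_sigma(-2.5))
```

In the sequential simulation proper, each such draw would also be conditioned on previously simulated neighbours; that geostatistical conditioning is omitted here for brevity.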
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale for the purpose of improving predictions of groundwater flow and solute transport. However, extending corresponding approaches to the regional scale still represents one of the major challenges in the domain of hydrogeophysics. To address this problem, we have developed a regional-scale data integration methodology based on a two-step Bayesian sequential simulation approach. Our objective is to generate high-resolution stochastic realizations of the regional-scale hydraulic conductivity field in the common case where there exist spatially exhaustive but poorly resolved measurements of a related geophysical parameter, as well as highly resolved but spatially sparse collocated measurements of this geophysical parameter and the hydraulic conductivity. To integrate this multi-scale, multi-parameter database, we first link the low- and high-resolution geophysical data via a stochastic downscaling procedure. This is followed by relating the downscaled geophysical data to the high-resolution hydraulic conductivity distribution. After outlining the general methodology of the approach, we demonstrate its application to a realistic synthetic example where we consider as data high-resolution measurements of the hydraulic and electrical conductivities at a small number of borehole locations, as well as spatially exhaustive, low-resolution estimates of the electrical conductivity obtained from surface-based electrical resistivity tomography. The different stochastic realizations of the hydraulic conductivity field obtained using our procedure are validated by comparing their solute transport behaviour with that of the underlying "true" hydraulic conductivity field. We find that, even in the presence of strong subsurface heterogeneity, our proposed procedure allows for the generation of faithful representations of the regional-scale hydraulic conductivity structure and reliable predictions of solute transport over long, regional-scale distances.