891 results for data-driven simulation


Relevance: 30.00%

Abstract:

Arid and semiarid landscapes comprise nearly a third of the Earth's total land surface, and these areas are coming under increasing land-use pressure. Despite their low productivity, these lands are not barren; rather, they consist of fragile ecosystems vulnerable to anthropogenic disturbance.

The purpose of this thesis is threefold: (I) to develop and test a process model of wind-driven desertification, (II) to evaluate next-generation process-relevant remote monitoring strategies for use in arid and semiarid regions, and (III) to identify elements for effective management of the world's drylands.

In developing the process model of wind-driven desertification in arid and semiarid lands, field, remote sensing, and modeling observations from a degraded Mojave Desert shrubland are used. This model focuses on aeolian removal and transport of dust, sand, and litter as the primary mechanisms of degradation: killing plants by burial and abrasion, interrupting natural processes of nutrient accumulation, and allowing the loss of soil resources by abiotic transport. This model is tested in field sampling experiments at two sites and is extended by Fourier Transform and geostatistical analysis of high-resolution imagery from one site.
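
As a rough illustration of the Fourier analysis step (not code from the thesis; the function and its parameters are hypothetical), a radially averaged power spectrum of a high-resolution image can reveal characteristic spatial scales of vegetation patterning:

    import numpy as np

    def radial_power_spectrum(image, pixel_size_m=1.0):
        """Radially averaged 2D power spectrum of a grayscale image array.

        Peaks at non-zero wavenumbers point to characteristic spatial scales,
        e.g. regular shrub spacing or wind-aligned streaks.  Assumes a square
        image for the frequency scaling.
        """
        img = np.asarray(image, dtype=float)
        img = img - img.mean()                                 # remove the mean (DC) level
        power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        ny, nx = img.shape
        y, x = np.indices((ny, nx))
        r = np.hypot(x - nx // 2, y - ny // 2).astype(int)     # integer radial bins
        counts = np.maximum(np.bincount(r.ravel()), 1)
        spectrum = np.bincount(r.ravel(), weights=power.ravel()) / counts
        freqs = np.arange(spectrum.size) / (nx * pixel_size_m) # cycles per metre
        return freqs, spectrum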

Next, the use of hyperspectral remote sensing data is evaluated as a substantive input to dryland remote monitoring strategies. In particular, the efficacy of spectral mixture analysis (SMA) in discriminating vegetation and soil types and determining vegetation cover is investigated. The results indicate that hyperspectral data may be less useful than often thought in determining vegetation parameters. Its usefulness in determining soil parameters, however, may be leveraged by developing simple multispectral classification tools that can be used to monitor desertification.
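
A minimal sketch of the core of spectral mixture analysis, assuming a simple linear mixing model with a hypothetical endmember set (green vegetation, soil, shade); this is illustrative only, not the analysis used in the thesis:

    import numpy as np

    def unmix(pixel_spectrum, endmembers):
        """Linear spectral mixture analysis for one pixel.

        pixel_spectrum : (n_bands,) reflectance vector
        endmembers     : (n_bands, n_endmembers) matrix, e.g. columns for green
                         vegetation, soil and shade (a hypothetical endmember set)

        Returns the endmember fractions (softly constrained to sum to one) and
        the RMS error of the reconstruction.
        """
        n_bands, n_end = endmembers.shape
        # Append the sum-to-one constraint as an extra, heavily weighted equation.
        A = np.vstack([endmembers, np.full((1, n_end), 1e3)])
        b = np.concatenate([pixel_spectrum, [1e3]])
        fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
        rmse = np.sqrt(np.mean((endmembers @ fractions - pixel_spectrum) ** 2))
        return fractions, rmse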

Finally, the elements required for effective monitoring and management of arid and semiarid lands are discussed. Several large-scale, multi-site field experiments are proposed to clarify the role of wind as a landscape and degradation process in drylands. The role of remote sensing in monitoring the world's drylands is discussed in terms of optimal remote sensing platform characteristics and the surface phenomena that may be monitored to identify areas at risk of desertification. A desertification indicator that unifies consideration of environmental and human variables is proposed.

Relevance: 30.00%

Abstract:

In this paper, we apply an analytical model [V.V. Kulagin et al., Phys. Plasmas 14, 113101 (2007)] to describe the acceleration of an ultra-thin electron layer by a schematic single-cycle laser pulse and compare with one-dimensional particle-in-cell (1D-PIC) simulations. This is in the context of creating a relativistic mirror for coherent backscattering and supplements two related papers in this EPJD volume. The model is shown to reproduce the 1D-PIC results almost quantitatively for the short time of a few laser periods sufficient for the backscattering of ultra-short probe pulses.

Relevance: 30.00%

Abstract:

The strength of materials at extreme pressures (>1 Mbar, or 100 GPa) and high strain rates (10^6-10^8 s^-1) is not well characterized. The goal of the research outlined in this thesis is to study the strength of tantalum (Ta) under these conditions. The Omega Laser at the Laboratory for Laser Energetics in Rochester, New York is used to create such extreme conditions. Targets are designed with ripples or waves on the surface, and these samples are subjected to high pressures using Omega's high-energy laser beams. In these experiments, the observational parameter is the Richtmyer-Meshkov (RM) instability in the form of ripple growth on single-mode ripples. The experimental platform is the "ride-along" laser compression recovery experiment, which provides a way to recover specimens that have been subjected to high pressures. Six experiments are performed on the Omega laser using single-mode tantalum targets at different laser energies, where the energy is the amount of laser energy that impinges on the target. For each target, the growth factor is obtained by comparing the profile of the ripples before and after the experiment. The growth factor increased with increasing energy.
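
A minimal sketch of how a growth factor could be extracted from pre- and post-shot surface profiles, assuming the amplitude of the imposed single-mode ripple is read off a Fourier transform of each profile (the function names and sampling parameters are hypothetical, not the thesis analysis):

    import numpy as np

    def ripple_amplitude(profile, wavelength_um, dx_um):
        """Amplitude of the imposed single-mode ripple in a surface profile."""
        profile = np.asarray(profile, dtype=float)
        profile = profile - profile.mean()
        freqs = np.fft.rfftfreq(profile.size, d=dx_um)        # cycles per micron
        spectrum = np.fft.rfft(profile)
        k = np.argmin(np.abs(freqs - 1.0 / wavelength_um))    # bin of the imposed mode
        return 2.0 * np.abs(spectrum[k]) / profile.size       # sinusoid amplitude

    def growth_factor(profile_before, profile_after, wavelength_um, dx_um):
        """Growth factor = post-shot ripple amplitude / pre-shot ripple amplitude."""
        a0 = ripple_amplitude(profile_before, wavelength_um, dx_um)
        a1 = ripple_amplitude(profile_after, wavelength_um, dx_um)
        return a1 / a0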

Engineering simulations are used to interpret the measurements and correlate the growth factor with a measure of strength. To validate the engineering constitutive model for tantalum, a series of simulations is performed using the code Eureka, which is based on the Optimal Transportation Meshfree (OTM) method. Two configurations are studied in the simulations: RM instabilities in single-mode and multimode ripples. Six simulations are performed for the single-mode ripple configuration of the RM instability experiment, with drives corresponding to the laser energies used in the experiments. Each successive simulation is performed at a higher drive energy, and the growth factor is observed to increase with increasing energy. Overall, there is favorable agreement between the simulations and the experiments; the peak growth factors agree to within 10%. For the multimode simulations, the goal is to assist in the design of the laser-driven experiments on the Omega laser. A series of three-mode and four-mode patterns is simulated at various energies and the resulting growth of the RM instability is computed. Based on the results of these simulations, a configuration is selected for the multimode experiments. The simulations also serve as validation of the constitutive model and the material parameters used for tantalum.

By designing samples with initial perturbations in the form of single-mode and multimode ripples and subjecting them to high pressures, the Richtmyer-Meshkov instability is investigated in both laser compression experiments and simulations. By correlating the growth of these ripples with measures of strength, a better understanding of the strength of tantalum at high pressures is achieved.

Relevance: 30.00%

Abstract:

This thesis describes a series of experimental, numerical, and analytical studies of the Caltech magnetohydrodynamically (MHD)-driven plasma jet experiment. The plasma jet is created via a capacitor discharge that powers a magnetized coaxial planar-electrode system; the jet is collimated and accelerated by MHD forces.

We present three-dimensional ideal MHD finite-volume simulations of the plasma jet experiment using an astrophysical magnetic tower as the baseline model. A compact magnetic energy/helicity injection is used in the simulation, analogous to both the experiment and astrophysical situations. Detailed analysis provides a comprehensive description of the interplay of magnetic force, pressure, and flow effects. We delineate both the jet structure and the transition process that converts the injected magnetic energy into other forms.

When the experimental jet is sufficiently long, it undergoes a global kink instability and then a secondary, local Rayleigh-Taylor instability caused by the lateral acceleration of the kink instability. We present an MHD theory of the Rayleigh-Taylor instability on the cylindrical surface of a plasma flux rope in the presence of a lateral external gravity. The Rayleigh-Taylor instability is found to couple to the classic current-driven instability, resulting in a new type of hybrid instability. This coupled instability, produced by the combination of a helical magnetic field, the curvature of the cylindrical geometry, and lateral gravity, is fundamentally different from the classic magnetic Rayleigh-Taylor instability occurring at a two-dimensional planar interface.
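
For reference, the classic incompressible magnetic Rayleigh-Taylor growth rate at a planar interface with a uniform tangential field B_0, against which the hybrid instability is contrasted (a standard textbook result quoted here for context, not a formula from the thesis):

    % Classic magnetic Rayleigh-Taylor growth rate at a planar interface
    % (incompressible fluids, uniform field B_0 tangential to the interface):
    \gamma^2 = g k \frac{\rho_2 - \rho_1}{\rho_2 + \rho_1}
             - \frac{2\,(\mathbf{k}\cdot\mathbf{B}_0)^2}{\mu_0\,(\rho_1 + \rho_2)}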

In the experiment, this instability cascade from macro-scale to micro-scale eventually leads to the failure of MHD. When the Rayleigh-Taylor instability becomes nonlinear, it compresses and pinches the plasma jet to a scale smaller than the ion skin depth and triggers fast magnetic reconnection. We built a specially designed high-speed 3D magnetic probe and successfully detected the high-frequency magnetic fluctuations of broadband whistler waves associated with the fast reconnection. The magnetic fluctuations exhibit power-law spectra, and the magnetic components of single-frequency whistler waves are found to be circularly polarized regardless of the angle between the wave propagation direction and the background magnetic field.

Relevance: 30.00%

Abstract:

Optical coherence tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system. (1) Obtaining an OCT image is not easy: it either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects. (2) Interpreting an OCT image is also hard, and this challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, and even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). OCT uses infrared light for illumination in order to stay noninvasive, but the downside is that photons at such long wavelengths can penetrate only a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only lets us efficiently generate as many OCT images of objects of arbitrary structure and shape as we want on a common desktop computer, it also provides the underlying ground truth of the simulated images, because we specify that structure at the start of each simulation. This is one of the key contributions of this thesis. Building such a tool required a thorough understanding of the signal formation process, careful implementation of the importance-sampling/photon-splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh intersections, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques that are explained in detail later in the thesis.
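
A toy one-dimensional sketch of the photon-splitting idea (illustrative only; the thesis platform is three-dimensional and voxel-based, and all parameter values and names here are hypothetical):

    import numpy as np

    rng = np.random.default_rng(0)

    def backscatter_signal(mu_s=10.0, mu_a=0.1, depth_mm=1.0,
                           n_photons=2000, n_split=3, max_splits=2):
        """Toy 1D Monte Carlo with photon splitting.

        Photons random-walk in depth z (mm).  For the first `max_splits`
        scattering events each photon is split into `n_split` copies carrying
        weight w / n_split, so rare, deep backscattering paths are sampled more
        densely without changing the expected detected weight.  Returns the
        total weight returned to the surface (z <= 0).
        """
        mu_t = mu_s + mu_a
        detected = 0.0
        for _ in range(n_photons):
            # Stack entries: (depth z, direction +-1, weight, splits remaining)
            stack = [(0.0, +1.0, 1.0 / n_photons, max_splits)]
            while stack:
                z, d, w, s = stack.pop()
                z += d * rng.exponential(1.0 / mu_t)       # sample a free path
                if z <= 0.0:
                    detected += w                          # escaped upward: detected
                    continue
                if z >= depth_mm or w < 1e-12:
                    continue                               # transmitted or negligible
                w *= mu_s / mu_t                           # survival after absorption
                if s > 0:                                  # photon splitting
                    for _ in range(n_split):
                        new_d = 1.0 if rng.random() < 0.5 else -1.0
                        stack.append((z, new_d, w / n_split, s - 1))
                else:                                      # ordinary continuation
                    new_d = 1.0 if rng.random() < 0.5 else -1.0
                    stack.append((z, new_d, w, s))
        return detected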

Next we address the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. Solving this problem would allow an OCT image to be interpreted completely and precisely without the help of a trained expert, and it turns out we can do even better. For simple structures we reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) about 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing an effectively unlimited amount of data in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure, which predicts the length of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from deep learning can further improve performance.
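
A hedged sketch of the classify-then-regress "committee of experts" idea, using random forests from scikit-learn as stand-ins for the thesis models; the feature extraction, the zero-padded target array, and the model choices are all assumptions, not the thesis implementation:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    class CommitteeOfExperts:
        """Stage 1: a classifier predicts a discrete structure label for each
        feature vector (e.g. the number/type of layers present).
        Stage 2: a regressor trained only on examples of that structure
        predicts the per-layer lengths."""

        def __init__(self):
            self.classifier = RandomForestClassifier(n_estimators=200, random_state=0)
            self.experts = {}                     # one regressor per structure label

        def fit(self, X, structure_labels, layer_lengths):
            # layer_lengths: (n_samples, max_layers) array, zero-padded where absent.
            self.classifier.fit(X, structure_labels)
            for label in np.unique(structure_labels):
                mask = structure_labels == label
                expert = RandomForestRegressor(n_estimators=200, random_state=0)
                expert.fit(X[mask], layer_lengths[mask])
                self.experts[label] = expert
            self.n_targets = layer_lengths.shape[1]
            return self

        def predict(self, X):
            labels = self.classifier.predict(X)
            lengths = np.zeros((X.shape[0], self.n_targets))
            for label in np.unique(labels):
                mask = labels == label
                pred = self.experts[label].predict(X[mask])
                lengths[mask] = pred.reshape(mask.sum(), -1)
            return labels, lengths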

It is worth pointing out that solving the inverse problem automatically improves the effective imaging depth: previously the lower half of an OCT image (i.e., greater depth) could hardly be seen, but it now becomes fully resolved. Interestingly, although the OCT signals that make up the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is another case where artificial intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task requires fully annotated OCT images, and many of them (hundreds or even thousands), which would clearly be impossible without a powerful simulation tool like the one developed in this thesis.

Relevance: 30.00%

Abstract:

Super-resolution filters based on a Gaussian beam are proposed to reduce the size of the focused spot in optical data storage systems. Both amplitude filters and pure-phase filters are designed to obtain the desired intensity distributions. Their performance is analysed in detail and compared with that of filters based on a plane wave, and their energy utilization is presented. The simulation results show that the designed super-resolution filters are favourable for use in optical data storage systems in terms of both performance and energy utilization.
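
A hedged sketch of how such filters can be evaluated numerically, assuming the scalar Fraunhofer approximation in which the focal-plane field is proportional to the Fourier transform of the filtered pupil field; the filter zone radii below are hypothetical, not the designs in the paper:

    import numpy as np

    def focal_intensity(n=512, aperture_radius=1.0, w0=0.7, filter_fn=None):
        """Focal-plane intensity of a filtered Gaussian beam (Fraunhofer sketch).

        filter_fn(r) returns the (possibly complex) pupil transmission at the
        normalized radius r; None means an open, unfiltered aperture.
        """
        x = np.linspace(-2.0, 2.0, n)
        X, Y = np.meshgrid(x, x)
        R = np.hypot(X, Y)
        pupil = np.exp(-(R / w0) ** 2) * (R <= aperture_radius)  # Gaussian beam in the pupil
        if filter_fn is not None:
            pupil = pupil * filter_fn(R)
        field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
        return np.abs(field) ** 2

    # Example: a three-zone binary phase filter (0 / pi / 0); the radii are hypothetical.
    three_zone = lambda r: np.where((r > 0.4) & (r < 0.7), -1.0, 1.0)
    psf_filtered = focal_intensity(filter_fn=three_zone)
    psf_open = focal_intensity()
    # Comparing the central-lobe widths and encircled energies of the two PSFs gives
    # the spot-size reduction and the energy utilization of the filter.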

Relevance: 30.00%

Abstract:

We develop and test a method to estimate relative abundance from catch and effort data using neural networks. Most stock assessment models use time series of relative abundance as their major source of information on abundance levels. These time series of relative abundance are frequently derived from catch-per-unit-of-effort (CPUE) data using generalized linear models (GLMs), which attempt to remove variation in CPUE that is not related to the abundance of the population. However, GLMs are restricted in the types of relationships they allow between CPUE and the explanatory variables. An alternative approach is to use structural models based on scientific understanding to develop complex non-linear relationships between CPUE and the explanatory variables; unfortunately, the scientific understanding required to develop such models may not be available. In contrast to structural models, neural networks use the data themselves to estimate the structure of the non-linear relationship between CPUE and the explanatory variables, and may therefore provide a better alternative when the structure of the relationship is uncertain. We use simulated data based on a habitat-based method to test the neural network approach and to compare it with the GLM approach. Cross-validation and simulation tests show that the neural network performed better than nominal effort and the GLM approach, although the improvement over GLMs is not substantial. We apply the neural network model to CPUE data for bigeye tuna (Thunnus obesus) in the Pacific Ocean.
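
A minimal sketch of the comparison, using a linear model as a GLM-like baseline and a scikit-learn multilayer perceptron as the neural network; the covariates and data below are synthetic stand-ins, not the simulated habitat-based data used in the paper:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical explanatory variables (e.g. year, latitude, longitude, gear
    # depth, temperature) and log(CPUE) with a non-linear dependence built in.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 5))
    y = 0.5 * X[:, 0] + np.sin(2.0 * X[:, 1]) + rng.normal(scale=0.3, size=500)

    glm_like = LinearRegression()                      # linear-in-covariates baseline
    nnet = make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(20, 20),
                                      max_iter=5000, random_state=0))

    for name, model in [("GLM-like baseline", glm_like), ("neural network", nnet)]:
        score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(f"{name}: cross-validated R^2 = {score:.2f}")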

Relevance: 30.00%

Abstract:

Background: The highly demanding computational requirements of protein motion simulations make it difficult to obtain information related to protein motion. On the one hand, molecular dynamics simulation requires huge computational resources to achieve satisfactory motion simulations. On the other hand, less accurate procedures, such as interpolation methods, do not generate realistic morphs from the kinematic point of view. Analyzing a protein's movement is very similar to analyzing a serial robot; thus, it is possible to treat the protein chain as a serial mechanism composed of rotational degrees of freedom. Recently, based on this hypothesis, new methodologies have arisen from mechanism and robot kinematics to simulate protein motion. The probabilistic roadmap method, which discretizes the protein configurational space against a scoring function, and the kinetostatic compliance method, which minimizes the torques that appear in bonds, aim to simulate protein motion at a reduced computational cost. Results: In this paper a new viewpoint for protein motion simulation, based on mechanism kinematics, is presented. The paper describes a set of methodologies combining different techniques, such as structure normalization processes, simulation algorithms, and secondary structure detection procedures. The combination of all these procedures makes it possible to obtain kinematic morphs of proteins with a very good computational cost-error trade-off, while maintaining the biological meaning of the obtained structures and the kinematic viability of the obtained motion. Conclusions: The procedure presented in this paper implements different modules to simulate the conformational change undergone by a protein when performing its function. The combination of a main simulation procedure, assisted by a secondary structure process and a side-chain orientation strategy, yields fast and reliable simulations of protein motion.
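
A minimal sketch of the underlying kinematic idea, treating a backbone dihedral as a revolute joint and rotating all downstream atoms about the bond axis with Rodrigues' formula (illustrative only, not the methodology implemented in the paper):

    import numpy as np

    def rotate_about_bond(coords, i, j, angle_rad, downstream):
        """Treat the bond i->j as a revolute joint and rotate the downstream atoms.

        coords     : (n_atoms, 3) Cartesian coordinates
        i, j       : indices of the two atoms defining the rotation axis
        downstream : indices of the atoms kinematically "after" the joint
        """
        k = coords[j] - coords[i]
        k = k / np.linalg.norm(k)
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        # Rodrigues' rotation formula for a rotation by angle_rad about axis k.
        R = np.eye(3) + np.sin(angle_rad) * K + (1.0 - np.cos(angle_rad)) * (K @ K)
        out = coords.copy()
        out[downstream] = (coords[downstream] - coords[j]) @ R.T + coords[j]
        return out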

Relevance: 30.00%

Abstract:

Nowadays, in-lab train control simulation tools play a crucial role in reducing extensive and expensive on-site railway testing activities. In this paper, we present our contribution to this arena by detailing the internals of our European Railway Traffic Management System (ERTMS) in-lab demonstrator. The demonstrator is built on a general-purpose simulation framework, Riverbed Modeler (previously OPNET Modeler). Our framework models both ERTMS subsystems: the Automatic Train Protection application layer, based on movement-authority message exchange, and the telecommunication subsystem, based on GSM-R communication technology. We provide detailed information on our modelling strategy and validate the simulation framework with real trace data. To conclude, given the current industry migration from the legacy, increasingly obsolete GSM-R to IP-based heterogeneous technologies, our simulation framework represents a singular tool for railway operators. As an example, we present the assessment of related performance indicators for a specific railway network using a candidate replacement technology, LTE, versus the current legacy technology. To the best of our knowledge, there is no similar initiative able to measure the impact of the telecommunication subsystem on railway network availability.

Relevance: 30.00%

Abstract:

Paired-tow calibration studies provide information on changes in survey catchability that may occur because of a necessary change in protocols (e.g., a change in vessel or vessel gear) in a fish stock survey. This information is important for ensuring the continuity of the annual time series of survey indices of stock size that provide the basis for fish stock assessments. Several statistical models are used to analyze the paired-catch data from calibration studies. Our main contributions are results from simulation experiments designed to measure the accuracy of statistical inferences derived from some of these models. Our results show that a model commonly used to analyze calibration data can provide unreliable statistical results when there is between-tow spatial variation in stock densities at each paired-tow site. However, a generalized linear mixed-effects model gave very reliable results over a wide range of spatial variation in densities, and we recommend it for the analysis of paired-tow survey calibration data. This conclusion also applies when there is between-tow variation in catchability.
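
A hedged sketch of the recommended kind of analysis, fitting a mixed-effects model with a fixed vessel effect and a random tow-pair (site) effect to simulated paired-tow catches; the data-generation scheme, the log-normal error, and the use of statsmodels' MixedLM are assumptions, not the models evaluated in the paper:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical paired-tow data: each site is fished once by the old and once
    # by the new vessel; between-site density variation is log-normal.
    rng = np.random.default_rng(2)
    n_sites = 80
    density = rng.lognormal(mean=2.0, sigma=1.0, size=n_sites)
    rows = []
    for site, dens in enumerate(density):
        for vessel, rel_q in [("old", 1.0), ("new", 0.7)]:      # true calibration = 0.7
            catch = dens * rel_q * rng.lognormal(sigma=0.3)     # within-pair noise
            rows.append({"site": site, "vessel": vessel, "log_catch": np.log(catch)})
    df = pd.DataFrame(rows)

    # Mixed model: fixed vessel effect, random intercept for each tow pair (site).
    result = smf.mixedlm("log_catch ~ vessel", df, groups=df["site"]).fit()
    print(result.summary())
    print("estimated log calibration factor:", result.params["vessel[T.old]"])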

Relevance: 30.00%

Abstract:

A generalized Bayesian population dynamics model was developed for the analysis of historical mark-recapture studies. The Bayesian approach builds upon existing maximum likelihood methods and is useful when substantial uncertainties exist in the data or when little information is available about auxiliary parameters such as tag loss and reporting rates. Posterior distributions of movement rates are obtained through Markov chain Monte Carlo (MCMC) simulation and are suitable for use as input in subsequent stock assessment analyses. The mark-recapture model was applied to English sole (Parophrys vetulus) off the west coast of the United States and Canada, and migration rates were estimated to be 2% per month to the north and 4% per month to the south. These posterior parameter distributions and the Bayesian framework for comparing hypotheses can guide fishery scientists in structuring the spatial and temporal complexity of future analyses of this kind. This approach could easily be generalized for application to other species and to more data-rich fishery analyses.
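
A toy Metropolis sampler for a single movement rate, with a flat prior and a binomial likelihood on where recaptured fish were recovered; this is a stand-in to show the MCMC mechanics, not the paper's model (which also handles tag loss and reporting rates), and all counts are hypothetical:

    import numpy as np

    rng = np.random.default_rng(3)

    def log_posterior(p_move, n_recaptured, n_north):
        """Flat prior on p_move; binomial likelihood for the number of recaptured
        fish (n_north out of n_recaptured) recovered in the northern area."""
        if not 0.0 < p_move < 1.0:
            return -np.inf
        return n_north * np.log(p_move) + (n_recaptured - n_north) * np.log(1.0 - p_move)

    def metropolis(n_recaptured=400, n_north=9, n_iter=20000, step=0.02):
        samples = np.empty(n_iter)
        p = 0.1
        lp = log_posterior(p, n_recaptured, n_north)
        for i in range(n_iter):
            proposal = p + rng.normal(scale=step)
            lp_prop = log_posterior(proposal, n_recaptured, n_north)
            if np.log(rng.random()) < lp_prop - lp:      # Metropolis acceptance rule
                p, lp = proposal, lp_prop
            samples[i] = p
        return samples[n_iter // 2:]                     # discard the first half as burn-in

    posterior = metropolis()
    print("posterior mean movement rate:", posterior.mean())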

Relevance: 30.00%

Abstract:

Abundance indices derived from fishery-independent surveys typically exhibit much higher interannual variability than is consistent with the within-survey variance or the life history of a species. This extra variability is essentially observation noise (i.e. measurement error); it probably reflects environmentally driven factors that affect catchability over time. Unfortunately, high observation noise reduces the ability to detect important changes in the underlying population abundance. In our study, a noise-reduction technique for uncorrelated observation noise that is based on autoregressive integrated moving average (ARIMA) time series modeling is investigated. The approach is applied to 18 time series of finfish abundance, which were derived from trawl survey data from the U.S. northeast continental shelf. Although the a priori assumption of a random-walk-plus-uncorrelated-noise model generally yielded a smoothed result that is pleasing to the eye, we recommend that the most appropriate ARIMA model be identified for the observed time series if the smoothed time series will be used for further analysis of the population dynamics of a species.
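
A minimal sketch of the recommended workflow, identifying an ARIMA model by AIC for a synthetic random-walk-plus-noise index and using the fitted one-step-ahead predictions as a simple stand-in for the noise-reduced series (statsmodels assumed; not the authors' analysis or data):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(4)

    # Hypothetical survey index: a random-walk "true" log abundance observed
    # with substantial uncorrelated noise.
    true_log_abundance = np.cumsum(rng.normal(scale=0.1, size=30))
    observed = true_log_abundance + rng.normal(scale=0.4, size=30)

    # Identify a reasonable ARIMA model by AIC rather than assuming the
    # random-walk-plus-noise form, ARIMA(0,1,1), a priori.
    candidates = [(0, 1, 1), (1, 1, 0), (1, 1, 1), (0, 1, 2)]
    fits = {order: ARIMA(observed, order=order).fit() for order in candidates}
    best_order = min(fits, key=lambda o: fits[o].aic)

    # One-step-ahead predictions from the selected model serve here as a simple
    # stand-in for the noise-reduced index.
    denoised = fits[best_order].fittedvalues
    print("selected ARIMA order:", best_order)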
