980 results for EFFICIENT SIMULATION


Relevance: 60.00%

Abstract:

We describe a real-time system that supports the design of optimal flight paths over terrains. These paths either maximize view coverage or minimize vehicle exposure to the ground. A volume-rendered display of multi-viewpoint visibility and a haptic interface assist the user in selecting, assessing, and refining the computed flight path. We design a three-dimensional scalar field representing the visibility of a point above the terrain, describe an efficient algorithm to compute the visibility field, and develop visual and haptic schemes to interact with it. Given the origin and destination, the desired flight path is computed using an efficient simulation of an articulated rope under the influence of the visibility gradient. The simulation framework also accepts user input via the haptic interface, thereby allowing manual refinement of the flight path.
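The rope-based path computation lends itself to a compact illustration. The following is a minimal 2-D sketch, not the authors' system: the Gaussian visibility field, node count, and step sizes are all assumed for illustration. Interior nodes of a discretized "rope" between fixed endpoints descend the visibility gradient while spring forces keep the path connected.

```python
import math

def visibility(x, y):
    # Hypothetical 2-D stand-in for the 3-D visibility field:
    # higher values mean more exposed to the ground.
    return math.exp(-((x - 5.0) ** 2 + (y - 5.5) ** 2) / 4.0)

def grad_visibility(x, y, h=1e-4):
    # Central-difference gradient of the visibility field.
    gx = (visibility(x + h, y) - visibility(x - h, y)) / (2 * h)
    gy = (visibility(x, y + h) - visibility(x, y - h)) / (2 * h)
    return gx, gy

def relax_rope(start, end, n=21, steps=2000, dt=0.05, k=1.0):
    # Nodes of the rope descend the visibility gradient while a
    # spring force pulls each node toward its neighbours' midpoint.
    pts = [(start[0] + (end[0] - start[0]) * i / (n - 1),
            start[1] + (end[1] - start[1]) * i / (n - 1)) for i in range(n)]
    for _ in range(steps):
        new = [pts[0]]
        for i in range(1, n - 1):
            x, y = pts[i]
            gx, gy = grad_visibility(x, y)
            mx = (pts[i - 1][0] + pts[i + 1][0]) / 2 - x
            my = (pts[i - 1][1] + pts[i + 1][1]) / 2 - y
            new.append((x + dt * (-gx + k * mx), y + dt * (-gy + k * my)))
        new.append(pts[-1])  # endpoints stay fixed
        pts = new
    return pts

path = relax_rope((0.0, 5.0), (10.0, 5.0))
```

With the visibility peak placed just above the straight-line path, the relaxed rope bows away from the exposed region while its endpoints remain pinned.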

Relevance: 60.00%

Abstract:

Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.

At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.

In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.

In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction.

In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates and a second rate for errors in the distilled states, which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and on how quickly states converge to that limit.
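The Chapter 3 idea — that a code tailored to biased noise buys extra protection — can be illustrated with the simplest possible example. This Monte Carlo sketch is not a code from the thesis; the error rate and trial count are assumed. It shows a three-qubit phase-flip repetition code suppressing pure dephasing noise from first order to second order:

```python
import random

def logical_error_rate(pz, trials=20000, seed=1):
    # Three-qubit phase-flip repetition code: each qubit suffers an
    # independent Z (dephasing) error with probability pz; majority
    # vote after syndrome decoding fails when two or more qubits
    # are hit, so the logical rate is ~3*pz^2 for small pz.
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        errors = sum(rng.random() < pz for _ in range(3))
        if errors >= 2:
            failures += 1
    return failures / trials

pz = 0.05                      # assumed physical dephasing rate
p_logical = logical_error_rate(pz)
```

For pz = 0.05 the logical rate lands near 3·pz²(1−pz) + pz³ ≈ 0.007, well below the physical rate — but the code offers no protection at all against bit flips, which is exactly why it only pays off when the noise is strongly biased.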

Relevance: 60.00%

Abstract:

Numerical modeling of groundwater is very important for understanding groundwater flow and solving hydrogeological problems. Today, groundwater studies require massive numbers of model cells and high calculation accuracy, which are beyond a single-CPU computer's capabilities. With the development of high-performance parallel computing technologies, applying parallel computing methods to numerical modeling of groundwater flow has become necessary and important. Parallel computing can improve the ability to resolve various hydrogeological and environmental problems. In this study, parallel computing methods on the two main types of modern parallel computer architecture, shared memory parallel systems and distributed shared memory parallel systems, are discussed. OpenMP and MPI (PETSc) are both used to parallelize the most widely used groundwater simulator, MODFLOW. Two parallel solvers, P-PCG and P-MODFLOW, were developed for MODFLOW. The parallelized MODFLOW was used to simulate regional groundwater flow in Beishan, Gansu Province, a potential high-level radioactive waste geological disposal area in China.

1. The OpenMP programming paradigm was used to parallelize the PCG (preconditioned conjugate-gradient) solver, one of the main solvers for MODFLOW. The parallel PCG solver, P-PCG, was verified on an 8-processor computer. Both the impact of compilers and different model domain sizes were considered in the numerical experiments. The largest test model has 1000 columns, 1000 rows and 1000 layers. Based on the timing results, execution times using the P-PCG solver are typically about 1.40 to 5.31 times faster than those using the serial one. In addition, the simulation results are exactly the same as those of the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelization approach reduces software maintenance costs because only a single-source PCG solver code needs to be maintained in the MODFLOW source tree.

2. P-MODFLOW, a domain decomposition–based model implemented in a parallel computing environment, was developed, which allows efficient simulation of regional-scale groundwater flow. The basic approach partitions a large model domain into any number of sub-domains, and parallel processors are used to solve the model equations within each sub-domain. Using domain decomposition to implement MODFLOW on distributed shared memory parallel computing systems extends its application to the most widely used cluster systems, so that a large-scale simulation can take full advantage of hundreds or even thousands of parallel processors. P-MODFLOW shows good parallel performance, with a maximum speedup of 18.32 (14 processors). Superlinear speedups were achieved in the parallel tests, indicating the efficiency and scalability of the code. Parallel program design, load balancing and full use of PETSc were considered to achieve a highly efficient parallel program.

3. The characterization of a regional groundwater flow system is very important for high-level radioactive waste geological disposal. The Beishan area, located in northwestern Gansu Province, China, has been selected as a potential site for a disposal repository. The area covers about 80,000 km² and has complicated hydrogeological conditions, which greatly increase the computational effort of regional groundwater flow models. In order to reduce computing time, a parallel computing scheme was applied to regional groundwater flow modeling. Models with over 10 million cells were used to simulate how faults and different recharge conditions affect the regional groundwater flow pattern. The results of this study provide regional groundwater flow information for the site characterization of the potential high-level radioactive waste disposal area.
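The domain-decomposition idea behind P-MODFLOW can be conveyed with a toy example, which is emphatically not the P-MODFLOW code: a 1-D steady groundwater-head (Laplace) equation is split into two sub-domains that perform Jacobi sweeps independently, each reading the other's boundary ("halo") values from the previous sweep — the role MPI halo exchange plays in the real solver. All sizes and boundary heads are assumed.

```python
def solve_heads(n=41, h_left=10.0, h_right=2.0, sweeps=20000):
    # Steady 1-D groundwater head: h'' = 0 with fixed heads at both
    # ends, discretized on n nodes; the exact solution is linear.
    h = [h_left] + [0.0] * (n - 2) + [h_right]
    half = n // 2
    for _ in range(sweeps):
        new = h[:]
        # Sub-domain 1 updates nodes 1..half-1, sub-domain 2 the
        # rest; both read only the previous sweep's values, so the
        # two loops could run on separate processors.
        for i in range(1, half):
            new[i] = 0.5 * (h[i - 1] + h[i + 1])
        for i in range(half, n - 1):
            new[i] = 0.5 * (h[i - 1] + h[i + 1])
        h = new
    return h

heads = solve_heads()
```

Because each sub-domain only needs one layer of neighbouring values per sweep, communication stays proportional to the interface size rather than the domain size — the property that lets the real code scale to many processors.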

Relevance: 60.00%

Abstract:

The voltammetry for the reduction of 2-nitrotoluene at a gold microdisk electrode is reported in two ionic liquids: trihexyltetradecylphosphonium tris(pentafluoroethyl)trifluorophosphate ([P-14,P-6,P-6,P-6][FAP]) and 1-ethyl-3-methylimidazolium bis[(trifluoromethyl)sulfonyl]imide ([Emim][NTf2]). The reduction of nitrocyclopentane (NCP) and 1-nitrobutane (BuN) was investigated using voltammetry at a gold microdisk electrode in the ionic liquid [P-14,P-6,P-6,P-6][FAP]. Simulated voltammograms, generated using Butler-Volmer theory and symmetric Marcus-Hush theory, were compared to experimental data, with both theories parametrizing the data similarly well. An experimental value for the Marcusian parameter, λ, was also determined in all cases. For the reduction of 2-nitrotoluene, this was 0.5 +/- 0.1 eV in both solvents, while for NCP and BuN in [P-14,P-6,P-6,P-6][FAP], it was 2 +/- 0.1 and 5 +/- 0.1 eV, respectively. This is attributed to the localization of charge on the nitro group and the primary nitroalkyls' increased interaction with the environment, resulting in a larger reorganization energy.
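The comparison between the two kinetic theories can be sketched numerically. In the snippet below all rate constants are assumed, and the simple classical Marcus expression stands in for the full symmetric Marcus-Hush integral used in the paper. It shows the characteristic behaviour: the theories nearly coincide at small overpotential, but Butler-Volmer predicts ever-growing rates at large driving force while the Marcus rate curls over.

```python
import math

KT = 0.0257  # thermal energy in eV at 298 K

def k_butler_volmer(eta, k0=1.0, alpha=0.5):
    # Reductive branch of Butler-Volmer kinetics (eta in volts,
    # negative eta drives the reduction).
    return k0 * math.exp(-alpha * eta / KT)

def k_marcus(eta, k0=1.0, lam=0.5):
    # Classical Marcus expression for the reductive rate constant,
    # normalised so k(0) = k0.  lam is the reorganization energy in
    # eV (0.5 eV, roughly the value reported here for 2-NT).
    dg_act = (lam + eta) ** 2 / (4 * lam)   # activation energy
    dg0 = lam / 4                           # its value at eta = 0
    return k0 * math.exp(-(dg_act - dg0) / KT)
```

Near equilibrium (|η| ≪ λ) the Marcus exponent reduces to the Butler-Volmer one with α ≈ 1/2, which is why both theories can parametrize modest-overpotential voltammetry similarly well.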

Relevance: 60.00%

Abstract:

Asymmetric Marcus-Hush (AMH) theory is applied for the first time in ionic solvents to model the voltammetric reduction of oxygen in 1-butyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide and of 2-nitrotoluene (2-NT), nitrocyclopentane (NCP), and 1-nitrobutane (BuN) in trihexyltetradecylphosphonium tris(pentafluoroethyl)trifluorophosphate on a gold microdisc electrode. An asymmetry parameter, γ, was estimated for all systems: -0.4 for the reduction of oxygen and -0.05, 0.25, and 0 +/- 0.05 for the reductions of 2-NT, NCP, and BuN, respectively. This suggests equal force constants of reactants and products for 2-NT and BuN, and unequal force constants for oxygen and NCP: the force constants of the oxidized species are greater than those of the reduced species for oxygen, and smaller for NCP. Previously measured values of α, the Butler-Volmer transfer coefficient, reflect this in each case. Where appreciable asymmetry occurs, AMH theory was seen to parametrize the experimental data better than either Butler-Volmer or symmetric Marcus-Hush theory, additionally allowing extraction of the reorganization energy. This is the first study to provide key physical insights into electrochemical systems in room-temperature ionic liquids using AMH theory, allowing elucidation of the reorganization energies and the relative force constants of the reactants and products in each reaction.
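What unequal force constants do to the kinetics can be pictured with crossing free-energy parabolas of unequal curvature. This is a textbook-style illustration, not the paper's AMH parameterization; all force constants and displacements are assumed. It computes the activation energy from the parabola crossing and shows that a stiffer product parabola shifts the barrier away from the symmetric-Marcus value of λ/4.

```python
def crossing_barrier(k_r, k_p, q0=1.0, dg0=0.0):
    # Reactant parabola: G_R(q) = k_r * q^2 / 2
    # Product parabola:  G_P(q) = k_p * (q - q0)^2 / 2 + dg0
    # Solve G_R(q) = G_P(q) for the crossing point q*, then return
    # the activation energy G_R(q*).
    a = (k_r - k_p) / 2
    b = k_p * q0
    c = -k_p * q0 ** 2 / 2 - dg0
    if abs(a) < 1e-12:               # equal curvatures: linear case
        q_star = -c / b
    else:
        disc = (b * b - 4 * a * c) ** 0.5
        q1 = (-b + disc) / (2 * a)
        q2 = (-b - disc) / (2 * a)
        # take the crossing between the two minima when it exists
        q_star = min((q for q in (q1, q2) if 0 <= q <= q0),
                     default=min(q1, q2, key=abs))
    return k_r * q_star ** 2 / 2

barrier_symmetric = crossing_barrier(1.0, 1.0)   # = lambda/4 = 0.125
barrier_stiff_product = crossing_barrier(1.0, 2.0)
```

With k_r = k_p the crossing sits at q0/2 and the barrier is exactly λ/4; making the product stiffer moves the crossing and raises the barrier, which is the kind of departure the γ parameter captures.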

Relevance: 60.00%

Abstract:

Nature offers an impressive number of semi-transparent materials such as marble, jade, or skin, as well as several liquids like milk or juice. Whether for film or interactive entertainment, the interest in synthesizing images of this type of material remains very high. Although several methods convincingly simulate the diffusion of light inside semi-transparent materials, few of them do so interactively. This thesis presents a new method for diffusing light inside heterogeneous semi-transparent objects in real time. The core of the method relies on a discretization of the geometric model into voxels, which are used as a simplification of the diffusion domain. Our technique relies on solving the diffusion equation with iterative methods, yielding a fast and efficient simulation. Our method stands out mainly for its fully dynamic execution, requiring no precomputation and allowing complete deformation of the geometry.
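The core loop — iterative solution of the diffusion equation on a voxelized domain — can be sketched in one dimension. This toy is not the thesis' implementation; the absorption coefficients, source placement, and grid size are all assumed. It relaxes a steady diffusion equation on a heterogeneous row of voxels with Jacobi iterations:

```python
def diffuse_light(n=30, iters=5000):
    # Steady diffusion approximation on a 1-D row of "voxels":
    #   -d*(phi[i-1] - 2*phi[i] + phi[i+1]) + sigma[i]*phi[i] = src[i]
    # Heterogeneous absorption: the right half of the domain is four
    # times more absorbing than the left half (coefficients assumed).
    d = 1.0
    sigma = [0.1 if i < n // 2 else 0.4 for i in range(n)]
    src = [0.0] * n
    src[n // 4] = 1.0            # a light source in the left half
    phi = [0.0] * n
    for _ in range(iters):
        new = phi[:]
        for i in range(n):
            left = phi[i - 1] if i > 0 else 0.0      # dark boundary
            right = phi[i + 1] if i < n - 1 else 0.0  # dark boundary
            new[i] = (src[i] + d * (left + right)) / (2 * d + sigma[i])
        phi = new
    return phi

fluence = diffuse_light()
```

Because each sweep only reads neighbouring voxels, the same relaxation maps directly onto a GPU over a full 3-D voxel grid, which is what makes the interactive, precomputation-free setting plausible.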

Relevance: 60.00%

Abstract:

Among many structural health monitoring (SHM) methods, the guided wave (GW) based method has been found to be an effective and efficient way to detect incipient damage. In comparison with other widely used SHM methods, guided waves can propagate over a relatively long range and are sensitive to small damage. Proper use of this technique requires good knowledge of the effects of damage on the wave characteristics, which in turn needs accurate and computationally efficient modeling of guided wave propagation in structures. A number of different numerical techniques have been developed for the analysis of wave propagation in a structure. Among them, the Spectral Element Method (SEM) has been proposed as an efficient simulation technique. This paper focuses on the application of the GW method and SEM in structural health monitoring. GW experiments on several typical structures are introduced first; then, modeling techniques using SEM are discussed. © (2014) Trans Tech Publications, Switzerland.

Relevance: 60.00%

Abstract:

Genomic alterations have been linked to the development and progression of cancer. The technique of Comparative Genomic Hybridization (CGH) yields data consisting of fluorescence intensity ratios of test and reference DNA samples. The intensity ratios provide information about the number of copies in DNA. Practical issues such as the contamination of tumor cells in tissue specimens and normalization errors necessitate the use of statistics for learning about the genomic alterations from array-CGH data. As increasing amounts of array CGH data become available, there is a growing need for automated algorithms for characterizing genomic profiles. Specifically, there is a need for algorithms that can identify gains and losses in the number of copies based on statistical considerations, rather than merely detect trends in the data. We adopt a Bayesian approach, relying on the hidden Markov model to account for the inherent dependence in the intensity ratios. Posterior inferences are made about gains and losses in copy number. Localized amplifications (associated with oncogene mutations) and deletions (associated with mutations of tumor suppressors) are identified using posterior probabilities. Global trends such as extended regions of altered copy number are detected. Since the posterior distribution is analytically intractable, we implement a Metropolis-within-Gibbs algorithm for efficient simulation-based inference. Publicly available data on pancreatic adenocarcinoma, glioblastoma multiforme and breast cancer are analyzed, and comparisons are made with some widely-used algorithms to illustrate the reliability and success of the technique.
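A stripped-down version of the Metropolis-within-Gibbs idea can be shown on a two-state toy model — not the paper's hidden Markov model; the data, noise level, and proposal width here are all assumed. Latent gain/normal labels are Gibbs-sampled from their exact full conditionals, while the gain mean is updated with a random-walk Metropolis step:

```python
import math
import random

def metropolis_within_gibbs(y, sigma=0.3, iters=1200, burn=300, seed=7):
    # Toy model: y_i ~ Normal(mu * z_i, sigma), z_i in {0, 1} with a
    # flat prior on mu and p(z_i = 1) = 1/2 a priori.
    rng = random.Random(seed)
    mu = 0.5
    z = [0] * len(y)
    mu_samples = []
    for it in range(iters):
        # Gibbs step: each label from its exact full conditional.
        for i, yi in enumerate(y):
            l0 = math.exp(-yi ** 2 / (2 * sigma ** 2))
            l1 = math.exp(-(yi - mu) ** 2 / (2 * sigma ** 2))
            z[i] = 1 if rng.random() < l1 / (l0 + l1) else 0
        # Metropolis step: random-walk proposal for mu.
        def loglik(m):
            return sum(-(yi - m * zi) ** 2 / (2 * sigma ** 2)
                       for yi, zi in zip(y, z))
        prop = mu + rng.gauss(0.0, 0.1)
        if math.log(rng.random()) < loglik(prop) - loglik(mu):
            mu = prop
        if it >= burn:
            mu_samples.append(mu)
    return sum(mu_samples) / len(mu_samples)

data_rng = random.Random(0)
data = [data_rng.gauss(1.0, 0.3) for _ in range(100)] + \
       [data_rng.gauss(0.0, 0.3) for _ in range(100)]
mu_hat = metropolis_within_gibbs(data)
```

The mixed update is the point: conjugate pieces (the labels) get exact Gibbs draws, while the analytically awkward piece (here the mean, in the paper the intractable parameters) gets a Metropolis step inside the same sweep.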

Relevance: 60.00%

Abstract:

We describe four recent additions to NEURON's suite of graphical tools that make it easier for users to create and manage models: an enhancement to the Channel Builder that facilitates the specification and efficient simulation of stochastic channel models
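A stochastic channel model of the kind such a tool targets can be sketched generically — this is not NEURON code, but a hypothetical two-state channel with assumed rates, simulated as an ensemble of independent Markov chains with fixed-timestep transitions:

```python
import random

def simulate_channels(n=200, alpha=4.0, beta=1.0, dt=0.001,
                      steps=5000, seed=3):
    # n identical two-state channels; per-step transition
    # probabilities alpha*dt (closed -> open) and beta*dt
    # (open -> closed).  All rates (1/s) are assumed.
    rng = random.Random(seed)
    is_open = [False] * n
    frac = []
    for _ in range(steps):
        for i in range(n):
            if is_open[i]:
                if rng.random() < beta * dt:
                    is_open[i] = False
            elif rng.random() < alpha * dt:
                is_open[i] = True
        frac.append(sum(is_open) / n)
    # Average open fraction over the second half (near steady state).
    return sum(frac[steps // 2:]) / (steps // 2)

p_open = simulate_channels()
```

The ensemble open fraction fluctuates around the steady-state value α/(α+β) = 0.8 here; it is exactly this channel-level noise, absent from deterministic Hodgkin-Huxley-style rate equations, that stochastic channel models capture.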

Relevance: 60.00%

Abstract:

This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables, whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsizes are used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the numbers of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and a significant improvement in efficiency over existing approaches. (C) 2004 American Institute of Physics.
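The defining property — sampled reaction counts that can never exceed the available molecules — is easy to demonstrate for a single decay channel. This sketch implements only that simplest single-channel case, not the paper's full method, and all parameters are assumed:

```python
import math
import random

def binomial_leap_decay(x0=1000, c=1.0, tau=0.1, t_end=5.0, seed=11):
    # Decay reaction A -> (nothing) with rate constant c.  In each
    # leap of length tau the number of firings is Binomial(x, p)
    # with p = 1 - exp(-c*tau), so the sampled reaction count can
    # never exceed the current molecule count x -- a Poisson leap
    # with mean c*x*tau offers no such guarantee.
    rng = random.Random(seed)
    x = x0
    p = 1.0 - math.exp(-c * tau)
    for _ in range(int(round(t_end / tau))):
        fired = sum(rng.random() < p for _ in range(x))  # Binomial(x, p)
        x -= fired
    return x

x_final = binomial_leap_decay()
```

Because each surviving molecule independently survives a leap with probability exp(−cτ), the population stays non-negative by construction and tracks the expected exponential decay x0·exp(−ct).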

Relevance: 60.00%

Abstract:

This paper gives a review of recent progress in the design of numerical methods for computing the trajectories (sample paths) of solutions to stochastic differential equations. We give a brief survey of the area focusing on a number of application areas where approximations to strong solutions are important, with a particular focus on computational biology applications, and give the necessary analytical tools for understanding some of the important concepts associated with stochastic processes. We present the stochastic Taylor series expansion as the fundamental mechanism for constructing effective numerical methods, give general results that relate local and global order of convergence and mention the Magnus expansion as a mechanism for designing methods that preserve the underlying structure of the problem. We also present various classes of explicit and implicit methods for strong solutions, based on the underlying structure of the problem. Finally, we discuss implementation issues relating to maintaining the Brownian path, efficient simulation of stochastic integrals and variable-step-size implementations based on various types of control.
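The simplest method arising from the stochastic Taylor expansion is Euler-Maruyama. A minimal sketch (all parameters assumed) applied to geometric Brownian motion, whose exact solution on the same Brownian path is known in closed form, illustrates the strong-approximation viewpoint:

```python
import math
import random

def euler_maruyama_gbm(mu=0.05, sigma=0.2, x0=1.0, t_end=1.0,
                       n=2000, seed=5):
    # dX = mu*X dt + sigma*X dW.  The exact solution on the same
    # Brownian path is x0 * exp((mu - sigma^2/2) t + sigma * W_t),
    # so the pathwise (strong) error can be measured directly.
    rng = random.Random(seed)
    dt = t_end / n
    x, w = x0, 0.0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += mu * x * dt + sigma * x * dw   # Euler-Maruyama step
        w += dw                             # maintain the Brownian path
    exact = x0 * math.exp((mu - 0.5 * sigma ** 2) * t_end + sigma * w)
    return x, exact

approx, exact = euler_maruyama_gbm()
```

Keeping the accumulated Brownian increments (`w`) is the key discipline: strong convergence is measured against the solution driven by the same path, and halving the step size must reuse or refine that path rather than draw a new one.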

Relevance: 60.00%

Abstract:

In this paper we propose a fast adaptive Importance Sampling method for the efficient simulation of buffer overflow probabilities in queueing networks. The method comprises three stages. First we estimate the minimum Cross-Entropy tilting parameter for a small buffer level; next, we use this as a starting value for the estimation of the optimal tilting parameter for the actual (large) buffer level; finally, the tilting parameter just found is used to estimate the overflow probability of interest. We recognize three distinct properties of the method which together explain why the method works well; we conjecture that they hold for quite general queueing networks. Numerical results support this conjecture and demonstrate the high efficiency of the proposed algorithm.
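The flavor of the three-stage scheme can be conveyed on a simpler rare event than a queueing network: P(X1 + … + Xn > γ) for i.i.d. Exp(1) variables, for which the exact answer is a Gamma tail. All levels, sample sizes, and the exponential tilting family are assumptions of this sketch, not the paper's setup. The tilt parameter is updated stage by stage from elite samples, then the final tilt drives the importance-sampling estimate:

```python
import math
import random

def overflow_probability(n=10, gamma=30.0, rho=0.1,
                         n_pilot=2000, n_final=20000, seed=13):
    # Multi-stage Cross-Entropy importance sampling for
    # P(X1 + ... + Xn > gamma), Xi ~ Exp(1).  Proposals are
    # exponentials with mean v; the likelihood ratio of a sample
    # with sum s is v**n * exp(-s * (1 - 1/v)).
    rng = random.Random(seed)

    def sample_sum(v):
        return sum(rng.expovariate(1.0 / v) for _ in range(n))

    v, level = 1.0, 0.0
    while level < gamma:
        sums = sorted(sample_sum(v) for _ in range(n_pilot))
        level = min(gamma, sums[int((1 - rho) * n_pilot)])
        elite = [s for s in sums if s >= level]
        # CE update for the exponential family: weighted elite mean.
        w = [v ** n * math.exp(-s * (1 - 1.0 / v)) for s in elite]
        v = sum(wi * s for wi, s in zip(w, elite)) / (n * sum(w))
    est = 0.0
    for _ in range(n_final):
        s = sample_sum(v)
        if s > gamma:
            est += v ** n * math.exp(-s * (1 - 1.0 / v))
    return est / n_final

p_hat = overflow_probability()
```

The staging mirrors the paper's recipe: a cheap pilot run tilts toward an intermediate level, the tilt is refined until the true level is reachable, and only then is the estimator run at full sample size.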

Relevance: 40.00%

Abstract:

In irrigated cropping, as with any other industry, profit and risk are inter-dependent. An increase in profit would normally coincide with an increase in risk, and this means that risk can be traded for profit. It is desirable to manage a farm so that it achieves the maximum possible profit for the desired level of risk. This paper identifies risk-efficient cropping strategies that allocate land and water between crop enterprises for a case study of an irrigated farm in Southern Queensland, Australia. This is achieved by applying stochastic frontier analysis to the output of a simulation experiment. The simulation experiment involved changes to the levels of business risk by systematically varying the crop sowing rules in a bioeconomic model of the case study farm. This model utilises the multi-field capability of the process-based Agricultural Production Systems Simulator (APSIM) and is parameterised using data collected from interviews with a collaborating farmer. We found sowing rules that increased the farm area sown to cotton caused the greatest increase in risk-efficiency. Increasing maize area also improved risk-efficiency, but to a lesser extent than cotton. Sowing rules that increased the areas sown to wheat reduced the risk-efficiency of the farm business. Sowing rules were identified that had the potential to improve the expected farm profit by ca. $50,000 annually, without significantly increasing risk. The concept of the shadow price of risk is discussed, and an expression is derived from the estimated frontier equation that quantifies the trade-off between profit and risk.

Relevance: 40.00%

Abstract:

This paper presents a computationally efficient model for a dc-dc boost converter, which is valid for continuous and discontinuous conduction modes; the model also incorporates significant non-idealities of the converter. Simulation of the dc-dc boost converter using an average model provides practically all the details that are available from simulation using the switching (instantaneous) model, except for the quantum of ripple in currents and voltages. A harmonic model of the converter can be used to evaluate the ripple quantities. This paper proposes a combined (average-cum-harmonic) model of the boost converter. The accuracy of the combined model is validated through extensive simulations and experiments. A quantitative comparison of the computation times of the average, combined and switching models is presented. The combined model is shown to be more computationally efficient than the switching model for simulation of transient and steady-state responses of the converter under various conditions.
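The averaged model that the combined approach builds on can be sketched for continuous conduction mode. All component values below are assumed, only inductor resistance is included among the non-idealities, and the harmonic (ripple) part of the paper's combined model is omitted. The state-space averaged equations are integrated with forward Euler:

```python
def simulate_boost_average(v_in=12.0, duty=0.5, L=100e-6, C=470e-6,
                           r_load=10.0, r_L=0.1, dt=1e-6, t_end=0.05):
    # State-space averaged boost converter model in continuous
    # conduction mode, with inductor series resistance r_L:
    #   L  diL/dt = v_in - r_L*iL - (1 - D)*vC
    #   C  dvC/dt = (1 - D)*iL - vC/r_load
    iL, vC = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        diL = (v_in - r_L * iL - (1.0 - duty) * vC) / L
        dvC = ((1.0 - duty) * iL - vC / r_load) / C
        iL += dt * diL
        vC += dt * dvC
    return iL, vC

i_L, v_out = simulate_boost_average()
```

The ideal boost relation gives v_in/(1−D) = 24 V; with r_L included the model settles near 23.1 V, illustrating how the average model captures the dc behaviour (including loss effects) while, by construction, showing no switching ripple at all.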

Relevance: 40.00%

Abstract:

Single fluid schemes that rely on an interface function for phase identification in multicomponent compressible flows are widely used to study hydrodynamic flow phenomena in several diverse applications. Simulations based on standard numerical implementation of these schemes suffer from an artificial increase in the width of the interface function owing to the numerical dissipation introduced by an upwind discretization of the governing equations. In addition, monotonicity requirements which ensure that the sharp interface function remains bounded at all times necessitate use of low-order accurate discretization strategies. This results in a significant reduction in accuracy along with a loss of intricate flow features. In this paper we develop a nonlinear transformation based interface capturing method which achieves superior accuracy without compromising the simplicity, computational efficiency and robustness of the original flow solver. A nonlinear map from the signed distance function to the sigmoid type interface function is used to effectively couple a standard single fluid shock and interface capturing scheme with a high-order accurate constrained level set reinitialization method in a way that allows for oscillation-free transport of the sharp material interface. Imposition of a maximum principle, which ensures that the multidimensional preconditioned interface capturing method does not produce new maxima or minima even in the extreme events of interface merger or breakup, allows for an explicit determination of the interface thickness in terms of the grid spacing. A narrow band method is formulated in order to localize computations pertinent to the preconditioned interface capturing method. Numerical tests in one dimension reveal a significant improvement in accuracy and convergence; in stark contrast to the conventional scheme, the proposed method retains its accuracy and convergence characteristics in a shifted reference frame. 
Results from the test cases in two dimensions show that the nonlinear transformation based interface capturing method outperforms both the conventional method and an interface capturing method without nonlinear transformation in resolving intricate flow features such as sheet jetting in the shock-induced cavity collapse. The ability of the proposed method in accounting for the gravitational and surface tension forces besides compressibility is demonstrated through a model fully three-dimensional problem concerning droplet splash and formation of a crownlike feature. (C) 2014 Elsevier Inc. All rights reserved.
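The nonlinear map at the heart of the method can be illustrated directly. A common choice — the specific tanh form and the ε value here are assumptions for illustration, not necessarily the paper's exact map — sends the signed distance d to a sigmoid interface function bounded in [0, 1], with the interface thickness set by ε in terms of the grid spacing:

```python
import math

def interface_function(d, eps):
    # Map a signed distance d to a bounded sigmoid interface
    # function in [0, 1]; eps controls the interface half-thickness
    # and is tied to the grid spacing in sharp-interface schemes.
    return 0.5 * (1.0 + math.tanh(d / (2.0 * eps)))

dx = 0.01                # grid spacing (assumed)
eps = 0.75 * dx          # interface parameter (assumed choice)
samples = [interface_function((i - 50) * dx, eps) for i in range(101)]
```

Because the sigmoid saturates, the transported quantity stays bounded without a low-order monotone limiter, and advecting the signed distance (then reinitializing it) keeps the mapped interface at a fixed, grid-resolved thickness instead of smearing under numerical dissipation.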