901 results for stochastic simulation method


Relevance:

30.00%

Publisher:

Abstract:

A catastrophic red tide event occurred in the Strait of Hormuz, the Persian Gulf and the Gulf of Oman from late summer 2008 to spring 2009. The phenomenon and its devastating effects alarmed the countries bordering the Persian Gulf and the Gulf of Oman and caused considerable losses to the fishery, tourism and trade economies of the region. In a maritime cruise carried out by the Persian Gulf and Gulf of Oman Ecological Research Institute, field data including temperature, salinity, chlorophyll-a, dissolved oxygen and algal density were obtained for this research. Satellite information was received from the MODIS, MERIS and SeaWiFS sensors. Temperature and surface chlorophyll images were obtained and compared with the field data and with the output of the PROBE model. The results of the present research indicated that, with the occurrence of the harmful algal bloom (HAB), chlorophyll-a and dissolved oxygen contents increased in the surface water. Maximum algal density was observed along the northern coasts of the Strait of Hormuz, while lower algal densities were detected in deep water and in surface offshore water. Our results show that the algal bloom resulted from a drop in seawater temperature, water circulation, and environmental pollution from industrial and urban sewage entering the coastal waters of this region of the Persian Gulf. The red tide started in the Strait of Hormuz and eventually covered about 140,000 km2 of the Persian Gulf together with the entire Strait of Hormuz, persisting for 10 months, a record among algal blooms reported worldwide. The temperature and chlorophyll satellite images were consistent with the values measured in the field, indicating that satellite measurements have acceptable precision and can be used in sea monitoring and modelling.

Relevance:

30.00%

Publisher:

Abstract:

Developments in theory and experiment have raised the prospect of an electronic technology based on the discrete nature of electron tunnelling through a potential barrier. This thesis deals with novel design and analysis tools developed to study such systems. Possible devices include those constructed from ultrasmall normal tunnelling junctions. These exhibit charging effects, including the Coulomb blockade and correlated electron tunnelling. They allow transistor-like control of the transfer of single carriers and present the prospect of digital systems operating at the information-theoretic limit; as such, they are often referred to as single electronic devices. Single electronic devices exhibit self-quantising logic and good structural tolerance. Their speed, immunity to thermal noise, and operating voltage all scale beneficially with junction capacitance, and for ultrasmall junctions the possibility of room-temperature operation at sub-picosecond timescales seems feasible. However, they are sensitive to external charge, whether from trapping-detrapping events, externally gated potentials, or system cross-talk. Quantum effects such as macroscopic quantum tunnelling of charge may degrade performance. Finally, any practical system will be complex and spatially extended (amplifying the above problems), and prone to fabrication imperfections. This summarises why new design and analysis tools are required. Simulation tools are developed, concentrating on the basic building blocks of single electronic systems: the tunnelling junction array and the gated turnstile device. Three main points are considered: the best method of estimating capacitance values from the physical system geometry; the mathematical model that should represent electron tunnelling based on these data; and the application of this model to the investigation of single electronic systems. (DXN004909)
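As a hedged illustration of the kind of rate calculation such simulation tools rest on (a generic sketch of the orthodox-theory tunnelling rate, not the specific tools developed in the thesis), the Python snippet below evaluates the tunnelling rate of a candidate event given its free-energy change; the junction parameters are illustrative only.

import math

E_CHARGE = 1.602e-19   # elementary charge (C)
K_B = 1.381e-23        # Boltzmann constant (J/K)

def tunnel_rate(delta_f, r_t, temperature):
    # Orthodox-theory rate (1/s) through a junction of tunnel resistance r_t
    # (ohm), for an event whose free-energy change is delta_f (J).
    if temperature <= 0.0:
        # Zero-temperature limit: energetically unfavourable events are
        # blocked (Coulomb blockade); favourable events proceed.
        return -delta_f / (E_CHARGE**2 * r_t) if delta_f < 0 else 0.0
    x = delta_f / (K_B * temperature)
    return (-delta_f / (E_CHARGE**2 * r_t)) / (1.0 - math.exp(x))

# Example: events that lower/raise the free energy by the charging energy
# of a 1 aF junction, through a 100 kOhm tunnel resistance, at 4.2 K.
e_c = E_CHARGE**2 / (2 * 1e-18)
print(tunnel_rate(-e_c, 1e5, 4.2))   # allowed event: large rate
print(tunnel_rate(+e_c, 1e5, 4.2))   # blockaded event: strongly suppressed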

Relevance:

30.00%

Publisher:

Abstract:

The anticipated growth of air traffic worldwide requires enhanced Air Traffic Management (ATM) technologies and procedures to increase system capacity, efficiency, and resilience, while reducing environmental impact and maintaining operational safety. To deal with these challenges, new automation and information exchange capabilities are being developed through different modernisation initiatives toward a new global operational concept called Trajectory Based Operations (TBO), in which aircraft trajectory information becomes the cornerstone of advanced ATM applications. This transformation will lead to higher levels of system complexity, requiring enhanced Decision Support Tools (DST) to aid humans in decision-making processes. These will rely on accurate predicted aircraft trajectories, provided by advanced Trajectory Predictors (TP). The trajectory prediction process is subject to stochastic effects that introduce uncertainty into the predictions; regardless of the assumptions that define the aircraft motion model underpinning the TP, deviations between predicted and actual trajectories are unavoidable. This thesis proposes an innovative method to characterise the uncertainty associated with a trajectory prediction, based on the mathematical theory of Polynomial Chaos Expansions (PCE). Assuming univariate PCEs of the trajectory prediction inputs, the method describes how to generate multivariate PCEs of the prediction outputs that quantify their associated uncertainty. Arbitrary PCE (aPCE) was chosen because it allows a higher degree of flexibility in modelling input uncertainty. The obtained polynomial description can be used in subsequent prediction sensitivity analyses thanks to the relationship between polynomial coefficients and Sobol indices; the Sobol indices enable ranking the input parameters according to their influence on trajectory prediction uncertainty. The applicability of the aPCE-based uncertainty quantification detailed herein is analysed through a case study representing a typical aircraft trajectory prediction problem in ATM, in which uncertain parameters regarding aircraft performance, aircraft intent description, weather forecast, and initial conditions are considered simultaneously. Numerical results are compared to those obtained from a Monte Carlo simulation, demonstrating the advantages of the proposed method. The thesis includes two examples of DSTs (a Demand and Capacity Balancing tool, and an Arrival Manager) to illustrate the potential benefits of exploiting the proposed uncertainty quantification method.
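As a hedged sketch of the coefficient-to-Sobol-index relationship mentioned above (a generic PCE post-processing step, not the aPCE construction itself): for an orthonormal basis, the output variance is the sum of the squared non-constant coefficients, first-order indices collect the terms that depend on a single input, and total indices collect every term involving that input. The multi-indices and coefficients below are toy values.

import numpy as np

# Multi-indices (alpha_1, alpha_2) of a degree-2 PCE in two inputs and the
# corresponding coefficients in an orthonormal polynomial basis (toy values).
multi_indices = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
coeffs = np.array([10.0, 2.0, 1.0, 0.5, 0.8, 0.1])

def sobol_from_pce(multi_indices, coeffs):
    # First-order and total Sobol indices from PCE coefficients.
    n_inputs = len(multi_indices[0])
    variance = sum(c**2 for a, c in zip(multi_indices, coeffs) if any(a))
    first, total = np.zeros(n_inputs), np.zeros(n_inputs)
    for a, c in zip(multi_indices, coeffs):
        if not any(a):
            continue  # the constant term carries the mean, not variance
        for i in range(n_inputs):
            if a[i] > 0:
                total[i] += c**2
                if all(a[j] == 0 for j in range(n_inputs) if j != i):
                    first[i] += c**2
    return first / variance, total / variance

S1, ST = sobol_from_pce(multi_indices, coeffs)
print("first-order:", S1)   # ranks inputs by their individual contribution
print("total:", ST)         # also counts interaction terms such as (1, 1)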

Relevance:

30.00%

Publisher:

Abstract:

This Ph.D. thesis contains four essays in mathematical finance, focusing on pricing Asian options (Chapter 4), pricing futures and futures options (Chapters 5 and 6), and time-dependent volatility in futures options (Chapter 7). In Chapter 4, the applicability of the comonotonicity approach of Albrecher et al. (2005) is investigated in the context of various benchmark models for equities and commodities. Instead of the classical Lévy models considered by Albrecher et al. (2005), the focus is on the Heston stochastic volatility model, the constant elasticity of variance (CEV) model and the Schwartz (1997) two-factor model. It is shown that the method delivers rather tight upper bounds for the prices of Asian options in these models and, as a by-product, yields super-hedging strategies that can be easily implemented. In Chapter 5, two types of three-factor models that allow volatility to be stochastic are studied for valuing commodity futures contracts. Both models have closed-form solutions for the futures price. However, it is shown that Model 2 is better than Model 1 theoretically and also performs very well empirically; moreover, Model 2 can easily be implemented in practice. In comparison to the Schwartz (1997) two-factor model, Model 2 is shown to have its own advantages, so it is also a good choice for pricing commodity futures contracts. Furthermore, if the two models are used at the same time, a more accurate price for commodity futures contracts can be obtained in most situations. In Chapter 6, the applicability of the asymptotic approach developed in Fouque et al. (2000b) is investigated for pricing commodity futures options in a Schwartz (1997) multi-factor model featuring both stochastic convenience yield and stochastic volatility. It is shown that the zero-order term in the expansion coincides with the Schwartz (1997) two-factor term with averaged volatility, and an explicit expression for the first-order correction term is provided. With empirical data from the natural gas futures market, it is also demonstrated that a significantly better calibration can be achieved by using the correction term, compared to the standard Schwartz (1997) two-factor expression, at virtually no extra effort. In Chapter 7, a new pricing formula is derived for futures options in the Schwartz (1997) two-factor model with time-dependent spot volatility. The pricing formula can also be used to recover the time-dependent spot volatility from futures option prices observed in the market. Furthermore, the limitations of the method used to find the time-dependent spot volatility are explained, and it is shown how to verify its accuracy.
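For context, a minimal Monte Carlo baseline for the kind of payoff treated in Chapter 4 (an arithmetic-average Asian call), sketched here under plain geometric Brownian motion rather than the Heston, CEV or Schwartz (1997) dynamics studied in the thesis, and without the comonotonic upper-bound construction; all parameter values are illustrative.

import numpy as np

def asian_call_mc(s0, strike, r, sigma, maturity, n_steps, n_paths, seed=0):
    # Monte Carlo price of an arithmetic-average Asian call under GBM.
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    # Simulate log-price increments for all paths at once.
    z = rng.standard_normal((n_paths, n_steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    paths = s0 * np.exp(np.cumsum(increments, axis=1))
    average = paths.mean(axis=1)                  # arithmetic average price
    payoff = np.maximum(average - strike, 0.0)
    return np.exp(-r * maturity) * payoff.mean()

print(asian_call_mc(s0=100, strike=100, r=0.05, sigma=0.2,
                    maturity=1.0, n_steps=252, n_paths=100_000))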

Relevance:

30.00%

Publisher:

Abstract:

Master's dissertation—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Florestal, 2016.

Relevance:

30.00%

Publisher:

Abstract:

Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints, limited resources to implement savings retrofits, various suppliers in the market, and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at the lowest cost. The most common methods of implementation planning are suboptimal with respect to cost, and agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches that improve on the traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection when leveraging energy services companies or utilities; the two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2). An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3), with returns from implemented ECM projects used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk aversion method imposes a minimum on the number of projects completed in each stage, and a comparative method using Conditional Value at Risk is analyzed; time consistency is also addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach makes use of McCormick (1976) inequalities to re-express constraints that involve the product of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single congressional appropriation framework.
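The exact linearization referred to above can be seen on a single product term (a textbook sketch of the McCormick (1976) construction, not the full rolling-horizon model): for binary variables x1 and x2, the product z = x1*x2 is replaced by a new variable z subject to

    z <= x1,    z <= x2,    z >= x1 + x2 - 1,    z >= 0.

With x1 = x2 = 1 the third inequality forces z = 1; any other binary combination forces z = 0 through the first two, so the linearization is exact and its linear relaxation describes the convex hull of the feasible points.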

Relevance:

30.00%

Publisher:

Abstract:

A primary goal of this dissertation is to understand the links between mathematical models that describe crystal surfaces at three fundamental length scales: the scale of individual atoms, the scale of collections of atoms forming crystal defects, and the macroscopic scale. Characterizing connections between different classes of models is a critical task for gaining insight into the physics they describe, a long-standing objective in applied analysis, and also highly relevant in engineering applications. The key concept I use in each problem addressed in this thesis is coarse graining, which is a strategy for connecting fine representations or models with coarser representations. Often this idea is invoked to reduce a large discrete system to an appropriate continuum description, e.g. individual particles are represented by a continuous density. While there is no general theory of coarse graining, one closely related mathematical approach is asymptotic analysis, i.e. the description of limiting behavior as some parameter becomes very large or very small. In the case of crystalline solids, it is natural to consider cases where the number of particles is large or where the lattice spacing is small. Limits such as these often make explicit the nature of links between models capturing different scales and, once established, provide a means of improving our understanding, or the models themselves. Finding appropriate variables whose limits illustrate the important connections between models is no easy task, however. This is one area where computer simulation is extremely helpful, as it allows us to see the results of complex dynamics and to gather clues regarding the roles of different physical quantities. On the other hand, connections between models enable the development of novel multiscale computational schemes, so understanding can assist computation and vice versa. Some of these ideas are demonstrated in this thesis. The important outcomes of this thesis include: (1) a systematic derivation of the step-flow model of Burton, Cabrera, and Frank, with corrections, from an atomistic solid-on-solid-type model in 1+1 dimensions; (2) the inclusion of an atomistically motivated transport mechanism in an island dynamics model, allowing for a more detailed account of mound evolution; and (3) the development of a hybrid discrete-continuum scheme for simulating the relaxation of a faceted crystal mound. Central to all of these modeling and simulation efforts is the presence of steps composed of individual layers of atoms on vicinal crystal surfaces. Consequently, a recurring theme in this research is the observation that mesoscale defects play a crucial role in crystal morphological evolution.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES AND STUDY METHOD: There are two subjects in this thesis: “Lot production size for a parallel machine scheduling problem with auxiliary equipment” and “Bus holding for a simulated traffic network”. Although these two themes seem unrelated, the common thread is the optimization of complex systems. The “Lot production size for a parallel machine scheduling problem with auxiliary equipment” problem deals with a manufacturing setting where sets of pieces form finished products, and the aim is to maximize the profit of the finished products. Each piece may be processed in more than one mold, and molds must be mounted on machines with their corresponding installation setup times. The key point of our methodology is to solve the single-period lot-sizing decisions for the finished products together with the piece-mold and mold-machine assignments, relaxing the constraint that a single mold may not be used on two machines at the same time. The “Bus holding for a simulated traffic network” problem deals with one of the most troublesome problems in urban bus operations: bus bunching, which happens when two or more buses arrive at a stop nose to tail. Bus bunching reflects an unreliable service that affects transit operations by increasing passenger waiting times. This work proposes a linear mathematical programming model that establishes bus holding times at certain stops along a transit corridor to avoid bus bunching. Our approach needs real-time input, so we simulate a transit corridor and apply our mathematical model to the data generated; thus, the inherent variability of a transit system is captured by the simulation, while the optimization model takes into account the key variables and constraints of the bus operation. CONTRIBUTIONS AND CONCLUSIONS: For the “Lot production size for a parallel machine scheduling problem with auxiliary equipment”, the relaxation we propose is able to find solutions more efficiently, and our experimental results show that most of the solutions satisfy the requirement that molds are non-overlapping even if they are installed on several machines. We propose an exact integer linear programming formulation, a Relax&Fix heuristic, and a multistart greedy algorithm to solve this problem. Experimental results on instances based on real-world data show the efficiency of our approaches. The mathematical model and the algorithm for the lot production size problem presented in this research can be used by production planners to help in scheduling the manufacturing. For the “Bus holding for a simulated traffic network”, most of the literature considers quadratic models that minimize passenger waiting times, but they are harder to solve and therefore difficult to operate in real-time systems. In contrast, our methodology reduces passenger waiting times efficiently through a linear programming model, applying control interventions only every 5 minutes.
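As a toy sketch of the flavour of linear program described above (not the thesis model): choose non-negative holding times at a control stop so that consecutive departures are separated by at least a target headway while minimizing total added holding. The arrival times and headway below are invented for illustration.

import numpy as np
from scipy.optimize import linprog

arrivals = np.array([0.0, 1.0, 7.0, 8.5])   # projected arrivals (min), with bunched pairs
h_min = 4.0                                  # target minimum headway (min)
n = len(arrivals)

# Decision variables: holding times x_i >= 0. For consecutive buses require
# (arrivals[i+1] + x[i+1]) - (arrivals[i] + x[i]) >= h_min, i.e. A_ub @ x <= b_ub.
a_ub = np.zeros((n - 1, n))
b_ub = np.zeros(n - 1)
for i in range(n - 1):
    a_ub[i, i] = 1.0
    a_ub[i, i + 1] = -1.0
    b_ub[i] = arrivals[i + 1] - arrivals[i] - h_min

res = linprog(c=np.ones(n), A_ub=a_ub, b_ub=b_ub, bounds=[(0, None)] * n)
print(res.x)   # holding time (min) applied to each bus at the control stop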

Relevance:

30.00%

Publisher:

Abstract:

A classical approach for dealing with two- and multi-stage optimization problems under uncertainty is scenario analysis. To do so, the uncertainty in some of the problem data is modelled by random vectors with finite, stage-specific supports; each realization represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario-decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multi-stage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to some practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to better follow the progress of the algorithm. Numerical experiments on multi-stage stochastic linear problem instances suggest that most existing techniques may either converge prematurely to a suboptimal solution or converge to the optimal solution but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all of our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review existing techniques and propose the idea of replacing the quadratic term by a linear one. Although our method remains to be tested, we expect it to reduce some of the numerical and theoretical difficulties of the progressive hedging method.
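As a hedged sketch of the progressive hedging iteration discussed above (a toy two-stage problem with a one-dimensional first-stage variable and closed-form scenario subproblems, not the multi-stage instances of the thesis), showing where the penalty parameter rho and the quadratic proximal term enter:

import numpy as np

# Scenario subproblem: min_x 0.5 * (x - d[s])**2, with non-anticipativity
# forcing a single first-stage decision x across all scenarios.
d = np.array([2.0, 5.0, 11.0])     # scenario-dependent targets (toy data)
p = np.array([0.5, 0.3, 0.2])      # scenario probabilities
rho = 1.0                          # penalty parameter (the tuning issue above)

x = d.copy()                       # per-scenario copies of the decision
w = np.zeros_like(d)               # multipliers on non-anticipativity
for it in range(100):
    x_bar = p @ x                  # implementable (averaged) decision
    w = w + rho * (x - x_bar)      # multiplier update
    # Augmented-Lagrangian subproblem per scenario:
    #   min_x 0.5*(x - d[s])**2 + w[s]*x + (rho/2)*(x - x_bar)**2,
    # whose minimizer is available in closed form:
    x = (d - w + rho * x_bar) / (1.0 + rho)

print(p @ x)   # converges to the probability-weighted mean of d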

Relevance:

30.00%

Publisher:

Abstract:

Metaheuristics are widely used in discrete optimization. They make it possible to obtain a good-quality solution in reasonable time for problems that are large, complex and hard to solve. Metaheuristics often have many parameters that the user must tune manually for a given problem. The objective of an adaptive metaheuristic is to let the method adjust some of these parameters automatically, based on the instance being solved. By combining prior knowledge of the problem with notions from machine learning and related fields, an adaptive metaheuristic yields a more general and automatic way of solving problems. Global optimization of mining complexes aims to determine the material movements in the mines and the processing flows so as to maximize the economic value of the system. Because of the large number of integer variables in the model and the presence of complex and non-linear constraints, it often becomes prohibitive to solve these models with the optimizers available in industry; metaheuristics are therefore frequently used for the optimization of mining complexes. This thesis improves a simulated annealing procedure developed by Goodfellow & Dimitrakopoulos (2016) for the stochastic optimization of mining complexes. The method developed by those authors requires many parameters, one of which governs how the simulated annealing procedure searches the local neighbourhood of solutions. This thesis implements an adaptive neighbourhood search method to improve solution quality. Numerical results show an increase of up to 10% in the value of the economic objective function.
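A generic sketch of simulated annealing with an adaptive choice among neighbourhood moves (illustrative only; this is neither the Goodfellow & Dimitrakopoulos (2016) procedure nor the exact adaptation rule implemented in the thesis): each move type keeps a weight that grows when the move it proposes improves the solution, so the search gradually favours the neighbourhoods that are currently productive.

import math
import random

random.seed(1)

def objective(x):
    # Toy objective: distance from the target configuration (0, 1, ..., 9).
    return sum(abs(v - i) for i, v in enumerate(x))

def nudge(x):
    # Small move: shift one coordinate by +/- 1 (clipped to 0..9).
    y = list(x); i = random.randrange(len(y))
    y[i] = min(9, max(0, y[i] + random.choice((-1, 1))))
    return y

def reset(x):
    # Large move: redraw one coordinate uniformly at random.
    y = list(x); i = random.randrange(len(y))
    y[i] = random.randrange(10)
    return y

moves = [nudge, reset]
weights = [1.0, 1.0]               # adaptive neighbourhood weights
x = [random.randrange(10) for _ in range(10)]
fx, temp = objective(x), 5.0

for it in range(5000):
    k = random.choices(range(len(moves)), weights=weights)[0]
    y = moves[k](x)
    fy = objective(y)
    if fy < fx or random.random() < math.exp((fx - fy) / temp):
        if fy < fx:
            weights[k] = min(10.0, weights[k] + 0.1)   # reward productive moves
        x, fx = y, fy
    temp *= 0.999                                      # geometric cooling

print(fx, weights)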

Relevance:

30.00%

Publisher:

Abstract:

The voltage source inverter (VSI) and the current source inverter (CSI) are widely used in industrial applications. However, traditional VSIs and CSIs share a common limitation: they cannot boost or buck the voltage coming from the battery, which makes them unsuitable for use alone in hybrid electric vehicle (HEV/EV) motor drive applications. A further issue is that a traditional inverter needs a dead-band time added to its control sequence, which distorts the output waveform. This report presents an impedance source (Z-source network) topology to overcome these problems; it uses one stage instead of two (a VSI or CSI plus a boost converter) to buck/boost the battery voltage in an inverter system. The Z-source hardware design can therefore reduce the number of switching elements, shrink the overall system size and weight, minimize system cost, and increase system efficiency. In addition, a modified space vector pulse-width modulation (SVPWM) control method is used together with the Z-source network to achieve the best efficiency and the lowest total harmonic distortion (THD) at different modulation indexes. Finally, the Z-source inverter is modulated under two control schemes, sinusoidal pulse width modulation (SPWM) and SVPWM, and their output voltage, ripple and THD are compared.
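A small sketch of one common SVPWM dwell-time calculation (a textbook formulation shown for illustration, not the modified scheme developed in this report): for a reference vector with modulation index m (0..1 in the linear range) at electrical angle theta, the two adjacent active vectors and the zero vectors are applied for the times computed below.

import math

def svpwm_dwell_times(m, theta, t_s):
    # Dwell times (s) for standard SVPWM over one switching period t_s.
    # m     : modulation index, sqrt(3) * |Vref| / Vdc, 0..1 in the linear range
    # theta : electrical angle of the reference vector (rad)
    sector = int(theta // (math.pi / 3)) % 6       # sector 0..5
    alpha = theta - sector * math.pi / 3           # angle within the sector
    t1 = t_s * m * math.sin(math.pi / 3 - alpha)   # first adjacent active vector
    t2 = t_s * m * math.sin(alpha)                 # second adjacent active vector
    t0 = t_s - t1 - t2                             # zero vectors (split between V0 and V7)
    return sector, t1, t2, t0

# Example: 80% modulation index, reference at 100 electrical degrees, 100 us period.
print(svpwm_dwell_times(0.8, math.radians(100), 100e-6))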