975 results for Numerical experiments
Abstract:
We propose a positive, accurate moment closure for linear kinetic transport equations based on a filtered spherical harmonic (FP_N) expansion in the angular variable. The FP_N moment equations are accurate approximations to linear kinetic equations, but they are known to suffer from unphysical, negative particle concentrations. The new positive filtered P_N (FP_N+) closure is developed to address this issue. The FP_N+ closure approximates the kinetic distribution by a spherical harmonic expansion that is non-negative on a finite, predetermined set of quadrature points. With an appropriate numerical PDE solver, the FP_N+ closure generates particle concentrations that are guaranteed to be non-negative. Under an additional, mild regularity assumption, we prove that as the moment order tends to infinity, the FP_N+ approximation converges, in the L^2 sense, at the same rate as the FP_N approximation; numerical tests suggest that this assumption may not be necessary. Numerical experiments on the challenging line source benchmark problem confirm that the FP_N+ method indeed produces accurate and non-negative solutions. To apply the FP_N+ closure to problems at large temporal-spatial scales, we develop a positive asymptotic-preserving (AP) numerical PDE solver. We prove that the proposed AP scheme maintains stability and accuracy with standard mesh sizes at large temporal-spatial scales, whereas generic numerical schemes require excessive temporal-spatial mesh refinement. We also show that the proposed scheme preserves positivity of the particle concentration under a time-step restriction. Numerical results confirm that the proposed AP scheme is capable of solving linear transport equations at large temporal-spatial scales for which a generic scheme could fail. The formulation of the FP_N+ closure involves constrained optimization problems that enforce non-negativity of the FP_N+ approximation on the set of quadrature points. These problems can be written as strictly convex quadratic programs (CQPs) with a large number of inequality constraints. To solve the CQPs efficiently, we propose a constraint-reduced variant of a Mehrotra predictor-corrector algorithm with a novel constraint selection rule. We prove that, under appropriate assumptions, the proposed optimization algorithm converges globally to the solution at a locally q-quadratic rate. We test the algorithm on randomly generated problems; the numerical results indicate that the combination of the proposed algorithm and the constraint selection rule outperforms other constraint-reduced algorithms, especially on problems with many more inequality constraints than variables.
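For orientation, here is a minimal sketch (ours, not the thesis code) of the kind of CQP the closure solves: given FP_N moments, find the nearest moment vector whose reconstruction is non-negative at fixed quadrature points. One-dimensional Legendre polynomials stand in for spherical harmonics, and a generic SLSQP solver stands in for the constraint-reduced interior-point method proposed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch: project FP_N moments onto the set whose reconstruction
# is non-negative at fixed quadrature points (1D slab-geometry stand-in;
# the thesis works with spherical harmonics on the sphere).
N = 7                                              # moment order (illustrative)
mu, w = np.polynomial.legendre.leggauss(2 * N)     # quadrature nodes and weights
# P[j, k] = P_k(mu_j): Legendre polynomial k evaluated at node j
P = np.stack([np.polynomial.legendre.legval(mu, np.eye(N + 1)[k])
              for k in range(N + 1)], axis=1)

def fpn_plus_project(u_fpn):
    """Nearest moment vector whose reconstruction is >= 0 at the nodes."""
    res = minimize(lambda u: np.sum((u - u_fpn) ** 2), u_fpn,
                   jac=lambda u: 2.0 * (u - u_fpn),
                   constraints={"type": "ineq", "fun": lambda u: P @ u,
                                "jac": lambda u: P},
                   method="SLSQP")
    return res.x

u = np.array([1.0, -1.8, 0.5] + [0.0] * (N - 2))   # reconstruction dips negative
u_pos = fpn_plus_project(u)
assert (P @ u_pos >= -1e-6).all()                  # non-negative at all nodes
```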
Abstract:
We shall consider the weak formulation of a linear elliptic model problem with discontinuous Dirichlet boundary conditions. Since such problems are typically not well-defined in the standard H^1-H^1 setting, we will introduce a suitable saddle point formulation in terms of weighted Sobolev spaces. Furthermore, we will discuss the numerical solution of such problems. Specifically, we employ an hp-discontinuous Galerkin method and derive an L^2-norm a posteriori error estimate. Numerical experiments demonstrate the effectiveness of the proposed error indicator in both the h- and hp-version setting. Indeed, in the latter case exponential convergence of the error is attained as the mesh is adaptively refined.
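The h- and hp-adaptive refinement driven by such an indicator typically follows a solve-estimate-mark-refine loop. As one concrete, standard ingredient of that loop, a bulk (Dörfler) marking step might look as follows; this is a generic sketch, not the paper's implementation, and theta = 0.5 is an arbitrary illustrative choice.

```python
import numpy as np

# Hedged sketch of a bulk (Doerfler) marking step, a standard ingredient
# of the solve-estimate-mark-refine loop driven by local indicators like
# the L^2-norm a posteriori estimate described above.
def dorfler_mark(eta, theta=0.5):
    """Smallest set of elements carrying a theta fraction of sum(eta^2)."""
    order = np.argsort(eta ** 2)[::-1]          # largest indicators first
    cum = np.cumsum(eta[order] ** 2)
    m = int(np.searchsorted(cum, theta * cum[-1])) + 1
    return order[:m]                            # indices of elements to refine

eta = np.random.rand(100)                       # stand-in local error indicators
print(len(dorfler_mark(eta)), "of", len(eta), "elements marked")
```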
Abstract:
In this article we consider a posteriori error estimation and adaptive mesh refinement for discontinuous Galerkin finite element approximations of the bifurcation problem associated with the steady incompressible Navier-Stokes equations. Particular attention is given to reliable error estimation of the critical Reynolds number at which a steady pitchfork or Hopf bifurcation occurs when the underlying physical system possesses reflectional or Z_2 symmetry. Computable a posteriori error bounds are derived by generalizing to bifurcation problems the standard Dual-Weighted-Residual approach, originally developed for estimating target functionals of the solution. Numerical experiments highlighting the practical performance of the proposed a posteriori error indicator on adaptively refined computational meshes are presented.
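For orientation, the standard dual-weighted-residual estimate that is generalized here can be written schematically (our notation, a sketch of the approach rather than the paper's exact estimator) as

$$ J(u) - J(u_h) \approx \sum_{K \in \mathcal{T}_h} \rho_K(u_h)\,(z - z_h)\big|_K, $$

where $J$ is the target functional (here, the critical Reynolds number), $\rho_K$ is the local residual of the discrete solution $u_h$ on element $K$, and $z$ is the dual (adjoint) solution approximated by $z_h$; the right-hand side is localized into element indicators that drive the adaptive refinement.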
Abstract:
Coprime and nested sampling are well-known deterministic sampling techniques that operate at rates significantly lower than the Nyquist rate and yet allow perfect reconstruction of the spectra of wide-sense stationary signals. However, theoretical guarantees for these samplers assume ideal conditions, such as synchronous sampling and the ability to compute statistical expectations exactly. This thesis studies the performance of coprime and nested samplers in the spatial and temporal domains when these assumptions are violated. In the spatial domain, the robustness of these samplers is studied by considering arrays with perturbed sensor locations (with unknown perturbations). Simplified expressions for the Fisher information matrix for perturbed coprime and nested arrays are derived, which explicitly highlight the role of the co-array. It is shown that, even in the presence of perturbations, it is possible to resolve $O(M^2)$ sources under appropriate conditions on the size of the grid. The assumption of small perturbations leads to a novel "bi-affine" model in terms of source powers and perturbations. The redundancies in the co-array are then exploited to eliminate the nuisance perturbation variable and reduce the bi-affine problem to a linear underdetermined (sparse) problem in the source powers. This thesis also studies the robustness of coprime sampling to a finite number of samples and to sampling jitter, by analyzing their effects on the quality of the estimated autocorrelation sequence. A variety of bounds on the error introduced by such non-ideal sampling schemes are computed by considering a statistical model for the perturbation. They indicate that coprime sampling leads to stable estimation of the autocorrelation sequence in the presence of small perturbations. Under appropriate assumptions on the distribution of WSS signals, sharp bounds on the estimation error are established, which indicate that the error decays exponentially with the number of samples. The theoretical claims are supported by extensive numerical experiments.
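As a small illustration of the co-array structure at play, the following sketch (ours, following one common coprime-array convention; the thesis may use a different one) computes the physical sensor positions and the difference co-array whose enlarged virtual aperture underlies the $O(M^2)$ source-resolution claim.

```python
import numpy as np

# Hedged sketch: sensor positions and difference co-array of a coprime
# array with coprime integers (M, N), here M sensors at spacing N plus
# 2N sensors at spacing M (one common convention).
def coprime_positions(M, N):
    sub1 = N * np.arange(M)                        # M sensors, spacing N
    sub2 = M * np.arange(2 * N)                    # 2N sensors, spacing M
    return np.unique(np.concatenate([sub1, sub2]))

def difference_coarray(pos):
    return np.unique(pos[:, None] - pos[None, :])  # all pairwise lags

pos = coprime_positions(3, 5)
lags = difference_coarray(pos)
print(len(pos), "physical sensors ->", int((lags >= 0).sum()),
      "distinct non-negative lags in the co-array")
```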
Abstract:
Two trends are emerging from modern electric power systems: the growth of renewable (e.g., solar and wind) generation, and the integration of information technologies and advanced power electronics. The former introduces large, rapid, and random fluctuations in power supply, demand, frequency, and voltage, which become a major challenge for real-time operation of power systems. The latter creates a tremendous number of controllable intelligent endpoints such as smart buildings and appliances, electric vehicles, energy storage devices, and power electronic devices that can sense, compute, communicate, and actuate. Most of these endpoints are distributed on the load side of power systems, in contrast to traditional control resources such as centralized bulk generators. This thesis focuses on controlling power systems in real time using these load-side resources. Specifically, it studies two problems.
(1) Distributed load-side frequency control: We establish a mathematical framework to design distributed frequency control algorithms for flexible electric loads. In this framework, we formulate a category of optimization problems, called optimal load control (OLC), to incorporate the goals of frequency control, such as balancing power supply and demand, restoring frequency to its nominal value, and restoring inter-area power flows, in a way that minimizes the total disutility incurred by the loads in participating in frequency control by deviating from their nominal power usage. By exploiting distributed algorithms to solve OLC and analyzing the convergence of these algorithms, we design distributed load-side controllers and prove stability of the closed-loop power systems governed by these controllers. This general framework is adapted and applied to different types of power systems described by different models, or to achieve different levels of control goals under different operation scenarios. We first consider a dynamically coherent power system that can be equivalently modeled with a single synchronous machine. We then extend our framework to a multi-machine power network, where we consider primary and secondary frequency control, linear and nonlinear power flow models, and the interactions between generator dynamics and load control. A minimal numerical sketch of the dual decomposition underlying OLC is given after item (2) below.
(2) Two-timescale voltage control: The voltage of a power distribution system must be maintained closely around its nominal value in real time, even in the presence of highly volatile power supply or demand. For this purpose, we jointly control two types of reactive power sources: a capacitor operating at a slow timescale, and a power electronic device, such as a smart inverter or a D-STATCOM, operating at a fast timescale. Their control actions are determined by solving optimal power flow problems at two timescales. Specifically, the slow-timescale problem is a chance-constrained optimization, which minimizes power loss and regulates the voltage at the current time instant while limiting the probability of future voltage violations due to stochastic changes in power supply or demand. This control framework forms the basis of an optimal sizing problem, which determines the installation capacities of the control devices by minimizing the sum of power loss and capital cost. We develop computationally efficient heuristics to solve the optimal sizing problem and implement real-time control. Numerical experiments show that the proposed sizing and control schemes significantly improve the reliability of voltage control with a moderate increase in cost.
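The promised sketch for problem (1): a dual decomposition in which each load follows its best response to a price and a dual update enforces power balance. In the thesis framework the (scaled) frequency deviation itself plays the role of the dual variable; here the ascent is iterated explicitly, and all parameter values are illustrative.

```python
import numpy as np

# Hedged sketch of the dual decomposition behind optimal load control
# (OLC): loads minimize total disutility sum_i a_i*d_i^2/2 subject to
# jointly absorbing a power imbalance dP. All values are illustrative.
a = np.array([0.5, 1.0, 2.0, 4.0])   # quadratic disutility coefficients
dP = 1.0                             # power imbalance to absorb
lam, step = 0.0, 0.2
for _ in range(200):
    d = lam / a                      # load i's best response to the price lam
    lam += step * (dP - d.sum())     # dual ascent on the power-balance constraint
print(d, d.sum())                    # d_i proportional to 1/a_i, sum(d) -> dP
```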
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach to handling two- and multi-stage optimization problems under uncertainty is scenario analysis. To this end, the uncertainty in some of the problem data is modeled by random vectors with finite, stage-specific supports. Each of these realizations represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective. For the choice of the penalty parameter, we review some of the popular methods and propose a new adaptive strategy that aims to track the progress of the algorithm more closely. Numerical experiments on instances of multistage stochastic linear problems suggest that most existing techniques may either converge prematurely to a suboptimal solution or converge to the optimal solution, but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review existing techniques and propose replacing the quadratic term with a linear one. Although our method remains to be tested, we expect it to alleviate some numerical and theoretical difficulties of the progressive hedging method.
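To make the progressive hedging mechanics concrete, here is a minimal sketch on a toy two-stage problem (ours, with an arbitrary fixed penalty parameter rho; the adaptive choice of rho is precisely what the thesis investigates).

```python
import numpy as np

# Hedged sketch of progressive hedging on a toy problem: first-stage
# scalar x, scenario objectives f_s(x) = (x - t_s)^2 with probabilities
# p_s, so the optimum is the weighted mean of the targets t_s.
t = np.array([1.0, 2.0, 4.0])       # scenario targets (illustrative)
p = np.array([0.5, 0.3, 0.2])       # scenario probabilities
rho = 1.0                           # fixed penalty parameter
x = t.copy()                        # per-scenario copies of the decision
w = np.zeros_like(t)                # nonanticipativity multipliers
for _ in range(100):
    xbar = p @ x                    # implementable (averaged) decision
    w += rho * (x - xbar)           # multiplier update
    # scenario subproblem: argmin (x - t_s)^2 + w_s*x + rho/2*(x - xbar)^2
    x = (2 * t - w + rho * xbar) / (2 + rho)
print(p @ x, (p * t).sum())         # both approach the optimum 1.9
```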
Abstract:
The focus of this dissertation is to study qualitatively the underlying physics of vortex shedding and wake dynamics of long aspect-ratio aerodynamic bodies in incompressible viscous flow through the use of the KLE method. We carried out a long series of numerical experiments on flow around a circular cylinder at low Reynolds numbers. The study of flow at low Reynolds numbers provides insight into the fluid physics and also plays a critical role in applications to stalled turbine rotors. Many of the conclusions about the qualitative nature of the physical mechanisms characterizing vortex formation, shedding, and further interaction analyzed here at low Re could be extended to other Re regimes and help to understand the separation of boundary layers on airfoils and other aerodynamic surfaces. In the long run, this work aims to provide a better understanding of complex multi-physics problems involving fluid-structure-control interaction through improved mathematical and computational models of the multi-physics process. Besides the scientific conclusions produced, the research on streamlined and bluff-body conditions will also serve as a valuable guide for the future design of blade aerodynamics and the placement of wind and hydrokinetic turbines, increasing the efficiency in the use of expensive workforce, supplies, and infrastructure. After an introductory section describing the main fields of application of wind power and hydrokinetic turbines, we describe the main features and theoretical background of the numerical method used here. Then we present the analysis of the numerical experiments on the oscillatory regime right before the onset of vortex shedding for circular cylinders. We verified the length of the closed near-wake behind the cylinder, analyzed the decay of the wake in the wake formation region, and studied the St-Re relationship at Reynolds numbers just below shedding onset against experimental data. We found a theoretical model that describes the time evolution of the amplitude of fluctuations in the vorticity field of the twin-vortex wake, which accurately matches the numerical results in both the frequency of the oscillation and the rate of decay. We also proposed a model based on an analog circuit that interprets the flow in question by reducing the number of degrees of freedom. It follows the idea of the nonlinear oscillator and resembles the dynamic mechanism of the closed near-wake with a commonly configured sine-wave oscillator. This low-dimensional circuit model may also help to understand the underlying physical mechanisms, related to vorticity transport, that give rise to those oscillations.
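As a small illustration of the kind of decaying-oscillation model described above (a generic damped oscillator; the parameter values are illustrative, not the thesis's fitted ones):

```python
import numpy as np

# Hedged sketch: vorticity fluctuations in the twin-vortex wake before
# shedding onset modeled as a damped linear oscillation,
# q(t) = A0 * exp(-sigma*t) * cos(2*pi*f*t).
def wake_fluctuation(t, A0=1.0, sigma=0.05, f=0.12):
    return A0 * np.exp(-sigma * t) * np.cos(2 * np.pi * f * t)

t = np.linspace(0.0, 60.0, 600)
q = wake_fluctuation(t)
# fitting sigma and f to simulated vorticity signals is what pins down
# the decay rate and oscillation frequency reported above
```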
Abstract:
The optical constants were obtained using Wolfe's method. These constants (the absorption coefficient α, the refractive index n, and the thin-film thickness d) are central to the optical characterization of the material. Wolfe's method was compared with the method employed by R. Swanepoel. A constrained nonlinear programming model was developed, making it possible to estimate the optical constants of semiconductor thin films from transmission data alone. A solution of the nonlinear programming model via quadratic programming was presented. The reliability of the proposed method was demonstrated, obtaining values of α = 10378.34 cm−1, n = 2.4595, d = 989.71 nm, and Eg = 1.39 eV through numerical experiments with spectral transmittance measurements on Cu3BiS3 thin films.
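For reference, a minimal sketch of the Swanepoel envelope relation against which the proposed method is compared (Swanepoel, 1983); the envelope transmittance values and substrate index below are illustrative, not the measured Cu3BiS3 data.

```python
import numpy as np

# Hedged sketch: refractive index from the transmittance envelope
# maxima/minima T_M, T_m and substrate index s, in the transparent
# region (Swanepoel's envelope method).
def swanepoel_n(TM, Tm, s=1.51):
    N = 2 * s * (TM - Tm) / (TM * Tm) + (s * s + 1) / 2
    return np.sqrt(N + np.sqrt(N * N - s * s))

print(swanepoel_n(0.85, 0.55))   # illustrative envelope values, n ~ 2.6
```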
Abstract:
Decarbonization of maritime transport requires immediate action. In the short term, ship weather routing can provide greenhouse gas emission reductions, even for existing ships and without retrofitting them. Weather routing is based on making optimal use of both environmental information and knowledge about vessel seakeeping and performance. Combining them at a state-of-the-art level and making use of path planning in realistic conditions can be challenging. To address these topics in an open-source framework, this thesis led to the development of a new module called bateau, and to its combination with the ship routing model VISIR. bateau includes both hull geometry and propulsion modelling for various vessel types. It has two objectives: to predict the sustained speed in a seaway and to estimate the CO2 emission rate during the voyage. Various semi-empirical approaches were used in bateau to predict the ship's hydrodynamic and aerodynamic resistance in both head and oblique seas. Assuming that the ship sails at a constant engine load, the involuntary speed loss due to waves was estimated. This thesis also attempted to clarify the role played by the actual representation of the sea state; in particular, the influence of the wave steepness parameter was assessed. To deal with ships with a larger superstructure, the wind added resistance was also estimated. Numerical experiments with bateau were conducted for a medium-size and a large-size containership, a bulk carrier, and a tanker. Simulations of optimal routes were carried out for a feeder containership during voyages in the North Indian Ocean and the South China Sea. Least-CO2 routes were compared with least-distance ones, assessing the relative CO2 savings. Analysis fields from the Copernicus Marine Service were used in the numerical experiments.
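As a toy illustration of the sustained-speed balance just described (our sketch, not bateau's semi-empirical formulation): at constant delivered thrust, added wave resistance shifts the equilibrium R(v) = T to a lower speed. All coefficients below are invented for illustration.

```python
import numpy as np
from scipy.optimize import brentq

# Hedged sketch: sustained speed as the root of the thrust-resistance
# balance, with a toy quadratic calm-water resistance model [N].
def sustained_speed(T, R_added):
    calm = lambda v: 2.0e3 * v ** 2
    return brentq(lambda v: calm(v) + R_added - T, 0.1, 30.0)

T = 2.0e3 * 10.0 ** 2                        # thrust sized for 10 m/s in calm water
for R_add in (0.0, 2.0e4, 5.0e4):
    print(R_add, sustained_speed(T, R_add))  # speed loss grows with added resistance
```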
Abstract:
This thesis studies the agent identity privacy problem in the scalar linear quadratic Gaussian (LQG) control system. For this problem, privacy models and privacy measures must first be established. Two characteristics set it apart: the agent identity is a binary hypothesis (Agent A or Agent B), and inference depends on a trajectory of correlated data rather than a single observation. I propose privacy models and corresponding privacy measures that account for these two characteristics. An eavesdropper is assumed to perform hypothesis testing on the agent identity based on the intercepted environment state sequence. The privacy risk is measured by the Kullback-Leibler divergence between the probability distributions of the state sequences under the two hypotheses. Taking into account both the accumulated control reward and the privacy risk, an optimization problem over the policy of Agent B is formulated; the formulated problems address the two design objectives of maximizing the control reward and minimizing the privacy risk. The optimal deterministic privacy-preserving LQG policy of Agent B is a linear mapping, and a sufficient condition is given to guarantee that this policy is time-invariant in the asymptotic regime. Adding an independent Gaussian random variable cannot improve the performance of Agent B. I conduct a theoretical analysis of the LQG control policy in the agent identity privacy problem and of the trade-off between the control reward and the privacy risk. Finally, the theoretical results are justified by numerical experiments, which illustrate the reward-privacy trade-off; the observations and insights drawn from them are explained in the last chapter.
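A minimal sketch of the privacy measure: the Kullback-Leibler divergence between two multivariate Gaussians, which applies when the state sequences under the two hypotheses are jointly Gaussian. The inputs below are illustrative stand-ins for the trajectory distributions.

```python
import numpy as np

# Hedged sketch: KL divergence KL(N(mu0, S0) || N(mu1, S1)) between two
# multivariate Gaussians, the privacy risk when state sequences under
# the two identity hypotheses are jointly Gaussian.
def gaussian_kl(mu0, S0, mu1, S1):
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    dmu = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + dmu @ S1_inv @ dmu - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

mu0, S0 = np.zeros(3), np.eye(3)            # hypothesis A (illustrative)
mu1, S1 = 0.1 * np.ones(3), 1.2 * np.eye(3) # hypothesis B (illustrative)
print(gaussian_kl(mu0, S0, mu1, S1))        # smaller value = higher privacy
```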
Abstract:
The Mediterranean Sea is a semi-enclosed basin connected to the Atlantic Ocean through the narrow and shallow Strait of Gibraltar and further subdivided into two sub-basins, the Eastern Mediterranean and the Western Mediterranean, connected through the Strait of Sicily. On an annual basis, a net heat budget of −7 W/m2, combined with an excess of evaporation over precipitation and runoff, together with wind stress, is responsible for the antiestuarine character of the zonal thermohaline circulation. The outflow at the Strait of Gibraltar is mainly composed of Levantine Intermediate Water (LIW) and deep water masses formed in the Western Mediterranean Sea. The aim of this thesis is to validate and quantitatively assess the main routes of the water masses composing the outflow at the Strait of Gibraltar, using, for the first time in the Mediterranean Sea, a Lagrangian interpretation of the Eulerian velocity field produced from an eddy-resolving reanalysis dataset spanning 2000 to 2012. A Lagrangian model named Ariane is used to map out three-dimensional trajectories in order to describe the pathways of water-mass transport from the Strait of Sicily, the Gulf of Lion, and the Northern Tyrrhenian Sea to the Strait of Gibraltar. Numerical experiments were carried out by seeding millions of particles in the Strait of Gibraltar and following them backwards in time to track the origins of the water masses and the transport exchanged between the different sections of the Mediterranean. Finally, the main routes of the intermediate and deep water masses are reconstructed from the virtual particle trajectories, which highlight the role of the Western Mediterranean Deep Water (WMDW) as the main contributor to the Gibraltar outflow. For the first time, a quantitative description of the flow of water masses coming from the Eastern Mediterranean towards the Strait of Gibraltar is provided, and a new route directly linking the Northern Tyrrhenian Sea to the Strait of Gibraltar is detected.
Abstract:
In this work we solve mathematical programs with complementarity constraints (MPCCs) using the hyperbolic smoothing strategy. Under this approach, the complementarity condition is relaxed through the use of the hyperbolic smoothing function, which involves a positive parameter that is driven to zero. An iterative algorithm is implemented in the MATLAB language and tested on a set of AMPL problems from the MacMPEC database.
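To illustrate the smoothing idea on a toy MPCC, the sketch below uses the CHKS function, one standard hyperbolic-type smoothing (the exact smoothing function used in this work may differ): phi_tau(a, b) = 0 is equivalent to a*b = tau^2 with a, b > 0, so driving tau to zero recovers complementarity.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged toy MPCC: min (x-1)^2 + (y-2)^2 subject to x >= 0, y >= 0,
# x*y = 0, whose solution is (0, 2). The complementarity condition is
# replaced by the smooth equation phi(x, y, tau) = 0 and tau -> 0.
def phi(a, b, tau):
    return a + b - np.sqrt((a - b) ** 2 + 4 * tau ** 2)

z, tau = np.array([0.5, 0.5]), 1.0
for _ in range(6):
    res = minimize(lambda z: (z[0] - 1) ** 2 + (z[1] - 2) ** 2, z,
                   constraints={"type": "eq",
                                "fun": lambda z: phi(z[0], z[1], tau)},
                   method="SLSQP")
    z, tau = res.x, tau / 10         # tighten the smoothing parameter
print(z)                             # approaches the MPCC solution (0, 2)
```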
Abstract:
This work provides a numerical and experimental investigation of fatigue crack growth behavior in steel weldments, including crack closure effects and their coupled interaction with weld strength mismatch. A central objective of this study is to extend previously developed frameworks for evaluating crack closure effects on fatigue crack growth rates (FCGR) to steel weldments while, at the same time, gaining additional understanding of commonly adopted criteria for crack closure loads and their influence on the fatigue life of structural welds. Very detailed nonlinear finite element analyses using 3-D models of compact tension C(T) fracture specimens with center-cracked, square-groove welds provide the evolution of crack growth with cyclic stress intensity factor, which is required for estimating the closure loads. Fatigue crack growth tests conducted on plane-sided, shallow-cracked C(T) specimens provide the necessary data against which crack closure effects on fatigue crack growth behavior can be assessed. Overall, the present investigation provides additional support for estimation procedures of plasticity-induced crack closure loads in fatigue analyses of structural steels and their weldments.
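As a schematic of the closure-corrected growth law underlying such FCGR evaluations, a Paris-type relation da/dN = C (ΔK_eff)^m with ΔK_eff = K_max − K_op can be integrated blockwise, as sketched below; the constants, geometry factor, and loading are illustrative, not the weldment data of the study.

```python
import numpy as np

# Hedged sketch: closure-corrected Paris law, where K_op is the
# crack-opening (closure) load level and only the opened part of the
# load cycle drives growth.
C, m = 1e-11, 3.0                        # Paris constants (K in MPa*sqrt(m), a in m)
S_max, R, op_ratio = 100.0, 0.1, 0.3     # max stress [MPa], load ratio, closure level

def dK_eff(a):
    K_max = S_max * np.sqrt(np.pi * a)   # schematic center-crack K solution
    K_op = op_ratio * K_max              # crack opens only above K_op
    return K_max - max(K_op, R * K_max)  # effective (opened) range of the cycle

a, N, dN = 5e-3, 0, 1000                 # initial half-crack 5 mm, 1000-cycle blocks
while a < 20e-3:
    a += dN * C * dK_eff(a) ** m         # blockwise integration of the growth law
    N += dN
print(f"{N:.3g} cycles to grow from 5 mm to 20 mm (illustrative)")
```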