Abstract:
This thesis presents a study of the dynamical, nonlinear interaction of colliding gravitational waves, as described by classical general relativity. It is focused mainly on two fundamental questions: First, what is the general structure of the singularities and Killing-Cauchy horizons produced in the collisions of exactly plane-symmetric gravitational waves? Second, under what conditions will the collisions of almost-plane gravitational waves (waves with large but finite transverse sizes) produce singularities?
In the work on the collisions of exactly-plane waves, it is shown that Killing horizons in any plane-symmetric spacetime are unstable against small plane-symmetric perturbations. It is thus concluded that the Killing-Cauchy horizons produced by the collisions of some exactly plane gravitational waves are nongeneric, and that generic initial data for the colliding plane waves always produce "pure" spacetime singularities without such horizons. This conclusion is later proved rigorously (using the full nonlinear theory rather than perturbation theory), in connection with an analysis of the asymptotic singularity structure of a general colliding plane-wave spacetime. This analysis also proves that asymptotically the singularities created by colliding plane waves are of inhomogeneous-Kasner type; the asymptotic Kasner axes and exponents of these singularities in general depend on the spatial coordinate that runs tangentially to the singularity in the non-plane-symmetric direction.
In the work on collisions of almost-plane gravitational waves, first some general properties of single almost-plane gravitational-wave spacetimes are explored. It is shown that, by contrast with an exact plane wave, an almost-plane gravitational wave cannot have a propagation direction that is Killing; i.e., it must diffract and disperse as it propagates. It is also shown that an almost-plane wave cannot be precisely sandwiched between two null wavefronts; i.e., it must leave behind tails in the spacetime region through which it passes. Next, the occurrence of spacetime singularities in the collisions of almost-plane waves is investigated. It is proved that if two colliding, almost-plane gravitational waves are initially exactly plane-symmetric across a central region of sufficiently large but finite transverse dimensions, then their collision produces a spacetime singularity with the same local structure as in the exact-plane-wave collision. Finally, it is shown that a singularity still forms when the central regions are only approximately plane-symmetric initially. Stated more precisely, it is proved that if the colliding almost-plane waves are initially sufficiently close to being exactly plane-symmetric across a bounded central region of sufficiently large transverse dimensions, then their collision necessarily produces spacetime singularities. In this case, nothing is now known about the local and global structures of the singularities.
Abstract:
In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.
For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system and differ from traditional methods in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort and examples are presented for which the accuracy of the proposed approximations compare favorably to results obtained by existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.
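The idea of working directly with the Fokker-Planck equation can be illustrated in the simplest setting: for a one-dimensional gradient system dX = -V'(X) dt + sqrt(2D) dW, the stationary Fokker-Planck equation is solved exactly by p(x) ∝ exp(-V(x)/D). A minimal sketch (the quartic potential, diffusion constant, and grid are illustrative choices, not the thesis's examples):

```python
import math

def stationary_density(V, D, xs):
    # For the Ito SDE dX = -V'(X) dt + sqrt(2 D) dW, the stationary
    # Fokker-Planck equation is solved by p(x) proportional to exp(-V(x)/D);
    # normalize numerically with the trapezoid rule.
    w = [math.exp(-V(x) / D) for x in xs]
    h = xs[1] - xs[0]
    Z = h * (sum(w) - 0.5 * (w[0] + w[-1]))
    return [wi / Z for wi in w]

# Illustrative choice: quartic (Duffing-like) potential, unit diffusion.
xs = [-5.0 + 0.01 * i for i in range(1001)]
p = stationary_density(lambda x: x**4 / 4, 1.0, xs)
h = xs[1] - xs[0]
mean_x2 = h * sum(x * x * pi for x, pi in zip(xs, p))  # stationary E[X^2]
```

Statistics of interest (moments, outcrossing rates) then follow from quadratures against this density rather than from simulating the stochastic differential equation itself.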
Laplace's method of asymptotic approximation is applied to approximate the probability integrals which arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates for systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independently, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of SORM integrals. In many cases, it may be computationally expensive to transform the variables, and an approximation is also developed in the original variables. Examples are presented illustrating the accuracy of the approximations and results are compared with existing approximations.
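Laplace's method, as used here, reduces a probability integral to a minimization plus a curvature correction: in one dimension, I(λ) = ∫ h(θ) exp(-λ g(θ)) dθ ≈ h(θ*) exp(-λ g(θ*)) sqrt(2π / (λ g''(θ*))), where θ* minimizes g, with the approximation becoming exact as λ → ∞. A hedged sketch (the test functions and λ below are illustrative, not the thesis's reliability integrals):

```python
import math

def laplace_approx(h, g, g2, theta_star, lam):
    # Laplace's method: evaluate the integrand at the minimizer of the
    # exponent and correct by the local Gaussian curvature.
    return h(theta_star) * math.exp(-lam * g(theta_star)) * math.sqrt(
        2 * math.pi / (lam * g2(theta_star)))

def quadrature(h, g, lam, a=-10.0, b=10.0, n=20000):
    # Brute-force trapezoid reference value for comparison.
    dx = (b - a) / n
    s = sum(h(a + i * dx) * math.exp(-lam * g(a + i * dx)) for i in range(n + 1))
    s -= 0.5 * (h(a) * math.exp(-lam * g(a)) + h(b) * math.exp(-lam * g(b)))
    return s * dx

g = lambda t: t**2 / 2 + t**4 / 4   # a non-quadratic exponent
g2 = lambda t: 1 + 3 * t**2         # its second derivative
approx = laplace_approx(lambda t: 1.0, g, g2, 0.0, lam=50.0)
exact = quadrature(lambda t: 1.0, g, lam=50.0)
```

The multidimensional case replaces g''(θ*) by the determinant of the Hessian at the minimizer, which is why only an optimization problem must be solved.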
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there are a plethora of models, based on different assumptions, applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalent Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
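The basic loop of adaptive test selection can be sketched with the greedy information-gain baseline that BROAD is compared against (not the EC2 criterion itself); the three theories, four tests, and choice probabilities below are entirely made up for illustration:

```python
import math

# Hypothetical setup: three candidate theories, each giving the probability
# that a subject picks option A on each of four candidate tests (made-up numbers).
pred = {
    "expected_value": [0.9, 0.8, 0.5, 0.2],
    "prospect":       [0.6, 0.2, 0.9, 0.7],
    "CRRA":           [0.3, 0.7, 0.4, 0.9],
}
prior = {h: 1.0 / 3 for h in pred}

def entropy(dist):
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def posterior(belief, test, choice_a):
    # Bayes update of beliefs over theories after one observed binary choice.
    post = {h: belief[h] * (pred[h][test] if choice_a else 1 - pred[h][test])
            for h in belief}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

def most_informative_test(belief, tests):
    # Greedy information gain: pick the test minimizing expected posterior entropy.
    def expected_entropy(t):
        pa = sum(belief[h] * pred[h][t] for h in belief)
        return (pa * entropy(posterior(belief, t, True))
                + (1 - pa) * entropy(posterior(belief, t, False)))
    return min(tests, key=expected_entropy)

t = most_informative_test(prior, range(4))
belief = posterior(prior, t, choice_a=True)
```

The thesis's point is precisely that this information-gain greedy rule can fail under noisy responses, which motivates the adaptively submodular EC2 criterion.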
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
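The temporal choice inconsistency that hyperbolic discounting produces, and exponential discounting cannot, is visible in a two-line computation: the same smaller-sooner/larger-later pair can flip preference when shifted into the future. A sketch with illustrative amounts and parameters:

```python
def hyperbolic(d, k=0.5):
    # Hyperbolic discount factor at delay d (k is an illustrative parameter).
    return 1.0 / (1.0 + k * d)

def exponential(d, r=0.05):
    # Exponential (time-consistent) discount factor at delay d.
    return (1.0 + r) ** -d

# $100 sooner vs $110 one day later -- the same pair, viewed from today
# and viewed 30 days out (amounts and delays are illustrative).
h_now = (100 * hyperbolic(0), 110 * hyperbolic(1))    # sooner option wins
h_far = (100 * hyperbolic(30), 110 * hyperbolic(31))  # later option wins: reversal
e_now = (100 * exponential(0), 110 * exponential(1))
e_far = (100 * exponential(30), 110 * exponential(31))  # no reversal possible
```

Under exponential discounting the ratio of the two present values is the same at any vantage point, so preferences never reverse; under hyperbolic discounting they can.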
We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinctly different from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
In future work, BROAD can be applied widely to test different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Flash memory is a leading storage media with excellent features such as random access and high storage density. However, it also faces significant reliability and endurance challenges. In flash memory, the charge level in the cells can be easily increased, but removing charge requires an expensive erasure operation. In this thesis we study rewriting schemes that enable the data stored in a set of cells to be rewritten by only increasing the charge level in the cells. We consider two types of modulation schemes: a conventional modulation based on the absolute charge levels of the cells, and a recently proposed scheme based on the relative cell levels, called rank modulation. The contributions of this thesis to the study of rewriting schemes for rank modulation include the following: we
• propose a new method of rewriting in rank modulation, beyond the previously proposed method of “push-to-the-top”;
• study the limits of rewriting with the newly proposed method, and derive a tight upper bound of 1 bit per cell;
• extend the rank-modulation scheme to support rankings with repetitions, in order to improve the storage density;
• derive a tight upper bound of 2 bits per cell for rewriting in rank modulation with repetitions;
• construct an efficient rewriting scheme that asymptotically approaches the upper bound of 2 bits per cell.
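For context, the baseline “push-to-the-top” operation on a rank-modulation state can be sketched in one line (a toy illustration of the previously proposed primitive, not the new rewriting method developed in the thesis):

```python
def push_to_top(ranking, cell):
    # Rank modulation stores data in the relative order of cell charge levels.
    # "Push-to-the-top" raises one cell's charge above all others, so it
    # becomes top-ranked while the relative order of the rest is unchanged.
    return [cell] + [c for c in ranking if c != cell]

state = [2, 0, 1, 3]            # cells listed from highest to lowest charge
state = push_to_top(state, 3)   # cell 3 is charged above all other cells
```

Since the operation only raises a charge level, a sequence of such pushes rewrites the stored permutation without any erasure.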
The next part of this thesis studies rewriting schemes for a conventional absolute-levels modulation. The considered model is called “write-once memory” (WOM). We focus on WOM schemes that achieve the capacity of the model. In recent years several capacity-achieving WOM schemes were proposed, based on polar codes and randomness extractors. The contributions of this thesis to the study of WOM schemes include the following: we
• propose a new capacity-achieving WOM scheme based on sparse-graph codes, and show its attractive properties for practical implementation;
• improve the design of polar WOM schemes to remove the reliance on shared randomness and include an error-correction capability.
The last part of the thesis studies the local rank-modulation (LRM) scheme, in which a sliding window going over a sequence of real-valued variables induces a sequence of permutations. The LRM scheme is used to simulate a single conventional multi-level flash cell. The simulated cell is realized by a Gray code traversing all the relative-value states where, physically, the transition between two adjacent states in the Gray code is achieved by using a single “push-to-the-top” operation. The main results of the last part of the thesis are two constructions of Gray codes with asymptotically-optimal rate.
Abstract:
We report the formulation of an ABCD matrix for reflection and refraction of Gaussian light beams at the surface of a parabola of revolution that separates media of different refractive indices, based on optical phase matching. The equations for the spot sizes and wave-front radii of the beams are also obtained by using the ABCD matrix. With these matrices, we can more conveniently design and evaluate special optical systems that include these kinds of elements. (c) 2005 Optical Society of America
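For background, once an ABCD matrix for an element is known, a Gaussian beam is carried through it by the standard bilinear transform of the complex beam parameter q, from which the spot size follows. A sketch for the free-space case (wavelength and waist are illustrative; the parabolic-surface matrices themselves are the paper's contribution and are not reproduced here):

```python
import math

def propagate_q(q, M):
    # ABCD law for the complex beam parameter: q' = (A q + B) / (C q + D).
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

def spot_size(q, wavelength):
    # From 1/q = 1/R - i * wavelength / (pi * w^2), solve for w.
    return math.sqrt(-wavelength / (math.pi * (1.0 / q).imag))

lam = 633e-9                      # illustrative wavelength (HeNe)
w0 = 1e-3                         # illustrative 1 mm waist
q0 = 1j * math.pi * w0**2 / lam   # q at the waist (R = infinity)
free_space = ((1.0, 2.0), (0.0, 1.0))   # ABCD matrix for 2 m of free space
q1 = propagate_q(q0, free_space)
w1 = spot_size(q1, lam)           # expanded spot size after 2 m
```

The wave-front radius is read off the same way, from the real part of 1/q.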
Abstract:
Let P_{K,L}(N) be the number of unordered partitions of a positive integer N into K or fewer positive integer parts, each part not exceeding L. A distribution of the form

Σ_{N≤x} P_{K,L}(N)

is considered first. For any fixed K, this distribution approaches a piecewise polynomial function as L increases to infinity. As both K and L approach infinity, this distribution is asymptotically normal. These results are proved by studying the convergence of the characteristic function.
The main result is the asymptotic behavior of P_{K,K}(N) itself, for certain large K and N. This is obtained by studying a contour integral of the generating function taken along the unit circle. The bulk of the estimate comes from integrating along a small arc near the point 1. Diophantine approximation is used to show that the integral along the rest of the circle is much smaller.
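For small arguments, P_{K,L}(N) can be computed exactly from the standard recurrence (either no part equals L, or remove one part of size L), which is useful for checking the asymptotics numerically; a minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n, k, l):
    # Partitions of n into at most k parts, each part at most l:
    # either no part equals l (reduce l), or remove one part of size l.
    if n == 0:
        return 1
    if n < 0 or k == 0 or l == 0:
        return 0
    return P(n, k, l - 1) + P(n - l, k - 1, l)

def cumulative(x, k, l):
    # The distribution studied first: the sum of P(N) over N <= x.
    return sum(P(n, k, l) for n in range(x + 1))
```

Conjugation of Young diagrams gives the symmetry P(n, k, l) = P(n, l, k), a handy consistency check on any implementation.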
Abstract:
Jet noise reduction is an important goal within both commercial and military aviation. Although large-scale numerical simulations are now able to simultaneously compute turbulent jets and their radiated sound, low-cost, physically motivated models are needed to guide noise-reduction efforts. A particularly promising modeling approach centers around certain large-scale coherent structures, called wavepackets, that are observed in jets and their radiated sound. The typical approach to modeling wavepackets is to approximate them as linear modal solutions of the Euler or Navier-Stokes equations linearized about the long-time mean of the turbulent flow field. The near-field wavepackets obtained from these models show compelling agreement with those educed from experimental and simulation data for both subsonic and supersonic jets, but the acoustic radiation is severely under-predicted in the subsonic case. This thesis contributes to two aspects of these models. First, two new solution methods are developed that can be used to efficiently compute wavepackets and their acoustic radiation, reducing the computational cost of the model by more than an order of magnitude. The new techniques are spatial integration methods and constitute a well-posed, convergent alternative to the frequently used parabolized stability equations. Using concepts related to well-posed boundary conditions, the methods are formulated for general hyperbolic equations and thus have potential applications in many fields of physics and engineering. Second, the nonlinear and stochastic forcing of wavepackets is investigated with the goal of identifying and characterizing the missing dynamics responsible for the under-prediction of acoustic radiation by linear wavepacket models for subsonic jets.
Specifically, we use ensembles of large-eddy-simulation flow and force data along with two data decomposition techniques to educe the actual nonlinear forcing experienced by wavepackets in a Mach 0.9 turbulent jet. Modes with high energy are extracted using proper orthogonal decomposition, while high gain modes are identified using a novel technique called empirical resolvent-mode decomposition. In contrast to the flow and acoustic fields, the forcing field is characterized by a lack of energetic coherent structures. Furthermore, the structures that do exist are largely uncorrelated with the acoustic field. Instead, the forces that most efficiently excite an acoustic response appear to take the form of random turbulent fluctuations, implying that direct feedback from nonlinear interactions amongst wavepackets is not an essential noise source mechanism. This suggests that the essential ingredients of sound generation in high Reynolds number jets are contained within the linearized Navier-Stokes operator rather than in the nonlinear forcing terms, a conclusion that has important implications for jet noise modeling.
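Of the two decomposition techniques, proper orthogonal decomposition has a compact standard implementation: the POD modes of a snapshot matrix are its left singular vectors, ranked by energy via the squared singular values. A sketch on synthetic data (the "wavepacket-like" field below is made up for illustration; the empirical resolvent-mode decomposition is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64)
t = np.linspace(0.0, 2.0 * np.pi, 200)
# Synthetic snapshot matrix: one coherent, wavepacket-like structure
# buried in noise; each column is one "flow snapshot".
coherent = np.outer(np.sin(2 * np.pi * x), np.cos(5 * t))
snapshots = coherent + 0.05 * rng.standard_normal((64, 200))

# POD via the SVD: columns of U are spatial modes; squared singular
# values rank the modes by the energy they capture.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
```

A field dominated by coherent structures concentrates its energy in the leading modes; the thesis's observation is that the forcing field, unlike the flow and acoustic fields, shows no such concentration.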
Abstract:
Large plane deformations of thin elastic sheets of neo-Hookean material are considered and a method of successive substitutions is developed to solve problems within the two-dimensional theory of finite plane stress. The first approximation is determined by linear boundary value problems on two harmonic functions, and it is approached asymptotically at very large extensions in the plane of the sheet. The second and higher approximations are obtained by solving Poisson equations. The method requires modification when the membrane has a traction-free edge.
Several problems are treated involving infinite sheets under uniform biaxial stretching at infinity. First approximations are obtained when a circular or elliptic inclusion is present and when the sheet has a circular or elliptic hole, including the limiting cases of a line inclusion and a straight crack or slit. Good agreement with exact solutions is found for circularly symmetric deformations. Other examples discuss the stretching of a short wide strip, the deformation near a boundary corner which is traction-free, and the application of a concentrated load to a boundary point.
Abstract:
This investigation deals with certain generalizations of the classical uniqueness theorem for the second boundary-initial value problem in the linearized dynamical theory of not necessarily homogeneous nor isotropic elastic solids. First, the regularity assumptions underlying the foregoing theorem are relaxed by admitting stress fields with suitably restricted finite jump discontinuities. Such singularities are familiar from known solutions to dynamical elasticity problems involving discontinuous surface tractions or non-matching boundary and initial conditions. The proof of the appropriate uniqueness theorem given here rests on a generalization of the usual energy identity to the class of singular elastodynamic fields under consideration.
Following this extension of the conventional uniqueness theorem, we turn to a further relaxation of the customary smoothness hypotheses and allow the displacement field to be differentiable merely in a generalized sense, thereby admitting stress fields with square-integrable unbounded local singularities, such as those encountered in the presence of focusing of elastic waves. A statement of the traction problem applicable in these pathological circumstances necessitates the introduction of "weak solutions'' to the field equations that are accompanied by correspondingly weakened boundary and initial conditions. A uniqueness theorem pertaining to this weak formulation is then proved through an adaptation of an argument used by O. Ladyzhenskaya in connection with the first boundary-initial value problem for a second-order hyperbolic equation in a single dependent variable. Moreover, the second uniqueness theorem thus obtained contains, as a special case, a slight modification of the previously established uniqueness theorem covering solutions that exhibit only finite stress-discontinuities.
Abstract:
A general class of single degree of freedom systems possessing rate-independent hysteresis is defined. The hysteretic behavior in a system belonging to this class is depicted as a sequence of single-valued functions; at any given time, the current function is determined by some set of mathematical rules concerning the entire previous response of the system. Existence and uniqueness of solutions are established and boundedness of solutions is examined.
An asymptotic solution procedure is used to derive an approximation to the response of viscously damped systems with a small hysteretic nonlinearity and trigonometric excitation. Two properties of the hysteresis loops associated with any given system completely determine this approximation to the response: the area enclosed by each loop, and the average of the ascending and descending branches of each loop.
The approximation, supplemented by numerical calculations, is applied to investigate the steady-state response of a system with limited slip. Such features as disconnected response curves and jumps in response exist for a certain range of system parameters for any finite amount of slip.
To further understand the response of this system, solutions of the initial-value problem are examined. The boundedness of solutions is investigated first. Then the relationship between initial conditions and resulting steady-state solution is examined when multiple steady-state solutions exist. Using the approximate analysis and numerical calculations, it is found that significant regions of initial conditions in the initial condition plane lead to the different asymptotically stable steady-state solutions.
Abstract:
Starting from the paraxial wave equation, analytical propagation formulas are derived for narrowband and broadband hyperbolic-secant pulsed beams. Numerical calculations compare the pulsed beams obtained from the complex-amplitude-envelope (CAE) representation with those obtained from the complex-analytic-signal (CAS) representation, and conditions are obtained for choosing between the two approaches. The results show that solutions obtained from the CAE representation can exhibit spatial singularities, giving the beam unphysical, non-beam-like behavior. For broadband beams, the singular points of the CAE solution lie close to the beam axis, so the CAE solution cannot correctly represent the pulsed beam; for narrowband beams, whose singular points lie far from the axis, the effect on the pulsed beam is negligible. Therefore, narrowband pulsed beams can be studied with either the CAE or the CAS representation, whereas broadband pulsed beams require the CAS representation.
Abstract:
The objective of this work is the simulation of wave propagation in a heterogeneous elastic rod composed of two distinct materials (one linear and one nonlinear), each with its own wave propagation speed. At the interface between these materials there is a discontinuity, a stationary shock, due to the jump in physical properties. Employing a description in the reference configuration, a nonlinear hyperbolic system of partial differential equations, whose unknowns are the velocity and the strain, describes the dynamic response of the heterogeneous rod. The complete analytical solution of the associated Riemann problem is presented and discussed.
Abstract:
In this work we present the steps for using the dynamic programming method, or Bellman's principle of optimality, in optimal control applications. We investigate the notion of control Lyapunov functions (CLF) and their relation to the stability of autonomous systems with control. A control Lyapunov function must satisfy the Hamilton-Jacobi-Bellman (HJB) equation. Using this fact, if a control Lyapunov function is known, it is possible to determine the optimal feedback law; that is, the control law that renders the system globally asymptotically controllable to an equilibrium state. As an application, we present a mathematical model suited to an optimal control problem for certain biological systems. The work also includes a brief history of the development of control theory, illustrating the importance, progress, and application of control techniques in different areas over time.
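Bellman's principle can be sketched on a toy discrete problem: value iteration converges to the fixed point of the Bellman equation, and the optimal feedback law is read off as the minimizer of its right-hand side (the state space, stage cost, and discount below are illustrative, not the biological model of the work):

```python
# Toy discrete optimal control: drive the state toward 0 on the grid
# {-5, ..., 5} with stage cost x^2 + u^2 (all values illustrative).
states = range(-5, 6)
actions = (-1, 0, 1)
gamma = 0.9

def clip(s):
    return max(-5, min(5, s))

# Value iteration: repeatedly apply the Bellman operator until convergence.
V = {s: 0.0 for s in states}
for _ in range(200):
    V = {s: min(s * s + u * u + gamma * V[clip(s + u)] for u in actions)
         for s in states}

def feedback(s):
    # Optimal feedback law: the minimizer of the Bellman right-hand side.
    return min(actions, key=lambda u: s * s + u * u + gamma * V[clip(s + u)])
```

The resulting policy pushes every state toward the equilibrium at 0, the discrete analogue of global asymptotic controllability to an equilibrium state.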
Abstract:
In a wide range of physical problems governed by differential equations, one is often interested in solutions for the transient regime, and time-integration techniques must therefore be employed. A first possibility is to apply explicit methods, owing to their simplicity and computational efficiency. However, these methods are often only conditionally stable and are subject to severe restrictions on the choice of time-step. For advective problems, governed by hyperbolic equations, this restriction is known as the Courant-Friedrichs-Lewy (CFL) condition. When numerical solutions are needed over long time periods, or when the computational cost of each step is high, this condition becomes an obstacle. To circumvent this restriction, implicit methods, which are generally unconditionally stable, are used. In this work, implicit formulations for time integration were applied in the Smoothed Particle Hydrodynamics (SPH) method, enabling larger time increments and strong stability in the time-marching process. Owing to the high computational cost of the particle search at each time-step, this implementation is only viable if efficient algorithms for the matrix structure at hand are applied, such as Krylov-subspace methods. A study was therefore carried out to choose the methods best suited to this problem; those chosen were the Bi-Conjugate Gradient (BiCG), Bi-Conjugate Gradient Stabilized (BiCGSTAB), and Quasi-Minimal Residual (QMR) methods. Several test problems were used to validate the numerical solutions obtained with the implicit version of the SPH method.
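The stability trade-off at issue can be seen in a minimal setting: one backward-Euler step for 1-D diffusion requires a linear solve, but remains stable far beyond the explicit limit dt ≤ dx²/2. A pure-Python sketch using the Thomas algorithm (a direct tridiagonal solver; the work itself uses Krylov iterative methods on the SPH matrices):

```python
def thomas(a, b, c, d):
    # Direct solver for a tridiagonal system (a: sub-, b: main, c: super-diagonal).
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Backward Euler for u_t = u_xx with a time-step 50x the explicit limit.
nx = 50
dx = 1.0 / nx
dt = 0.01                        # explicit methods would need dt <= dx^2/2 = 2e-4
r = dt / dx**2
u = [1.0 if 20 <= i <= 30 else 0.0 for i in range(nx)]
for _ in range(100):
    # Solve (I + r*L) u_new = u_old, L the 1-D Laplacian stencil (-1, 2, -1).
    u = thomas([-r] * nx, [1 + 2 * r] * nx, [-r] * nx, u)
```

The solution decays smoothly and stays bounded despite the large time-step, which is the behavior that makes implicit time-marching attractive when each step is expensive.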
Abstract:
The secondary oil recovery process is commonly carried out by injecting water or gas into the reservoir in order to maintain the pressure needed for extraction. For the investment to be viable, the extraction costs must be lower than the financial return obtained from oil production. To study possible scenarios for the exploitation process, simulations of the extraction processes are usually employed. The equations that model this recovery process are hyperbolic and nonlinear; they can be interpreted as conservation laws, whose solutions are complex because of their discontinuous nature. These discontinuities, or jumps, are known as shock waves. This work presents a mathematical analysis of the phenomena arising from conservation laws, which is then used to understand the problem at hand. Weak solutions, which can be physically interpreted as shock or rarefaction waves, were studied; to single out the physically admissible ones, the entropy principle, in its various forms, was considered. Simulations were carried out for two-phase and three-phase flows, in which the fluids are immiscible and gravitational effects and the diffusive effects due to capillary pressure are neglected. First, a comparative study of numerical schemes for capturing shock waves in two-phase water-oil flow was carried out. Highlights of this study are the composite LWLF-k method, the NonStandard scheme, and the introduction of a new renormalization function for the NonStandard scheme, which produced satisfactory results, especially in regions where the oil viscosity is much greater than the water viscosity. For two-dimensional flow, a new method is proposed, starting from a generalization of the one-dimensional NonStandard scheme.
Finally, the LWLF-4 and NonStandard methods are adapted to simulate three-phase flows in one-dimensional domains. The NonStandard scheme was considered the most efficient for the problems addressed, since its two-dimensional version proved satisfactory in capturing shock waves in two-phase flows in porous media.
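The shock-capturing issue can be illustrated on the simplest scalar conservation law. Below, a Lax-Friedrichs step (a classical baseline, not the LWLF-k or NonStandard schemes studied in the work) applied to Burgers' equation propagates a Riemann discontinuity while conserving mass exactly:

```python
def lax_friedrichs_step(u, dt, dx, f):
    # One Lax-Friedrichs step for u_t + f(u)_x = 0 with periodic boundaries.
    n = len(u)
    return [0.5 * (u[(i + 1) % n] + u[i - 1])
            - dt / (2 * dx) * (f(u[(i + 1) % n]) - f(u[i - 1]))
            for i in range(n)]

flux = lambda v: 0.5 * v * v       # Burgers flux, a model conservation law
nx = 100
dx = 1.0 / nx
dt = 0.004                         # CFL: dt * max|u| / dx = 0.4 <= 1
u = [1.0 if i < nx // 2 else 0.0 for i in range(nx)]  # Riemann (shock) data
for _ in range(50):
    u = lax_friedrichs_step(u, dt, dx, flux)
# The jump travels at the Rankine-Hugoniot speed (f(1)-f(0))/(1-0) = 1/2,
# smeared over a few cells by the scheme's numerical diffusion.
```

The scheme's built-in numerical diffusion is what keeps the discrete solution stable across the discontinuity; sharper schemes, like those compared in the work, aim to reduce that smearing without producing inadmissible (entropy-violating) solutions.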