490 results for Hyperbolic waves
Abstract:
Direct numerical simulation of transitional flow over a blunt cone with a freestream Mach number of 6, a Reynolds number of 10,000 based on the nose radius, and a 1-deg angle of attack is performed using a seventh-order weighted essentially nonoscillatory scheme for the convection terms of the Navier-Stokes equations, together with an eighth-order central finite difference scheme for the viscous terms. Wall blow-and-suction perturbations, including random and multifrequency perturbations, are used to trigger the transition. The maximum amplitude of the wall-normal velocity disturbance is set to 1% of the freestream velocity. The transition locations obtained on the cone surface agree well with each other for both cases. Transition onset is located at about 500 nose radii in the leeward section and 750 nose radii in the windward section. The frequency spectra of velocity and pressure fluctuations at different streamwise locations are analyzed and compared with linear stability theory. The second-mode disturbance wave is deemed the dominant disturbance because its growth rate is much higher than that of the first mode. The reason transition occurs earlier in the leeward section than in the windward section is analyzed: it is not a higher local growth rate of the disturbance waves in the leeward section, but rather that the growth of the dominant second-mode wave starts much earlier there than in the windward section.
Abstract:
A numerical model for the shallow-water equations has been built and tested on the Yin-Yang overset spherical grid. A high-order multimoment finite-volume method is used for the spatial discretization, in which two kinds of so-called moments of the physical field [i.e., the volume-integrated average (VIA) and the point value (PV)] are treated as the model variables and updated separately in time. In the present model, the PV is computed by a semi-implicit semi-Lagrangian formulation, whereas the VIA is predicted in time via a flux-based finite-volume method and is numerically conserved on each component grid. The concept of including an extra moment (i.e., the volume-integrated value) to enforce numerical conservativeness provides a general methodology that applies to existing semi-implicit semi-Lagrangian formulations. Based on both the VIA and the PV, the high-order interpolation reconstruction can be done entirely within a single grid cell, which minimizes the overlapping zone between the Yin and Yang components and effectively reduces the numerical errors introduced by the interpolation required to communicate data between the two components. The present model completely avoids the polar singularity and the grid-convergence problem of the conventional longitude-latitude grid. Although it remains an issue demanding further investigation, the high-order interpolation across the overlapping region of the Yin-Yang grid does not rigorously guarantee numerical conservativeness; nevertheless, the numerical tests show that the global conservation error in the present model is negligibly small. The model has competitive accuracy and efficiency.
Abstract:
By characteristic analysis of the linear and nonlinear parabolic stability equations (PSE), the PSE in primitive disturbance variables are proved to be parabolic in total. By sub-characteristic analysis, the linear PSE are proved to be elliptic for subsonic velocity U and hyperbolic-parabolic for supersonic U; likewise, the nonlinear PSE are proved to be elliptic for subsonic velocity U + u and hyperbolic-parabolic for supersonic U + u. Methods for removing the residual ellipticity from the PSE are obtained from the characteristic and sub-characteristic theories; the results for the linear PSE are consistent with known results, and the influence of the Mach number is also given. At the same time, methods for removing the residual ellipticity are further obtained for the nonlinear PSE.
Abstract:
We discuss the transversal heteroclinic cycle formed by hyperbolic periodic points of a diffeomorphism on a differentiable manifold. We point out that every possible kind of transversal heteroclinic cycle has the Smale horseshoe property and that the unstable manifolds of the hyperbolic periodic points are mutually related through their closures. Therefore the strange attractor may be the closure of the unstable manifolds of a countable number of hyperbolic periodic points. The Hénon map is used as an example to show that this conclusion is reasonable.
Abstract:
The chaotic strange attractor is an important subject in nonlinear science. On the basis of the theory of transversal heteroclinic cycles, it is suggested that the strange attractor is the closure of the unstable manifolds of countably infinite hyperbolic periodic points. From this point of view, some nonlinear phenomena can be explained reasonably.
Abstract:
Examined in this work are the anti-plane stress and strain near a crack in a material that softens beyond the elastic peak and unloads on a linear path through the initial state. The discontinuity in the constitutive relation is carried into the analysis such that one portion of the local solution is elliptic in character and the other hyperbolic. Material elements in one region may cross over to another as the loading is increased, so local unloading can prevail. Presented is the inhomogeneous character of the asymptotic stress and strain in the elliptic and hyperbolic regions, as well as in the region in which the material elements have experienced unloading. No single stress or strain coefficient is adequate for describing crack instability.
Abstract:
Organismal survival in marine habitats is often positively correlated with habitat structural complexity at local (within-patch) spatial scales. Far less is known, however, about how marine habitat structure at the landscape scale influences predation and other ecological processes, and in particular, how these processes are dictated by the interactive effect of habitat structure at local and landscape scales. The relationship between survival and habitat structure can be modeled with the habitat-survival function (HSF), which often takes on linear, hyperbolic, or sigmoid forms. We used tethering experiments to determine how seagrass landscape structure influenced the HSF for juvenile blue crabs Callinectes sapidus Rathbun in Back Sound, North Carolina, USA. Crabs were tethered in artificial seagrass plots of 7 different shoot densities embedded within small (1 – 3 m2) or large (>100 m2) seagrass patches (October 1999), and within 10 × 10 m landscapes containing patchy (<50% cover) or continuous (>90% cover) seagrass (July 2000). Overall, crab survival was higher in small than in large patches, and was higher in patchy than in continuous seagrass. The HSF was hyperbolic in large patches and in continuous seagrass, indicating that at low levels of habitat structure, relatively small increases in structure resulted in substantial increases in juvenile blue crab survival. However, the HSF was linear in small seagrass patches in 1999 and was parabolic in patchy seagrass in 2000. A sigmoid HSF, in which a threshold level of seagrass structure is required for crab survival, was never observed. Patchy seagrass landscapes are valuable refuges for juvenile blue crabs, and the effects of seagrass structural complexity on crab survival can only be fully understood when habitat structure at larger scales is considered.
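The three HSF shapes named above (linear, hyperbolic, sigmoid) can be sketched as simple functions of habitat structure. The functional forms and parameter values below are generic illustrations only, not the curves fitted in this study:

```python
import math

# Hypothetical habitat-survival functions (HSFs) relating survival
# probability S to a habitat-structure variable x (e.g., shoot density).
# All parameter values are illustrative, not fitted to the study data.

def hsf_linear(x, a=0.01):
    """Survival rises in proportion to structure, capped at 1."""
    return min(1.0, a * x)

def hsf_hyperbolic(x, k=20.0):
    """Steep gains at low structure that saturate at high structure."""
    return x / (x + k)

def hsf_sigmoid(x, k=50.0, s=0.1):
    """A threshold level of structure is needed before survival rises."""
    return 1.0 / (1.0 + math.exp(-s * (x - k)))
```

The hyperbolic form captures the result reported above: at low structure, a small increase in x produces a large gain in survival, while further increases matter progressively less.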
Abstract:
A new high-order finite volume method based on local reconstruction is presented in this paper. The method, called the multi-moment constrained finite volume (MCV) method, uses point values defined at equally spaced points within a single cell as the model variables (or unknowns). The time evolution equations used to update the unknowns are derived from a set of constraint conditions imposed on multiple kinds of moments, i.e., the cell-averaged value and the point-wise values of the state variable and its derivatives. The finite volume constraint on the cell average guarantees the numerical conservativeness of the method. Most constraint conditions are imposed on the cell boundaries, where the numerical flux and its derivatives are solved as generalized Riemann problems. A multi-moment constrained Lagrange interpolation reconstruction of the required order of accuracy is constructed over a single cell and converts the evolution equations of the moments to those of the unknowns. The presented method provides a general framework for constructing efficient high-order schemes. The basic formulations for hyperbolic conservation laws on 1D and 2D structured grids are detailed, with numerical results for widely used benchmark tests. (C) 2009 Elsevier Inc. All rights reserved.
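The conservation property that the cell-average constraint guarantees can be seen in a much simpler flux-based update. The sketch below is a first-order upwind scheme for 1D linear advection, included only to show the telescoping-flux mechanism (each interface flux is added to one cell and subtracted from its neighbour, so the total integral cannot change); it is not the MCV reconstruction itself:

```python
# First-order upwind finite-volume step for u_t + a u_x = 0 with periodic
# boundaries (a > 0 assumed). Illustrative sketch of flux-form conservation,
# not the high-order MCV scheme described in the abstract.

def upwind_step(u, a, dt, dx):
    n = len(u)
    # numerical flux at the left face of cell i
    flux = [a * u[(i - 1) % n] for i in range(n)]
    # cell-average update: interface fluxes telescope, conserving sum(u)*dx
    return [u[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]

u = [1.0 if 2 <= i < 5 else 0.0 for i in range(10)]
for _ in range(10):
    u = upwind_step(u, a=1.0, dt=0.05, dx=0.1)   # CFL number 0.5
# the total "mass" sum(u)*dx is unchanged up to round-off
```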
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there are a plethora of models, based on different assumptions, applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that surprisingly these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
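The sequential belief update that drives such adaptive designs can be sketched independently of the EC2 selection rule. The theories, their toy predictions, and the error rate below are illustrative assumptions, not the models or data from the thesis:

```python
# Minimal Bayesian update over candidate theories after one binary choice.
# Each theory predicts which option (0 or 1) the subject picks; responses
# are noisy, flipping with probability eps. All values here are toy inputs.

def update_posterior(prior, predictions, observed, eps=0.1):
    """prior: dict theory -> probability; predictions: dict theory -> 0/1."""
    post = {}
    for theory, p in prior.items():
        like = (1 - eps) if predictions[theory] == observed else eps
        post[theory] = p * like
    z = sum(post.values())
    return {t: v / z for t, v in post.items()}

prior = {"EV": 1 / 3, "PT": 1 / 3, "CRRA": 1 / 3}
# on this hypothetical test, prospect theory predicts option 1, the others 0
preds = {"EV": 0, "PT": 1, "CRRA": 0}
posterior = update_posterior(prior, preds, observed=1)
# the theory consistent with the choice gains mass: P(PT) = 0.9/1.1 ~ 0.818
```

An adaptive design then picks the next test to maximize how much the resulting posterior separates the surviving theories.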
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the chosen lotteries. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice and since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
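The contrast between exponential and hyperbolic discounting, and the temporal choice inconsistency the latter produces, can be shown in a few lines. The discount functions below are the standard textbook forms, and the rewards and delays are arbitrary illustrations:

```python
# Standard one-parameter discount functions; parameter values are arbitrary.

def exp_discount(t, r=0.1):
    """Exponential discounting: constant per-period discount rate."""
    return (1 + r) ** (-t)

def hyp_discount(t, k=1.0):
    """Hyperbolic discounting: discount rate declines with delay."""
    return 1.0 / (1.0 + k * t)

small, large = 10.0, 15.0
# far future: $10 at t=10 vs $15 at t=11
far = (small * hyp_discount(10), large * hyp_discount(11))
# near future: $10 now vs $15 at t=1
near = (small * hyp_discount(0), large * hyp_discount(1))
# a hyperbolic discounter prefers the larger-later reward when both are far off...
assert far[1] > far[0]
# ...but reverses to the smaller-sooner reward when it becomes immediate
assert near[0] > near[1]
```

Under exponential discounting the same shift in time leaves the preference unchanged, which is why present-bias and hyperbolic models are needed to capture such reversals.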
We also test the predictions of behavioral theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone would explain; even more importantly, when the item is no longer discounted, demand for a close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data and encourage combined lab-field experiments.
Abstract:
We report the formulation of an ABCD matrix for the reflection and refraction of Gaussian light beams at the surface of a parabola of revolution that separates media of different refractive indices, based on optical phase matching. Equations for the spot sizes and wave-front radii of the beams are also obtained using the ABCD matrix. With these matrices, one can more conveniently design and evaluate special optical systems that include these kinds of elements. (c) 2005 Optical Society of America
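The paper's matrices for the parabolic interface are specific to its derivation, so the sketch below only illustrates the general ABCD rule such matrices plug into: the complex beam parameter q of a Gaussian beam transforms as q' = (Aq + B)/(Cq + D). The free-space matrix is the standard textbook one, and the wavelength and waist values are arbitrary:

```python
import math

def propagate_q(q, abcd):
    """Apply a 2x2 ABCD ray matrix to the complex beam parameter q."""
    (a, b), (c, d) = abcd
    return (a * q + b) / (c * q + d)

lam = 633e-9                      # wavelength [m], illustrative
w0 = 1e-3                         # waist radius [m], illustrative
zr = math.pi * w0 ** 2 / lam      # Rayleigh range
q0 = 1j * zr                      # q at the waist: 1/q = 1/R - i*lam/(pi w^2), R -> inf

free_space = ((1.0, 0.5), (0.0, 1.0))   # standard matrix for d = 0.5 m of free space
q1 = propagate_q(q0, free_space)        # q simply advances: q1 = q0 + 0.5
```

The spot size and wave-front radius at any plane then follow from the real and imaginary parts of 1/q, which is how the ABCD matrices in the abstract yield the beam equations.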
Abstract:
Jet noise reduction is an important goal within both commercial and military aviation. Although large-scale numerical simulations are now able to simultaneously compute turbulent jets and their radiated sound, low-cost, physically motivated models are needed to guide noise-reduction efforts. A particularly promising modeling approach centers around certain large-scale coherent structures, called wavepackets, that are observed in jets and their radiated sound. The typical approach to modeling wavepackets is to approximate them as linear modal solutions of the Euler or Navier-Stokes equations linearized about the long-time mean of the turbulent flow field. The near-field wavepackets obtained from these models show compelling agreement with those educed from experimental and simulation data for both subsonic and supersonic jets, but the acoustic radiation is severely under-predicted in the subsonic case. This thesis contributes to two aspects of these models. First, two new solution methods are developed that can be used to efficiently compute wavepackets and their acoustic radiation, reducing the computational cost of the model by more than an order of magnitude. The new techniques are spatial integration methods and constitute a well-posed, convergent alternative to the frequently used parabolized stability equations. Using concepts related to well-posed boundary conditions, the methods are formulated for general hyperbolic equations and thus have potential applications in many fields of physics and engineering. Second, the nonlinear and stochastic forcing of wavepackets is investigated with the goal of identifying and characterizing the missing dynamics responsible for the under-prediction of acoustic radiation by linear wavepacket models for subsonic jets.
Specifically, we use ensembles of large-eddy-simulation flow and force data along with two data decomposition techniques to educe the actual nonlinear forcing experienced by wavepackets in a Mach 0.9 turbulent jet. Modes with high energy are extracted using proper orthogonal decomposition, while high gain modes are identified using a novel technique called empirical resolvent-mode decomposition. In contrast to the flow and acoustic fields, the forcing field is characterized by a lack of energetic coherent structures. Furthermore, the structures that do exist are largely uncorrelated with the acoustic field. Instead, the forces that most efficiently excite an acoustic response appear to take the form of random turbulent fluctuations, implying that direct feedback from nonlinear interactions amongst wavepackets is not an essential noise source mechanism. This suggests that the essential ingredients of sound generation in high Reynolds number jets are contained within the linearized Navier-Stokes operator rather than in the nonlinear forcing terms, a conclusion that has important implications for jet noise modeling.
Abstract:
This investigation deals with certain generalizations of the classical uniqueness theorem for the second boundary-initial value problem in the linearized dynamical theory of not necessarily homogeneous nor isotropic elastic solids. First, the regularity assumptions underlying the foregoing theorem are relaxed by admitting stress fields with suitably restricted finite jump discontinuities. Such singularities are familiar from known solutions to dynamical elasticity problems involving discontinuous surface tractions or non-matching boundary and initial conditions. The proof of the appropriate uniqueness theorem given here rests on a generalization of the usual energy identity to the class of singular elastodynamic fields under consideration.
Following this extension of the conventional uniqueness theorem, we turn to a further relaxation of the customary smoothness hypotheses and allow the displacement field to be differentiable merely in a generalized sense, thereby admitting stress fields with square-integrable unbounded local singularities, such as those encountered in the presence of focusing of elastic waves. A statement of the traction problem applicable in these pathological circumstances necessitates the introduction of "weak solutions'' to the field equations that are accompanied by correspondingly weakened boundary and initial conditions. A uniqueness theorem pertaining to this weak formulation is then proved through an adaptation of an argument used by O. Ladyzhenskaya in connection with the first boundary-initial value problem for a second-order hyperbolic equation in a single dependent variable. Moreover, the second uniqueness theorem thus obtained contains, as a special case, a slight modification of the previously established uniqueness theorem covering solutions that exhibit only finite stress-discontinuities.
Abstract:
Starting from the paraxial wave equation, analytical propagation formulas for narrowband and broadband hyperbolic-secant pulsed beams are derived. Pulsed beams obtained with the complex amplitude envelope (CAE) representation and with the complex analytic signal (CAS) representation are compared by numerical computation, and conditions for choosing between the two approaches to studying pulsed beams are obtained. The results show that the CAE solution can exhibit spatial singularities, causing the beam to display unphysical non-beam behaviour. For broadband beams, the singular points of the CAE solution lie close to the beam axis, so the CAE solution cannot correctly represent the pulsed beam; for narrowband beams, whose singular points lie far from the axis, the influence on the pulsed beam is negligible. Narrowband pulsed beams can therefore be studied with either the CAE or the CAS representation, whereas…
Abstract:
The objective of this work is the simulation of wave propagation in a heterogeneous elastic rod composed of two distinct materials (one linear and one nonlinear), each with its own wave propagation speed. At the interface between these materials there is a discontinuity, a stationary shock, due to the jump in physical properties. Using an approach in the reference configuration, a nonlinear hyperbolic system of partial differential equations, whose unknowns are the velocity and the strain, describes the dynamic response of the heterogeneous rod. The complete analytical solution of the associated Riemann problem is presented and discussed.
Abstract:
In a wide range of physical problems governed by differential equations, it is often of interest to obtain solutions for the transient regime, and time-integration techniques must therefore be employed. A first possibility is to apply explicit methods, owing to their simplicity and computational efficiency. These methods, however, are frequently only conditionally stable and are subject to severe restrictions on the choice of time step. For advective problems governed by hyperbolic equations, this restriction is known as the Courant-Friedrichs-Lewy (CFL) condition. When numerical solutions are needed over long time intervals, or when the computational cost of each step is high, this condition becomes an obstacle. To circumvent this restriction, implicit methods, which are generally unconditionally stable, are used. In this work, several implicit formulations for time integration were applied to the Smoothed Particle Hydrodynamics (SPH) method, allowing larger time increments and strong stability in the time-marching process. Owing to the high computational cost of the particle search at each time step, this implementation is only viable if efficient algorithms are applied to the type of matrix structure involved, such as Krylov-subspace methods. A study was therefore carried out to select the methods best suited to this problem, the chosen ones being the Bi-Conjugate Gradient (BiCG), Bi-Conjugate Gradient Stabilized (BiCGSTAB), and Quasi-Minimal Residual (QMR) methods. Several test problems were used to validate the numerical solutions obtained with the implicit version of the SPH method.
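The two time-step regimes contrasted above can be sketched in a few lines. For an explicit scheme on an advection problem, the CFL condition caps the step at dt <= C·dx/|u|; an implicit (backward) step instead requires solving a linear system each step. In the sketch below a tiny backward-Euler diffusion step is solved with the Thomas algorithm, standing in for the Krylov solvers (BiCG, BiCGSTAB, QMR) the work applies to SPH matrices; all values are illustrative:

```python
def cfl_dt(dx, speed, courant=0.9):
    """Largest stable explicit step under the CFL condition dt <= C*dx/|u|."""
    return courant * dx / abs(speed)

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system; sub[0] and sup[-1] are unused padding."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = (sup[i] if i < n - 1 else 0.0) / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# explicit regime: the step is capped by the grid spacing
dt = cfl_dt(dx=0.01, speed=2.0)          # 0.0045

# implicit regime: backward Euler for u_t = nu*u_xx gives
# (I + r*A) u_new = u_old with r = nu*dt/dx^2, here deliberately huge
n, r = 5, 100.0
u_old = [0.0, 0.0, 1.0, 0.0, 0.0]
sub = [0.0] + [-r] * (n - 1)
diag = [1.0 + 2.0 * r] * n
sup = [-r] * (n - 1) + [0.0]
u_new = thomas(sub, diag, sup, u_old)
# the implicit step succeeds for any dt; stability no longer limits the step
```

For the large, less structured matrices arising from SPH neighbour lists, a direct tridiagonal solve is not available, which is precisely why iterative Krylov methods are the natural choice there.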