869 results for Hyperbolic Boundary-Value Problem


Relevance:

30.00%

Publisher:

Abstract:

The effects of complex boundary conditions on flows are represented by a volume force in immersed boundary methods. The problem with this representation is that the volume force exhibits non-physical oscillations in moving boundary simulations. A smoothing technique for discrete delta functions is developed in this paper to suppress these non-physical oscillations in the volume force. We have found that the oscillations arise mainly because the derivatives of the regular discrete delta functions do not satisfy certain moment conditions. It is shown that the smoothed discrete delta functions constructed in this paper are one order more differentiable than the regular ones. Moreover, not only do the smoothed discrete delta functions satisfy the first two discrete moment conditions, but their derivatives also satisfy a moment condition one order higher than those of the regular ones. The smoothed discrete delta functions are tested in three cases: a one-dimensional heat equation with a moving singular force, two-dimensional flow past an oscillating cylinder, and the vortex-induced vibration of a cylinder. These numerical examples demonstrate that the smoothed discrete delta functions can effectively suppress the non-physical oscillations in the volume forces and improve the accuracy of the immersed boundary method with direct forcing in moving boundary simulations.
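
As a point of reference only (this is not the paper's smoothed construction), the sketch below numerically evaluates the discrete moment conditions discussed above for a commonly used regular kernel, the 4-point cosine delta function, and for its derivative, over a range of offsets between the singular point and the grid; the kernel formula is standard, while the helper names are ours.

```python
import numpy as np

def cosine_delta(r):
    """Regular 4-point cosine kernel phi(r), grid spacing h = 1."""
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= 2.0, 0.25 * (1.0 + np.cos(np.pi * r / 2.0)), 0.0)

def cosine_delta_deriv(r):
    """Analytical derivative of the 4-point cosine kernel."""
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= 2.0, -np.pi / 8.0 * np.sin(np.pi * r / 2.0), 0.0)

def discrete_moment(kernel, shift, order, support=3):
    """order-th discrete moment: sum_j (j - shift)^order * kernel(j - shift)."""
    j = np.arange(-support, support + 1)
    return np.sum((j - shift) ** order * kernel(j - shift))

if __name__ == "__main__":
    # Sweep the offset of the singular point relative to the grid and print the
    # zeroth/first moments of the kernel and of its derivative.
    for s in np.linspace(0.0, 1.0, 5):
        m0 = discrete_moment(cosine_delta, s, 0)
        m1 = discrete_moment(cosine_delta, s, 1)
        d0 = discrete_moment(cosine_delta_deriv, s, 0)
        d1 = discrete_moment(cosine_delta_deriv, s, 1)
        print(f"shift={s:.2f}  m0={m0:+.4f}  m1={m1:+.4f}  d/dx m0={d0:+.4f}  d/dx m1={d1:+.4f}")
```

Offset-dependence in the printed moments is exactly the kind of behaviour that the smoothing technique described in the abstract is designed to remove.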

Relevance:

30.00%

Publisher:

Abstract:

Consider a sphere immersed in a rarefied monatomic gas with zero mean flow. The distribution function of the molecules at infinity is chosen to be a Maxwellian. The boundary condition at the body is diffuse reflection with perfect accommodation to the surface temperature. The microscopic flow of particles about the sphere is modeled kinetically by the Boltzmann equation with the Krook collision term. Appropriate normalizations in the near and far fields lead to a perturbation solution of the problem, expanded in terms of the ratio of body diameter to mean free path (inverse Knudsen number). The distribution function is found directly in each region, and intermediate matching is demonstrated. The heat transfer from the sphere is then calculated as an integral over this distribution function in the inner region. Final results indicate that the heat transfer may at first increase over its free flow value before falling to the continuum level.
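
For reference (a standard textbook form, not quoted from the paper), the Krook model replaces the full Boltzmann collision integral by relaxation toward a local Maxwellian:

```latex
\[
\mathbf{v}\cdot\nabla_{\mathbf{x}} f
   = \nu\left(f_{M}[\rho,\mathbf{u},T] - f\right),
\qquad
f_{M} = \frac{\rho}{(2\pi R T)^{3/2}}
        \exp\!\left(-\frac{|\mathbf{v}-\mathbf{u}|^{2}}{2RT}\right),
\]
```

where the collision frequency ν sets the mean free path and the local density, bulk velocity and temperature are themselves moments of f.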

Relevance:

30.00%

Publisher:

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices; theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
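
The following is a minimal, noiseless sketch of the equivalence-class edge-cutting idea described above (hypotheses are grouped into classes such as the theory they instantiate, edges join hypotheses in different classes with weight equal to the product of their priors, and a test is scored by the expected weight of edges it cuts); all names are illustrative and this is not the BROAD implementation.

```python
import itertools
from collections import defaultdict

def expected_cut_weight(test, hypotheses, prior, predict, classes):
    """Expected weight of equivalence-class edges cut by running `test`.

    hypotheses : list of hypothesis ids
    prior      : dict hypothesis -> prior probability
    predict    : callable (hypothesis, test) -> predicted (noiseless) outcome
    classes    : dict hypothesis -> equivalence class (e.g. the theory it belongs to)
    """
    # Edges join hypotheses in *different* classes; weight = product of priors.
    edges = [(h1, h2, prior[h1] * prior[h2])
             for h1, h2 in itertools.combinations(hypotheses, 2)
             if classes[h1] != classes[h2]]

    # Group hypotheses by the outcome they predict for this test.
    by_outcome = defaultdict(set)
    for h in hypotheses:
        by_outcome[predict(h, test)].add(h)

    # Probability of each outcome under the prior (noiseless-response assumption).
    p_outcome = {o: sum(prior[h] for h in hs) for o, hs in by_outcome.items()}
    z = sum(p_outcome.values())

    gain = 0.0
    for o, hs in by_outcome.items():
        # An edge is cut if at least one endpoint is inconsistent with outcome o.
        cut = sum(w for h1, h2, w in edges if h1 not in hs or h2 not in hs)
        gain += (p_outcome[o] / z) * cut
    return gain

def greedy_select(tests, hypotheses, prior, predict, classes):
    """Pick the single most informative test under the EC2-style objective."""
    return max(tests, key=lambda t: expected_cut_weight(t, hypotheses, prior, predict, classes))
```

In the adaptive setting this selection is repeated after each observed response, with inconsistent hypotheses removed and the priors renormalized.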

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries they chose. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we find no signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
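
For reference, the functional forms usually associated with these families, in one common parameterization (the β–δ notation for quasi-hyperbolic discounting; the thesis's own notation may differ), are:

```latex
\[
D_{\mathrm{exp}}(t) = \delta^{\,t},
\qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t},
\qquad
D_{\beta\delta}(t) =
  \begin{cases} 1, & t = 0,\\ \beta\,\delta^{\,t}, & t > 0,\end{cases}
\qquad
D_{\mathrm{gen}}(t) = (1 + \alpha t)^{-\beta/\alpha}.
\]
```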

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from those the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than its price elasticity alone would explain. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase disproportionately. We tested this prediction using a discrete-choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
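
One common way to embed loss aversion in a logit discrete-choice demand model, shown here purely as an illustrative form rather than the specification estimated in the study, is a reference-dependent price term:

```latex
\[
U_{ij} = \beta^{\top} x_{ij} - \alpha\,p_{j}
         + \eta\,\max(r_{ij} - p_{j},\,0)
         - \eta\lambda\,\max(p_{j} - r_{ij},\,0)
         + \varepsilon_{ij},
\qquad \lambda > 1,
\]
```

where r_ij is consumer i's reference price for item j and λ > 1 weights price increases above the reference more heavily than equivalent decreases, producing the asymmetric demand response around the end of a discount.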

In future work, BROAD can be applied widely to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete-choice models also provide a framework for testing behavioural models with field data and encourage combined lab-field experiments.

Relevance:

30.00%

Publisher:

Abstract:

The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a given cost functional. Under the assumption of a quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is second order and nonlinear, and examples exist where the problem has no solution in the classical sense. Furthermore, each state of the system adds another dimension to the PDE, giving rise to the curse of dimensionality: since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems of all but modest dimension.

In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
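
A standard form of this transformation, following the linearly-solvable control literature (generic notation, not necessarily the thesis's): with dynamics dx = (f(x) + G(x)u) dt + B(x) dω, cost rate q(x) + ½uᵀRu, and the structural assumption λ G R⁻¹ Gᵀ = B Bᵀ ≡ Σ, the substitution V = −λ log Ψ turns the first-exit HJB into an equation that is linear in the desirability Ψ:

```latex
\[
0 = -\frac{q(x)}{\lambda}\,\Psi
    + f(x)^{\top}\nabla\Psi
    + \tfrac{1}{2}\,\operatorname{tr}\!\left(\Sigma(x)\,\nabla^{2}\Psi\right),
\qquad
u^{*}(x) = \lambda\,R^{-1}G(x)^{\top}\,\frac{\nabla\Psi(x)}{\Psi(x)}.
\]
```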

This is done by bringing together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for the synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then applied to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with an improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. These results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing such problems to be solved via parallelization and low-order polynomials.
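
The SOS machinery underneath, sketched generically: to certify that a polynomial inequality p(x) ≥ 0 holds on a semialgebraic set K = {x : g_i(x) ≥ 0}, one searches for sum-of-squares multipliers

```latex
\[
p(x) = \sigma_{0}(x) + \sum_{i} \sigma_{i}(x)\,g_{i}(x),
\qquad
\sigma_{i}(x) = z(x)^{\top} Q_{i}\, z(x),\quad Q_{i} \succeq 0,
\]
```

where z(x) is a vector of monomials; each constraint Q_i ⪰ 0 is semidefinite, so fixing the multiplier degrees gives one level of the hierarchy, and applying such certificates to the residual of the linear HJB (in inequality form) is what yields the certified over- and under-approximations mentioned above.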

The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. This technique allows systems of equations to be solved through a low-rank decomposition, resulting in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows previously uncomputable problems to be solved quickly, scaling to systems as complex as quadcopter and VTOL aircraft models. The technique may be combined with the SOS approach, yielding not only a numerical method but also an analytical one that allows entirely new classes of systems to be studied and stability properties to be guaranteed.
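
The separated (canonical low-rank) representation referred to here approximates a d-dimensional field by a short sum of products of one-dimensional factors,

```latex
\[
\Psi(x_{1},\dots,x_{d}) \;\approx\; \sum_{l=1}^{r} s_{l} \prod_{i=1}^{d} \psi_{i}^{\,l}(x_{i}),
\]
```

so that, with n grid points or basis functions per dimension, storage and per-iteration work scale like O(r d n) rather than the O(n^d) of a full tensor-product discretization.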

The analysis of the linear HJB is completed by a study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit at opposite ends of a spectrum of optimization problems along which tradeoffs in problem complexity may be made. Analytical solutions to the HJB in these settings are available in simplified domains, providing guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.

Relevance:

30.00%

Publisher:

Abstract:

The important features of the two-dimensional incompressible turbulent flow over a wavy surface of wavelength comparable with the boundary layer thickness are analyzed.

A turbulent field method, using a model equation for the turbulent shear stress similar to the scheme of Bradshaw, Ferriss and Atwell (1967), is employed with suitable modification to cover the viscous sublayer. The governing differential equations are linearized on the basis of the small but finite amplitude-to-wavelength ratio. An orthogonal wavy coordinate system, accurate to second order in the amplitude ratio, is adopted to avoid the severe restriction on the validity of the linearization caused by the large mean velocity gradient near the wall. An analytic solution up to second order is obtained using the method of matched asymptotic expansions, based on the large Reynolds number and hence the small skin friction coefficient.

In the outer part of the layer, the perturbed flow is practically "inviscid." Solutions for the velocity, Reynolds stress and wall pressure distributions agree well with experimental measurements. In the wall region, where the perturbed Reynolds stress plays an important role in the momentum transport, only qualitative agreement is obtained. The results also show that the nonlinear second-order effect is negligible for an amplitude ratio of 0.03. The discrepancies in the detailed structure of the velocity, shear stress and skin friction distributions near the wall suggest that modifications to the model are required to describe the present problem.

Relevance:

30.00%

Publisher:

Abstract:

Three different categories of flow problems of a fluid containing small particles are considered here: (i) a fluid containing small, non-reacting particles (Parts I and II); (ii) a fluid containing reacting particles (Parts III and IV); and (iii) a fluid containing particles of two distinct sizes with collisions between the two groups of particles (Part V).

Part I

A numerical solution is obtained for a fluid containing small particles flowing over an infinite disc rotating at constant angular velocity. The flow is of boundary layer type, and the boundary layer thickness for the mixture is estimated. For large Reynolds number, the solution suggests the boundary layer approximation of a fluid-particle mixture obtained by assuming W = Wp; the error introduced is consistent with Prandtl's boundary layer approximation. Outside the boundary layer, the flow field has to satisfy the "inviscid equation", in which the viscous stress terms are absent while the drag force between the particle cloud and the fluid is still important. Increasing the particle concentration reduces the boundary layer thickness and the amount of mixture transported outward. A new parameter, β = 1/(Ω τv), is introduced, which is also proportional to μ. The secondary flow of the particle cloud depends strongly on β. For small values of β, the particle cloud velocity attains its maximum value on the surface of the disc, and for infinitely large values of β, both the radial and axial particle velocity components vanish on the surface of the disc.

Part II

The "inviscid" equation for a gas-particle mixture is linearized to describe the flow over a wavy wall. Corresponding to the Prandtl-Glauert equation for a pure gas, a fourth-order partial differential equation in terms of the velocity potential ϕ is obtained for the mixture. The solution is obtained for flow over a periodic wavy wall. For equilibrium flows, where λv and λT approach zero, and frozen flows, in which λv and λT become infinitely large, the flow problem is essentially similar to that obtained by Ackeret for a pure gas. For finite values of λv and λT, all quantities except v are out of phase with the wavy wall. Thus a drag coefficient CD is present even in the subsonic case and, similarly, all quantities decay exponentially for supersonic flows. The phase shift and the attenuation factor increase with increasing particle concentration.
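
For comparison, the pure-gas limit referred to here is the linearized potential equation; in two dimensions, for subsonic free-stream Mach number M∞,

```latex
\[
\left(1 - M_{\infty}^{2}\right)\frac{\partial^{2}\phi}{\partial x^{2}}
  + \frac{\partial^{2}\phi}{\partial y^{2}} = 0,
\]
```

whereas for the particle-laden mixture the corresponding equation is fourth order in ϕ, with coefficients involving the relaxation lengths λv and λT.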

Part III

Using the boundary layer approximation, the initial development of the combustion zone in the laminar mixing of two parallel streams, one of oxidizing agent and one of small, solid, combustible particles suspended in an inert gas, is investigated. For the special case in which the two streams move at the same speed, a Green's function exists for the differential equations describing the first-order gas temperature and oxidizer concentration. Solutions in terms of error functions and exponential integrals are obtained. Reactions occur within a relatively thin region of the order of λD. It therefore seems advantageous, in the general study of two-dimensional laminar flame problems, to introduce a chemical boundary layer of thickness λD within which reactions take place; outside this chemical boundary layer, the flow field corresponds to ordinary fluid dynamics without chemical reaction.

Part IV

The shock wave structure in a condensing medium of small liquid droplets suspended in a homogeneous gas-vapor mixture consists of the conventional compressive wave followed by a relaxation region in which the particle cloud and the gas mixture attain momentum and thermal equilibrium. Immediately behind the compressive wave, the partial pressure corresponding to the vapor concentration in the gas mixture is higher than the vapor pressure of the liquid droplets, and condensation sets in. Farther downstream of the shock, evaporation appears as the particle temperature is raised by the hot surrounding gas mixture. The thickness of the condensation region depends strongly on the latent heat: for relatively high latent heat, the condensation zone is small compared with λD.

For solid particles suspended initially in an inert gas, the relaxation zone immediately following the compression wave consists of a region where the particle temperature is first raised to the melting point. Once the particles are completely melted and the particle temperature increases further, evaporation of the particles also plays a role.

The equilibrium condition downstream of the shock can be calculated and is independent of the model of the particle-gas mixture interaction.

Part V

For a gas containing particles of two distinct sizes and satisfying certain conditions, momentum transfer due to collisions between the two groups of particles can be taken into account using the classical elastic spherical-ball model. Both in the relatively simple problem of a normal shock wave and in the perturbation solutions for nozzle flow, the transfer of momentum due to collisions, which decreases the velocity difference between the two groups of particles, is clearly demonstrated. The difference in temperature compared with the collisionless case is quite negligible.

Relevance:

30.00%

Publisher:

Abstract:

The present work deals with the interaction of electromagnetic radiation with a statistical distribution of nonmagnetic dielectric particles immersed in an infinite, homogeneous, isotropic, non-magnetic medium. The wavelength of the incident radiation can be less than, equal to, or greater than the linear dimension of a particle. The distance between any two particles is several wavelengths. A single particle in the absence of the others is assumed to scatter like a Rayleigh-Gans particle, i.e. interaction between the volume elements (self-interaction) is neglected. The interaction of the particles is taken into account (multiple scattering), and conditions are set up for the case of a lossless medium which guarantee that the multiple-scattering contribution is more important than the self-interaction one. These conditions relate the wavelength λ, the linear dimension a of a particle, and the linear dimension D of the region occupied by the particles. It is found that for constant λ/a, D is proportional to λ, and that |Δχ|, where Δχ is the difference in dielectric susceptibility between particle and medium, has to lie within a certain range.

The total scattered field is obtained as a series whose terms represent the corresponding multiple-scattering orders, the first term being the single-scattering term. The ensemble average of the total scattered intensity is then obtained as a series that involves no terms due to products between terms of different orders. Thus the waves corresponding to different orders are independent and their Stokes parameters add.
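
Writing S = (I, Q, U, V) for the Stokes vector of each order, the statement above amounts to the incoherent superposition

```latex
\[
\langle S_{\mathrm{total}} \rangle
  = \sum_{n\ge 1} \langle S^{(n)} \rangle
  = \langle S^{(1)} \rangle + \langle S^{(2)} \rangle + \cdots,
\]
```

with no cross terms between different scattering orders.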

The second- and third-order intensity terms are computed explicitly, and the method used suggests a general approach for computing any order. It is found that, in general, the first-order scattering intensity pattern (or phase function) peaks in the forward direction Θ = 0. The second order tends to smooth out the pattern, giving a maximum in the Θ = π/2 direction and minima in the Θ = 0 and Θ = π directions. This ceases to be true if ka (where k = 2π/λ) becomes large (> 20); for large ka the forward direction is further enhanced. Similar features are expected from the higher orders, even though the critical value of ka may increase with the order.

The first-order polarization of the scattered wave is determined, and the ensemble average of the Stokes parameters of the scattered wave is computed explicitly for the second order; a similar method can be applied to any order. It is found that the polarization of the scattered wave depends on the polarization of the incident wave. If the latter is elliptically polarized, then the first-order scattered wave is elliptically polarized, but in the Θ = π/2 direction it is linearly polarized. If the incident wave is circularly polarized, the first-order scattered wave is elliptically polarized except in the directions Θ = π/2 (linearly polarized) and Θ = 0, π (circularly polarized). The handedness of the Θ = 0 wave is the same as that of the incident wave, whereas the handedness of the Θ = π wave is opposite. If the incident wave is linearly polarized, the first-order scattered wave is also linearly polarized. The second order makes the total scattered wave elliptically polarized for any Θ, no matter what the incident wave is; however, the handedness of the total scattered wave is not altered by the second order. Higher orders have effects similar to those of the second order.

If the medium is lossy, the general approach employed for the lossless case is still valid; only the algebra increases in complexity. It is found that the results of the lossless case are insensitive to first order in kimD, where kim is the imaginary part of the wave vector k and D is a characteristic linear dimension of the region occupied by the particles. Thus, for moderately extended regions and small losses, (kimD)² ≪ 1 and the lossy character of the medium does not alter the results of the lossless case. In general, the presence of losses tends to reduce the forward scattering.

Relevance:

30.00%

Publisher:

Abstract:

This work presents an a priori estimate for the upper bound of the temperature distribution in a steady-state problem in a body with a temperature-dependent thermal conductivity. The discussion assumes that the boundary conditions are linear (Newton's law of cooling) and that the thermal conductivity is piecewise constant (when regarded as a function of temperature). These estimates are a powerful tool that can remove the need for an expensive numerical simulation of a nonlinear heat transfer problem whenever it suffices to know the highest temperature value. In such cases, the methodology proposed in this work is more effective than the usual approximations that assume both the thermal conductivity and the heat sources to be constant.
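
In generic notation (not necessarily the paper's), the underlying problem is steady conduction with temperature-dependent conductivity and a Newton-cooling boundary condition,

```latex
\[
\nabla\cdot\bigl(k(T)\,\nabla T\bigr) + \dot{q} = 0 \ \ \text{in } \Omega,
\qquad
-k(T)\,\frac{\partial T}{\partial n} = h\,(T - T_{\infty}) \ \ \text{on } \partial\Omega,
\]
```

with k(T) piecewise constant in T; the a priori estimate bounds the maximum of T over Ω without solving this nonlinear problem numerically.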

Relevance:

30.00%

Publisher:

Abstract:

An analytical solution is presented for the vertical consolidation of a cylindrical annulus of clay with horizontal drainage to concentric internal and external drainage boundaries. Numerical results are given for various ratios of internal to external radius, and it is shown that the solutions for conventional one-dimensional consolidation and for consolidation of a cylindrical block of clay with drainage only to the outer cylindrical boundary form limiting cases of the analysis presented here. An application of the solution to the estimation of the horizontal permeability of clay is briefly described.
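
In standard notation (generic, not necessarily the paper's), the excess pore pressure u(r, t) for purely radial drainage satisfies

```latex
\[
\frac{\partial u}{\partial t}
  = c_{h}\!\left(\frac{\partial^{2} u}{\partial r^{2}}
  + \frac{1}{r}\,\frac{\partial u}{\partial r}\right),
\qquad
u = 0 \ \text{at } r = r_{i} \ \text{and } r = r_{o},
\]
```

where c_h is the horizontal coefficient of consolidation and r_i, r_o are the internal and external drainage radii.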

Relevance:

30.00%

Publisher:

Abstract:

When two rough surfaces are loaded together, it is well known that the area of true contact is very much smaller than the geometric area and that, consequently, local contact pressures are very much greater than the nominal value. If the asperities on each surface can be thought of as possessing smooth summits, and each of the solids is elastically isotropic, then the pressure distribution will consist of a series of small, but severe, Hertzian patches. However, if one or both of the surfaces is protected by a boundary layer, then the number and dimensions of these patches, and the form of the pressure distribution within them, will be modified. Recent experimental evidence from studies using both Atomic Force Microscopy and micro-tribometry suggests that boundary films produced by the action of commercial anti-wear additives, such as ZDTP, exhibit mechanical properties that are affected by the local pressure. These changes bring about further modifications to local conditions. These effects have been explored in a numerical model of rough surface contact, and the implications for the mechanisms of surface distress and wear are discussed. © 2000 Elsevier Science B.V. All rights reserved.
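
For background, the smooth elastic (Hertzian) result for a single spherical summit of radius R pressed against a flat by load P gives

```latex
\[
a = \left(\frac{3PR}{4E^{*}}\right)^{1/3},
\qquad
p_{0} = \frac{3P}{2\pi a^{2}},
\qquad
\frac{1}{E^{*}} = \frac{1-\nu_{1}^{2}}{E_{1}} + \frac{1-\nu_{2}^{2}}{E_{2}},
\]
```

so each asperity contact is small but carries a peak pressure p0 far above the nominal value; a compliant, pressure-sensitive boundary film alters both a and p0, which is the effect the numerical model explores.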

Relevance:

30.00%

Publisher:

Abstract:

The complex, fragmented and diverse aspects of a sustainable development perspective are translated into an eight-point framework that defines a problem boundary larger than that traditionally adopted by civil engineers. This leads to practical questions intended to inform engineers who ask 'am I being sustainable?' during project implementation. The value of the questions is tested against a case history of a wastewater treatment project. This demonstrates the relevance of the questions to successive project delivery phases of defining the problem, choosing a solution and implementing that solution through design, construction and operation. The case history highlights that answers to several of the additional questions raised by considering this wider problem space are currently buried within government and clients' policies, regulations and standard practice; these answers may not be accessible to the professional engineer.

Relevance:

30.00%

Publisher:

Abstract:

Although partially observable Markov decision processes (POMDPs) have shown great promise as a framework for dialog management in spoken dialog systems, important scalability issues remain. This paper tackles the problem of scaling slot-filling POMDP-based dialog managers to many slots with a novel technique called composite summary point-based value iteration (CSPBVI). CSPBVI creates a "local" POMDP policy for each slot; at runtime, each slot nominates an action and a heuristic chooses which action to take. Experiments in dialog simulation show that CSPBVI successfully scales POMDP-based dialog managers while preserving both the performance gains over baseline techniques and the robustness to errors in user model estimation. Copyright © 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.
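
A schematic of the runtime composition described above, with all names and the tie-breaking heuristic being illustrative placeholders rather than the CSPBVI algorithm itself: each slot maintains its own local belief and local policy, nominates an action, and a simple heuristic (here, serve the least-certain slot first) picks which nomination to execute.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Slot:
    """One slot of a slot-filling dialog manager (illustrative structure only)."""
    name: str
    belief: Dict[str, float]                    # local belief over this slot's values
    policy: Callable[[Dict[str, float]], str]   # local policy: belief -> abstract action

    def nominate(self) -> str:
        """Action the slot's local policy would take for its current belief."""
        return self.policy(self.belief)

    def confidence(self) -> float:
        """Probability of the most likely value; used by the composition heuristic."""
        return max(self.belief.values())

def local_policy(belief: Dict[str, float]) -> str:
    """A trivial stand-in for a point-based local POMDP policy."""
    return "ask" if max(belief.values()) < 0.8 else "confirm"

def choose_action(slots: List[Slot]) -> str:
    """Composition heuristic (placeholder): act on the least-certain slot first."""
    slot = min(slots, key=lambda s: s.confidence())
    return f"{slot.nominate()}-{slot.name}"

if __name__ == "__main__":
    slots = [
        Slot("from-city", {"london": 0.90, "leeds": 0.10}, local_policy),
        Slot("to-city", {"paris": 0.40, "berlin": 0.35, "rome": 0.25}, local_policy),
    ]
    print(choose_action(slots))   # -> "ask-to-city": the less certain slot is served first
```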

Relevance:

30.00%

Publisher:

Abstract:

There is increasing adoption of computer-based tools to support the product development process. Tools include computer-aided design, computer-aided manufacture, systems engineering and product data management systems. The fact that companies choose to invest in tools might be regarded as evidence that tools, in aggregate, are perceived to possess business value through their application to engineering activities. Yet the ways in which value accrues from tool technology are poorly understood.

This report records the proceedings of an international workshop during which some novel approaches to improving our understanding of this problem of tool valuation were presented and debated. The value of methods and processes was also discussed. The workshop brought together British, Dutch, German and Italian researchers. The presenters included speakers from industry and academia (the University of Cambridge, the University of Magdeburg and the Politecnico di Torino).

The work presented showed great variety. Research methods included case studies, questionnaires, statistical analysis, semi-structured interviews, deduction, inductive reasoning, and the recording of anecdotes and analogies. The presentations drew on financial investment theory, the industrial experience of workshop participants, discussions with students developing tools, modern economic theories and speculation on the effects of company capabilities.

Relevance:

30.00%

Publisher:

Abstract:

In this contribution we discuss a stochastic framework for air traffic conflict resolution. The conflict resolution task is posed as the problem of optimizing an expected-value criterion, and the optimization is carried out by Markov chain Monte Carlo (MCMC) simulation. A numerical example illustrates the proposed strategy. Copyright © 2005 IFAC.
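
A toy sketch of the general idea of optimizing an expected-value criterion over maneuvers by Monte Carlo estimation combined with a Metropolis-style search (purely illustrative: the cost model, dynamics and sampler below are placeholders, not the paper's formulation).

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_cost(heading_change, n_samples=2000, sep_min=5.0,
                  horizon=20.0, wind_sd=0.5, weight=0.05):
    """Monte Carlo estimate of an expected-value criterion for one maneuver.

    Toy model: two aircraft fly straight lines in the plane; aircraft A applies
    `heading_change` (radians); positions are perturbed by Gaussian "wind" noise.
    Cost = probability of losing separation plus a small penalty on maneuver size.
    """
    t = np.linspace(0.0, horizon, 50)
    ax, ay = np.cos(heading_change) * t, np.sin(heading_change) * t   # aircraft A
    bx, by = 20.0 - t, np.full_like(t, 2.0)                           # aircraft B

    noise = rng.normal(0.0, wind_sd, size=(n_samples, 4, t.size))
    dx = (ax + noise[:, 0]) - (bx + noise[:, 1])
    dy = (ay + noise[:, 2]) - (by + noise[:, 3])
    p_conflict = np.mean(np.hypot(dx, dy).min(axis=1) < sep_min)
    return p_conflict + weight * abs(heading_change)

def mcmc_search(n_iter=200, step=0.1, temperature=0.05):
    """Metropolis random-walk search over the maneuver, favouring low expected cost."""
    theta, cost = 0.0, expected_cost(0.0)
    best = (theta, cost)
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step)
        prop_cost = expected_cost(prop)
        if rng.random() < np.exp(-(prop_cost - cost) / temperature):
            theta, cost = prop, prop_cost
            if cost < best[1]:
                best = (theta, cost)
    return best

if __name__ == "__main__":
    theta, cost = mcmc_search()
    print(f"chosen heading change: {theta:+.2f} rad, estimated expected cost: {cost:.3f}")
```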

Relevance:

30.00%

Publisher:

Abstract:

Background: There is increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, current use is limited, and answers to questions such as which methods to use, and when, remain somewhat underdeveloped. Aim: The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them, or better commission relevant modelling and simulation work. Methods: This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified and characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and by the four input resources required (time, money, knowledge and data). Results: The characterisation is presented in matrix form to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce, let alone money and time. Conclusions: A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision-making processes. In particular, it addresses which method is most appropriate to which specific health services management problem, what the user might expect to obtain from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on method comparison and selection. © 2011 Jun et al.