934 results for Millionaire Problem, Efficiency, Verifiability, Zero Test, Batch Equation
Abstract:
In this paper we consider the 2D Dirichlet boundary value problem for Laplace's equation in a non-locally perturbed half-plane, with data in the space of bounded and continuous functions. We show uniqueness of solution, using standard Phragmén-Lindelöf arguments. The main result is to propose a boundary integral equation formulation, to prove equivalence with the boundary value problem, and to show that the integral equation is well posed by applying a recent partial generalisation of the Fredholm alternative in Arens et al. [J. Int. Equ. Appl. 15 (2003), pp. 1-35]. This then leads to an existence proof for the boundary value problem. Keywords: boundary integral equation method, water waves, Laplace's equation.
Abstract:
There exist two central measures of turbulent mixing in stratified fluids that are both caused by molecular diffusion: 1) the dissipation rate D(APE) of available potential energy (APE); 2) the turbulent rate of change W_{r,turbulent} of background gravitational potential energy GPE_r. So far, these two quantities have often been regarded as the same energy conversion, namely the irreversible conversion of APE into GPE_r, owing to the well-known exact equality D(APE) = W_{r,turbulent} for a Boussinesq fluid with a linear equation of state. Recently, however, Tailleux (2009) pointed out that the above equality no longer holds for a thermally stratified compressible fluid, with the ratio ξ = W_{r,turbulent}/D(APE) being generally lower than unity and sometimes even negative for water or seawater, and argued that D(APE) and W_{r,turbulent} actually represent two distinct types of energy conversion: respectively, the dissipation of APE into one particular subcomponent of internal energy called the "dead" internal energy IE_0, and the conversion between GPE_r and a different subcomponent of internal energy called "exergy" IE_exergy. In this paper, the behaviour of the ratio ξ is examined for different stratifications all having the same vertical profile of buoyancy frequency N, but different vertical profiles of the parameter Υ = αP/(ρC_p), where α is the thermal expansion coefficient, P the hydrostatic pressure, ρ the density, and C_p the specific heat capacity at constant pressure, the equation of state being that of seawater for different particular constant values of salinity. It is found that ξ and W_{r,turbulent} depend critically on the sign and magnitude of dΥ/dz, in contrast with D(APE), which appears largely unaffected by the latter. These results have important consequences for how mixing efficiency should be defined and measured in practice, which are discussed.
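For reference, the two diagnostic quantities compared in this abstract, written in display form:

\[
\xi = \frac{W_{r,\mathrm{turbulent}}}{D(\mathrm{APE})}, \qquad
\Upsilon = \frac{\alpha P}{\rho C_p}.
\]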
Abstract:
In this paper we consider the scattering of a plane acoustic or electromagnetic wave by a one-dimensional, periodic rough surface. We restrict the discussion to the case when the boundary is sound soft in the acoustic case, perfectly reflecting with TE polarization in the EM case, so that the total field vanishes on the boundary. We propose a uniquely solvable first kind integral equation formulation of the problem, which amounts to a requirement that the normal derivative of the Green's representation formula for the total field vanish on a horizontal line below the scattering surface. We then discuss the numerical solution by Galerkin's method of this (ill-posed) integral equation. We point out that, with two particular choices of the trial and test spaces, we recover the so-called SC (spectral-coordinate) and SS (spectral-spectral) numerical schemes of DeSanto et al., Waves Random Media 8 (1998), pp. 315-414. We next propose a new Galerkin scheme, a modification of the SS method that we term the SS* method, which is an instance of the well-known dual least squares Galerkin method. We show that the SS* method is always well-defined and is optimally convergent as the size of the approximation space increases. Moreover, we make a connection with the classical least squares method, in which the coefficients in the Rayleigh expansion of the solution are determined by enforcing the boundary condition in a least squares sense, pointing out that the linear system to be solved in the SS* method is identical to that in the least squares method. Using this connection we show that (reflecting the ill-posed nature of the integral equation solved) the condition number of the linear system in the SS* and least squares methods approaches infinity as the approximation space increases in size. We also provide theoretical error bounds on the condition number and on the errors induced in the numerical solution computed as a result of ill-conditioning. Numerical results confirm the convergence of the SS* method and illustrate the ill-conditioning that arises.
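A minimal numerical sketch of the classical least squares approach mentioned above, in which the Rayleigh expansion coefficients are determined by enforcing the sound-soft boundary condition in a least squares sense. This is not the SS* scheme itself; the wavenumber, incidence angle, truncation order, and surface profile below are illustrative assumptions.

```python
import numpy as np

k, theta, L = 5.0, np.pi / 6, 2 * np.pi              # wavenumber, incidence angle, period (illustrative)
alpha, beta = k * np.sin(theta), k * np.cos(theta)
f = lambda x: 0.2 * np.sin(x)                         # illustrative surface profile y = f(x)

N = 10                                                # truncation order of the Rayleigh expansion
n = np.arange(-N, N + 1)
alpha_n = alpha + 2 * np.pi * n / L
beta_n = np.sqrt(k**2 - alpha_n**2 + 0j)              # branch with Im(beta_n) >= 0

x = np.linspace(0.0, L, 8 * len(n), endpoint=False)   # collocation points along one period
y = f(x)
u_inc = np.exp(1j * (alpha * x - beta * y))           # downward-travelling incident plane wave
# Each column is one Rayleigh mode exp(i(alpha_n x + beta_n y)) sampled on the surface
A = np.exp(1j * (np.outer(x, alpha_n) + np.outer(y, beta_n)))
coeffs, *_ = np.linalg.lstsq(A, -u_inc, rcond=None)   # enforce u_inc + u_scat = 0 in a least squares sense
print(np.abs(coeffs))                                 # moduli of the Rayleigh coefficients
```

Consistent with the abstract, the matrix A in such a least squares fit becomes increasingly ill-conditioned as the truncation order N grows.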
Abstract:
Estimation of population size with a missing zero-class is an important problem encountered in epidemiological assessment studies. Fitting a Poisson model to the observed data by the method of maximum likelihood and estimating the population size based on this fit is an approach that has been widely used for this purpose. In practice, however, the Poisson assumption is seldom satisfied. Zelterman (1988) has proposed a robust estimator for unclustered data that works well in a wide class of distributions applicable for count data. In the work presented here, we extend this estimator to clustered data. The estimator requires fitting a zero-truncated homogeneous Poisson model by maximum likelihood and then using a Horvitz-Thompson estimator of population size. This was found to work well when the data follow the hypothesized homogeneous Poisson model. However, when the true distribution deviates from the hypothesized model, the population size was found to be underestimated. In the search for a more robust estimator, we focused on three models that use all clusters with exactly one case, those with exactly two cases, and those with exactly three cases to estimate the probability of the zero-class, and thereby use data collected on all the clusters in the Horvitz-Thompson estimator of population size. The loss in efficiency associated with the gain in robustness was examined in a simulation study. As a trade-off between gain in robustness and loss in efficiency, the model that uses data collected on clusters with at most three cases to estimate the probability of the zero-class was found to be preferred in general. In applications, we recommend obtaining estimates from all three models and making a choice considering the estimates from the three models, robustness, and the loss in efficiency.
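A minimal sketch of the basic step described above, fitting a zero-truncated Poisson by maximum likelihood and plugging the fitted zero-class probability into a Horvitz-Thompson estimate of population size, in the simple unclustered setting. The function names and simulated data are illustrative; the clustered extensions and the robust one-, two-, and three-case models studied in the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

def fit_zero_truncated_poisson(counts):
    """MLE of lambda for a zero-truncated Poisson from the observed (nonzero) counts."""
    xbar = np.asarray(counts, dtype=float).mean()
    # MLE condition for the truncated mean: lambda / (1 - exp(-lambda)) = xbar
    return brentq(lambda lam: lam / (1.0 - np.exp(-lam)) - xbar, 1e-8, 1e3)

def horvitz_thompson_size(counts):
    """Population-size estimate N_hat = n / (1 - P(zero)) under the fitted model."""
    lam = fit_zero_truncated_poisson(counts)
    p_observed = 1.0 - np.exp(-lam)          # probability a unit is observed (count > 0)
    return len(counts) / p_observed

# Illustrative example: 200 true units, only those with a nonzero count are observed
rng = np.random.default_rng(0)
sample = rng.poisson(1.3, size=200)
observed = sample[sample > 0]
print(horvitz_thompson_size(observed))       # should be close to 200
```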
Abstract:
We describe, and make publicly available, two problem instance generators for a multiobjective version of the well-known quadratic assignment problem (QAP). The generators allow a number of instance parameters to be set, including those controlling epistasis and inter-objective correlations. Based on these generators, several initial test suites are provided and described. For each test instance we measure some global properties and, for the smallest ones, make some initial observations of the Pareto optimal sets/fronts. Our purpose in providing these tools is to facilitate the ongoing study of problem structure in multiobjective (combinatorial) optimization, and its effects on search landscape and algorithm performance.
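As a small illustration of what such instances encode (this is not the authors' generator, which additionally controls epistasis and inter-objective correlations), a sketch of how one candidate assignment is evaluated against the several flow matrices of a multiobjective QAP instance; the random instance below is purely illustrative.

```python
import numpy as np

def mqap_objectives(perm, dist, flows):
    """Evaluate one assignment of a multiobjective QAP instance.

    perm  : permutation, perm[i] = location assigned to facility i
    dist  : (n, n) distances between locations
    flows : list of (n, n) flow matrices, one per objective
    """
    d_perm = dist[np.ix_(perm, perm)]                        # d[perm[i], perm[j]]
    return np.array([(flow * d_perm).sum() for flow in flows])

# Purely illustrative random instance with two objectives
rng = np.random.default_rng(1)
n_fac = 8
dist = rng.integers(1, 10, (n_fac, n_fac))
np.fill_diagonal(dist, 0)
flows = [rng.integers(0, 5, (n_fac, n_fac)) for _ in range(2)]
perm = rng.permutation(n_fac)
print(mqap_objectives(perm, dist, flows))                    # one cost per objective
```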
Abstract:
We study boundary value problems posed in a semistrip for the elliptic sine-Gordon equation, which is the paradigm of an elliptic integrable PDE in two variables. We use the method introduced by one of the authors, which provides a substantial generalization of the inverse scattering transform and can be used for the analysis of boundary- as opposed to initial-value problems. We first express the solution in terms of a 2 × 2 matrix Riemann-Hilbert problem whose "jump matrix" depends on both the Dirichlet and the Neumann boundary values. For a well-posed problem one of these boundary values is an unknown function. This unknown function is characterised in terms of the so-called global relation, but in general this characterisation is nonlinear. We then concentrate on the case in which the prescribed boundary conditions are zero along the unbounded sides of the semistrip and constant along the bounded side. This corresponds to a case of the so-called linearisable boundary conditions; however, a major difficulty for this problem is the existence of non-integrable singularities of the function q_y at the two corners of the semistrip, which are generated by the discontinuities of the boundary condition at these corners. Motivated by the recent solution of the analogous problem for the modified Helmholtz equation, we introduce an appropriate regularisation which overcomes this difficulty. Furthermore, by mapping the basic Riemann-Hilbert problem to an equivalent modified Riemann-Hilbert problem, we show that the solution can be expressed in terms of a 2 × 2 matrix Riemann-Hilbert problem whose jump matrix depends explicitly on the width L of the semistrip, on the constant value d of the solution along the bounded side, and on the residues at the given poles of a certain spectral function denoted by h. The determination of the function h remains open.
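For reference, the elliptic sine-Gordon equation referred to above, in the two independent variables x and y:

\[
q_{xx} + q_{yy} = \sin q .
\]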
Abstract:
The UK Government is committed to all new homes being zero-carbon from 2016. The use of low and zero carbon (LZC) technologies is recognised by housing developers as being a key part of the solution to deliver against this zero-carbon target. The paper takes as its starting point that the selection of new technologies by firms is not a phenomenon which takes place within a rigid sphere of technical rationality (for example, Rip and Kemp, 1998). Rather, technology forms and diffusion trajectories are driven and shaped by myriad socio-technical structures, interests and logics. A literature review is offered to contribute to a more critical and systemic foundation for understanding the socio-technical features of the selection of LZC technologies in new housing. The problem is investigated through a multidisciplinary lens consisting of two perspectives: technological and institutional. The synthesis of the perspectives crystallises the need to understand that the selection of LZC technologies by housing developers is not solely dependent on technical or economic efficiency, but on the emergent ‘fit’ between the intrinsic properties of the technologies, institutional logics and the interests and beliefs of various actors in the housing development process.
Abstract:
Our differences are three. The first arises from the belief that "... a nonzero value for the optimally chosen policy instrument implies that the instrument is efficient for redistribution" (Alston, Smith, and Vercammen, p. 543, paragraph 3). Consider the two equations (1) τ* = f(β) and (2) π* = -f(β) + h(α, β), representing the solution to the problem of maximizing weighted Marshallian surplus using, simultaneously, a per-unit border intervention, τ, and a per-unit domestic intervention, π. In the solution, parameter α denotes the weight applied to producer surplus; parameter β denotes the weight applied to government revenues; consumer surplus is implicitly weighted one; and the country in question is small in the sense that it is unable to affect world price by any of its domestic adjustments (see the Appendix). Details of the forms of the functions f(β) and h(α, β) are easily derived, but what matters in the context of Alston, Smith, and Vercammen's Comment is: redistributive preferences that favor producers are consistent with higher values of α, and whereas the optimal domestic intervention, π*, has both "alpha and beta effects," the optimal border intervention, τ*, has only a "beta effect"; it does not have a redistributional role. Garth Holloway is reader in agricultural economics and statistics, Department of Agricultural and Food Economics, School of Agriculture, Policy, and Development, University of Reading. The author is very grateful to Xavier Irz, Bhavani Shankar, Chittur Srinivasan, Colin Thirtle, and Richard Tiffin for their comments and their wisdom; and to Mario Mazzochi, Marinos Tsigas, and Cal Turvey for their scholarship, including help in tracking down a fairly complete collection of the papers that cite Alston and Hurd. They are not responsible for any errors or omissions. Note, in equation (1), that the border intervention is positive whenever a distortion exists, because δ > 0 implies β = 1 + δ > 1 and, thus, f(β) > 0 (see Appendix). Using Alston, Smith, and Vercammen's definition, the instrument is now "efficient," and therefore has a redistributive role. But now suppose that the distortion is removed, so that β = 1 + δ = 1, δ = 0, and consequently the border intervention is zero. According to Alston, Smith, and Vercammen, the instrument is now "inefficient" and has no redistributive role. The reader will note that this thought experiment has said nothing about supporting farm incomes, and so has nothing whatsoever to do with efficient redistribution. Of course, the definition is false. It follows that a domestic distortion arising from the "excess-burden argument" (β = 1 + δ, δ > 0) does not make an export subsidy "efficient." The export subsidy, having only a "beta effect," does not have a redistributional role. The second disagreement emerges from the comment that Holloway "... uses an idiosyncratic definition of the relevant objective function of the government" (Alston, Smith, and Vercammen, p. 543, paragraph 2). The objective function that generates equations (1) and (2) (see the Appendix) is the same as the objective function used by Gardner (1995) when he first questioned Alston, Carter, and Smith's claim that a "domestic distortion can make a border intervention efficient in transferring surplus from consumers and taxpayers to farmers."
The objective function used by Gardner (1995) is the same objective function used in the contributions that precede it and thus defines the literature on the debate about border- versus domestic intervention (Streeten; Yeh; Paarlberg 1984, 1985; Orden; Gardner 1985). The objective function in the latter literature is the same as the one implied in another literature that originates from Wallace and includes most notably Gardner (1983), but also Alston and Hurd. The objective function in Holloway is this same objective function; it is, of course, Marshallian surplus. The third disagreement concerns scholarship. The Comment does not seem to be cognizant of several important papers, especially Bhagwati and Ramaswami, and Bhagwati, both of which precede Corden (1974, 1997); but also Lipsey and Lancaster, and Moschini and Sckokai; one important aspect of Alston and Hurd; and one extremely important result in Holloway. This oversight has some unfortunate repercussions. First, it misdirects to the wrong origins of intellectual property. Second, it misleads about the appropriateness of some welfare calculations. Third, it prevents Alston, Smith, and Vercammen from linking a finding in Holloway (pp. 242-43) with an old theorem (Lipsey and Lancaster) that settles the controversy (Alston, Carter, and Smith 1993, 1995; Gardner 1995; and, presently, Alston, Smith, and Vercammen) about the efficiency of border intervention in the presence of domestic distortions.
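In display form, the weighted-surplus problem and the solution structure described above, with the notation reconstructed from the text (the functional forms f and h are left abstract, as in the original):

\[
\max_{\tau,\,\pi}\;\; \alpha\,PS(\tau,\pi) \;+\; CS(\tau,\pi) \;+\; \beta\,GR(\tau,\pi), \qquad \beta = 1 + \delta,
\]
\[
(1)\quad \tau^{*} = f(\beta), \qquad\qquad (2)\quad \pi^{*} = -f(\beta) + h(\alpha,\beta).
\]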
Abstract:
The phase shift full bridge (PSFB) converter allows high-efficiency power conversion at high frequencies through zero voltage switching (ZVS); the parasitic drain-to-source capacitance of the MOSFET is discharged by a resonant inductance before the switch is gated, resulting in near-zero turn-on switching losses. Typically, an extra inductance is added to the leakage inductance of the transformer to form the resonant inductance necessary to charge and discharge the parasitic capacitances of the PSFB converter. However, many PSFB models do not consider the effects of the magnetizing inductance or dead-time in selecting the resonant inductance required to achieve ZVS. The choice of resonant inductance is crucial to the ZVS operation of the PSFB converter: an incorrectly sized resonant inductance will either fail to achieve ZVS or limit the load regulation ability of the converter. This paper presents a unique and accurate equation for calculating the resonant inductance required to achieve ZVS over a wide load range, incorporating the effects of the magnetizing inductance and dead-time. The derived equations are validated against PSPICE simulations of a PSFB converter and extensive hardware experimentation.
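The paper's refined equation is not reproduced in the abstract. As a point of reference, a sketch of the conventional simplified energy-balance criterion that such refinements build on (it ignores exactly the magnetizing inductance and dead-time effects discussed above); all component values are illustrative assumptions.

```python
# Conventional simplified ZVS energy-balance criterion for a PSFB leg:
# the energy stored in the resonant inductance at the switching instant must
# exceed the energy needed to (dis)charge the parasitic capacitances.
V_in  = 400.0     # bulk input voltage [V] (illustrative)
C_oss = 150e-12   # MOSFET output capacitance per device [F] (illustrative)
C_tr  = 100e-12   # transformer winding capacitance [F] (illustrative)
I_pri = 2.0       # primary current at the lightest load requiring ZVS [A] (illustrative)

C_eq = 2 * C_oss + C_tr                   # equivalent capacitance seen by the resonant current
L_r_min = C_eq * V_in**2 / I_pri**2       # from 0.5*L_r*I_pri^2 >= 0.5*C_eq*V_in^2
print(f"L_r >= {L_r_min * 1e6:.1f} uH")   # minimum resonant inductance under this criterion
```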
Abstract:
We prove unique existence of solution for the impedance (or third) boundary value problem for the Helmholtz equation in a half-plane with arbitrary L∞ boundary data. This problem is of interest as a model of outdoor sound propagation over inhomogeneous flat terrain and as a model of rough surface scattering. To formulate the problem and prove uniqueness of solution we introduce a novel radiation condition, a generalization of that used in plane wave scattering by one-dimensional diffraction gratings. To prove existence of solution and a limiting absorption principle we first reformulate the problem as an equivalent second kind boundary integral equation to which we apply a form of Fredholm alternative, utilizing recent results on the solvability of integral equations on the real line in [5].
Abstract:
We consider the Dirichlet boundary value problem for the Helmholtz equation in a non-locally perturbed half-plane, this problem arising in electromagnetic scattering by one-dimensional rough, perfectly conducting surfaces. We propose a new boundary integral equation formulation for this problem, utilizing the Green's function for an impedance half-plane in place of the standard fundamental solution. We show, at least for surfaces not differing too much from the flat boundary, that the integral equation is uniquely solvable in the space of bounded and continuous functions, and hence that, for a variety of incident fields including an incident plane wave, the boundary value problem for the scattered field has a unique solution satisfying the limiting absorption principle. Finally, a result of continuous dependence of the solution on the boundary shape is obtained.
Abstract:
The usual variational (or weak) formulations of the Helmholtz equation are sign-indefinite in the sense that the bilinear forms cannot be bounded below by a positive multiple of the appropriate norm squared. This is often for a good reason, since in bounded domains under certain boundary conditions the solution of the Helmholtz equation is not unique at wavenumbers that correspond to eigenvalues of the Laplacian, and thus the variational problem cannot be sign-definite. However, even in cases where the solution is unique for all wavenumbers, the standard variational formulations of the Helmholtz equation are still indefinite when the wavenumber is large. This indefiniteness has implications for both the analysis and the practical implementation of finite element methods. In this paper we introduce new sign-definite (also called coercive or elliptic) formulations of the Helmholtz equation posed in either the interior of a star-shaped domain with impedance boundary conditions, or the exterior of a star-shaped domain with Dirichlet boundary conditions. Like the standard variational formulations, these new formulations arise just by multiplying the Helmholtz equation by particular test functions and integrating by parts.
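For contrast, a sketch of the standard (sign-indefinite) variational formulation for the interior impedance problem mentioned above, under one common sign convention (Δu + k²u = -f in Ω, ∂u/∂n - iηu = g on Γ); multiplying the equation by a test function v and integrating by parts gives

\[
\int_{\Omega}\bigl(\nabla u\cdot\overline{\nabla v} - k^{2}\,u\,\overline{v}\bigr)\,dx
\;-\; i\eta\int_{\Gamma} u\,\overline{v}\,ds
\;=\;
\int_{\Omega} f\,\overline{v}\,dx \;+\; \int_{\Gamma} g\,\overline{v}\,ds
\qquad \text{for all } v\in H^{1}(\Omega),
\]

whose sesquilinear form on the left is not bounded below by a positive multiple of the norm squared once k is large; this is the indefiniteness the new formulations are designed to remove.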
Abstract:
The purpose of this paper is to investigate several analytical methods of solving the first passage (FP) problem for the Rouse model, the simplest model of a polymer chain. We show that this problem has to be treated as a multi-dimensional Kramers' problem, which presents rich and unexpected behavior. We first perform direct and forward-flux sampling (FFS) simulations, and measure the mean first-passage time $\tau(z)$ for the free end to reach a certain distance $z$ away from the origin. The results show that the mean FP time decreases when the Rouse chain is represented by more beads. Two scaling regimes of $\tau(z)$ are observed, with the transition between them varying as a function of chain length. We use these simulation results to test two theoretical approaches. One is a well-known asymptotic theory valid in the limit of zero temperature. We show that this limit corresponds to a fully extended chain in which each chain segment is stretched, which is not particularly realistic. A new theory based on the well-known Freidlin-Wentzell theory is proposed, in which the dynamics is projected onto the minimal action path. The new theory predicts both scaling regimes correctly, but fails to get the correct numerical prefactor in the first regime. Combining our theory with the FFS simulations leads us to a simple analytical expression valid for all extensions and chain lengths. One application of the polymer FP problem occurs in the context of branched polymer rheology. In this paper, we consider the arm-retraction mechanism in the tube model, which maps exactly onto the model we have solved. The results are compared to the Milner-McLeish theory without constraint release, which is found to overestimate the FP time by a factor of 10 or more.
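A minimal brute-force sketch of the first-passage problem described above: a 1D Rouse chain (beads joined by harmonic springs, one end tethered) evolved with overdamped Langevin dynamics, timing when the free end first reaches a distance z. All parameters and units are illustrative assumptions; the paper's FFS and asymptotic analyses are not reproduced here.

```python
import numpy as np

def rouse_mean_fpt(n_beads=8, z=2.0, dt=1e-3, kT=1.0, k_spring=1.0, n_runs=50, seed=0):
    """Mean first-passage time for the free end of a tethered 1D Rouse chain to reach z."""
    rng = np.random.default_rng(seed)
    noise_amp = np.sqrt(2.0 * kT * dt)       # friction coefficient set to 1
    times = []
    for _ in range(n_runs):
        x = np.zeros(n_beads + 1)            # x[0] is the tethered end, held at 0
        t = 0.0
        while x[-1] < z:
            force = np.zeros_like(x)
            # Harmonic Rouse springs between neighbouring beads
            force[1:-1] = k_spring * (x[2:] - 2.0 * x[1:-1] + x[:-2])
            force[-1] = k_spring * (x[-2] - x[-1])
            # Overdamped Langevin (Euler-Maruyama) update for the free beads
            x[1:] += force[1:] * dt + noise_amp * rng.standard_normal(n_beads)
            t += dt
        times.append(t)
    return np.mean(times)

print(rouse_mean_fpt())
```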
Abstract:
The primary objective of this research study is to determine which form of testing, the PEST algorithm or an operator-controlled condition, is more accurate and time-efficient for administration of the gaze stabilization test.