82 results for Constrained Riemann problem

at University of Queensland eSpace - Australia


Relevance: 80.00%

Abstract:

In this paper, a genetic algorithm (GA) is applied to the optimum design of reinforced concrete liquid retaining structures, which involve three discrete design variables: slab thickness, reinforcement diameter and reinforcement spacing. GA, being a search technique based on the mechanics of natural genetics, couples a Darwinian survival-of-the-fittest principle with a random yet structured information exchange amongst a population of artificial chromosomes. As a first step, a penalty-based strategy is employed to transform the constrained design problem into an unconstrained problem, which is appropriate for GA application. A numerical example is then used to demonstrate the strength and capability of the GA in this problem domain. It is shown that near-optimal solutions are obtained with extremely rapid convergence after exploration of only a minute portion of the search space. The method can be extended to more complex optimization problems in other domains.
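For orientation, a minimal sketch of the penalty device the abstract describes is given below: a large penalty proportional to constraint violation is added to the cost objective, so the constrained design problem becomes unconstrained. The cost and capacity formulas, candidate value lists and all names are illustrative placeholders, not the paper's model, and the GA operators (selection, crossover, mutation) are omitted.

```python
import random

# Candidate values for the three discrete design variables (illustrative).
THICKNESS = [150, 200, 250, 300]   # slab thickness, mm
DIAMETER = [12, 16, 20, 25]        # reinforcement bar diameter, mm
SPACING = [100, 150, 200, 250]     # reinforcement spacing, mm
PENALTY = 1e6                      # large weight on constraint violation

def violation(t, d, s):
    """Total normalised constraint violation; 0 when feasible.
    A toy strength check stands in for the real design-code checks."""
    capacity = 0.002 * t * d * d / s   # toy moment-capacity surrogate
    return max(0.0, 1.0 - capacity)    # toy normalised demand of 1.0

def fitness(chrom):
    """Penalised objective: infeasible designs are heavily penalised,
    which converts the constrained problem into an unconstrained one."""
    t, d, s = chrom
    cost = 0.1 * t + 0.05 * d * d / s  # toy material-cost surrogate
    return cost + PENALTY * violation(t, d, s)

population = [(random.choice(THICKNESS), random.choice(DIAMETER),
               random.choice(SPACING)) for _ in range(50)]
print(min(population, key=fitness))    # best design in the initial population
```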

Relevance: 80.00%

Abstract:

This research work analyses techniques for implementing a cell-centred finite-volume time-domain (ccFV-TD) computational methodology for the purpose of studying microwave heating. Various state-of-the-art spatial and temporal discretisation methods employed to solve Maxwell's equations on multidimensional structured grid networks are investigated, and the dispersive and dissipative errors inherent in those techniques examined. Both staggered and unstaggered grid approaches are considered. Upwind schemes using a Riemann solver and intensity vector splitting are studied and evaluated. Staggered and unstaggered Leapfrog and Runge-Kutta time integration methods are analysed in terms of phase and amplitude error to identify which method is the most accurate and efficient for simulating microwave heating processes. The implementation and migration of typical electromagnetic boundary conditions from staggered-in-space to cell-centred approaches is also discussed. In particular, an existing perfectly matched layer absorbing boundary methodology is adapted to formulate a new cell-centred boundary implementation for the ccFV-TD solvers. Finally, for microwave heating purposes, a comparison of analytical and numerical results for standard case studies in rectangular waveguides allows the accuracy of the developed methods to be assessed. © 2004 Elsevier Inc. All rights reserved.
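As a point of reference for the time-integration comparison, the sketch below shows a staggered leapfrog update for Maxwell's curl equations in one normalised dimension, with E and H offset by half a cell and half a time step. Grid sizes, the source, and the normalisation are illustrative and unrelated to the paper's waveguide case studies.

```python
import numpy as np

nx, nt = 200, 500
dx = 1.0
c = 1.0                    # normalised wave speed
dt = 0.5 * dx / c          # CFL-stable time step

E = np.zeros(nx)           # E sampled at integer grid points and time steps
H = np.zeros(nx - 1)       # H sampled at half grid points and half time steps

for n in range(nt):
    H += (dt / dx) * (E[1:] - E[:-1])              # advance H a half step
    E[1:-1] += (dt / dx) * (H[1:] - H[:-1])        # then advance E
    E[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
# Holding E = 0 at both ends acts as a perfect-conductor boundary; a PML,
# as adapted in the paper, would instead damp outgoing waves in a layer.
```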

Relevance: 80.00%

Abstract:

A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion and better enforcement of regularization constraints than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration. (c) 2005 Elsevier B.V. All rights reserved.
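For orientation, the constrained formulation referred to above can be written generically as follows; the notation is ours, not necessarily the paper's. With $\Phi_m$ the model-to-measurement objective and $\Phi_r$ the regularization objective,

$$\min_{\mathbf{p}}\ \Phi_r(\mathbf{p}) = \|\mathbf{S}(\mathbf{p}-\mathbf{p}_0)\|^2 \quad\text{subject to}\quad \Phi_m(\mathbf{p}) = (\mathbf{d}-\mathbf{X}\mathbf{p})^{\mathsf T}\mathbf{Q}(\mathbf{d}-\mathbf{X}\mathbf{p}) \le \Phi_m^{\ell},$$

solved in practice by minimizing the composite objective $\Phi = \Phi_m + \mu^2 \Phi_r$. As we read the abstract, the new scheme treats the relative weights within the regularization operator $\mathbf{S}$ as parameters estimated during calibration itself rather than fixed in advance.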

Relevance: 80.00%

Abstract:

Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights to be gained into the loss of system detail incurred through the calibration process. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization. (C) 2005 Elsevier Ltd. All rights reserved.
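One standard way to make the "weighted average" statement precise, assuming a linearised model matrix $\mathbf{X}$, observation weight matrix $\mathbf{Q}$ and regularization operator $\mathbf{S}$ (generic notation, not necessarily the paper's), is through the resolution matrix:

$$\hat{\mathbf{p}} = \mathbf{R}\,\mathbf{p}_{\text{true}}, \qquad \mathbf{R} = \left(\mathbf{X}^{\mathsf T}\mathbf{Q}\mathbf{X} + \mu^2\,\mathbf{S}^{\mathsf T}\mathbf{S}\right)^{-1}\mathbf{X}^{\mathsf T}\mathbf{Q}\mathbf{X}.$$

Each row of $\mathbf{R}$ holds the averaging weights through which the true parameter field maps to the estimate at one point; $\mathbf{R}$ equals the identity only for a fully determined, unregularized problem, so regularized inversion necessarily blurs the estimated field.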

Relevance: 30.00%

Abstract:

A chance constrained programming model is developed to assist Queensland barley growers in making varietal and agronomic decisions in the face of changing product demands and volatile production conditions. Although considered unsuitable for, or overlooked in, many risk programming applications, the chance constrained programming approach nonetheless aptly captures the single-stage decision problem faced by barley growers: whether to plant lower-yielding but potentially higher-priced malting varieties, given a particular expectation of meeting malting grade standards. Different expectations greatly affect the optimal mix of malting and feed barley activities. The analysis highlights the suitability of chance constrained programming to this specific class of farm decision problem.
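For readers unfamiliar with the device, a single chance constraint with a normally distributed coefficient vector $\tilde{\mathbf{a}} \sim N(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ has the well-known deterministic equivalent

$$\Pr\!\left(\tilde{\mathbf{a}}^{\mathsf T}\mathbf{x} \le b\right) \ge \alpha \iff \boldsymbol{\mu}^{\mathsf T}\mathbf{x} + z_\alpha\sqrt{\mathbf{x}^{\mathsf T}\boldsymbol{\Sigma}\,\mathbf{x}} \le b,$$

where $z_\alpha$ is the standard normal $\alpha$-quantile. In the barley setting such a constraint would encode, for example, meeting malting grade standards with a grower-chosen probability $\alpha$; the model's actual constraint set is not reproduced in this abstract.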

Relevance: 30.00%

Abstract:

A major problem in de novo design of enzyme inhibitors is the unpredictability of the induced fit, with the shape of both ligand and enzyme changing cooperatively and unpredictably in response to subtle structural changes within a ligand. We have investigated the possibility of dampening the induced fit by using a constrained template as a replacement for adjoining segments of a ligand. The template preorganizes the ligand structure, thereby organizing the local enzyme environment. To test this approach, we used templates consisting of constrained cyclic tripeptides, formed through side chain to main chain linkages, as structural mimics of the protease-bound extended beta-strand conformation of three adjoining amino acid residues at the N- or C-terminal sides of the scissile bond of substrates. The macrocyclic templates were derivatized into a range of 30 structurally diverse molecules via focused combinatorial variation of nonpeptidic appendages incorporating a hydroxyethylamine transition-state isostere. Most compounds in the library were potent inhibitors of the test protease (HIV-1 protease). Comparison of crystal structures for five protease-inhibitor complexes containing an N-terminal macrocycle and three protease-inhibitor complexes containing a C-terminal macrocycle establishes that the macrocycles fix their surrounding enzyme environment, thereby permitting independent variation of acyclic inhibitor components with only local disturbances to the protease. In this way, the location in the protease of various acyclic fragments on either side of the macrocyclic template can be accurately predicted. This type of templating strategy minimizes the problem of induced fit, reducing unpredictable cooperative effects in one inhibitor region caused by changes to adjacent enzyme-inhibitor interactions. This idea might be exploited in template-based approaches to inhibitors of other proteases, where a beta-strand mimetic is also required for recognition, and also to other protein-binding ligands where different templates may be more appropriate.

Relevance: 30.00%

Abstract:

We present existence results for a Neumann problem involving critical Sobolev nonlinearities both on the right-hand side of the equation and in the boundary condition. Positive solutions are obtained through constrained minimization on the Nehari manifold. Our approach is based on the concentration-compactness principle of P. L. Lions and M. Struwe.
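For reference, the Nehari manifold associated with the energy functional $J$ of such a problem is (standard definition, generic notation)

$$\mathcal{N} = \left\{ u \in H^1(\Omega)\setminus\{0\} \;:\; \langle J'(u), u\rangle = 0 \right\},$$

a set that contains every nontrivial critical point of $J$; under standard conditions, minimizing $J$ over $\mathcal{N}$ yields the positive least energy solutions sought here.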

Relevance: 30.00%

Abstract:

In many online applications, we need to maintain quantile statistics for a sliding window on a data stream. In its natural form, a sliding window is defined as the most recent N data items. In this paper, we study the problem of estimating quantiles over other types of sliding windows. We present a uniform framework to process quantile queries for time-constrained and filter-based sliding windows. Our algorithm makes one pass over the data stream and maintains an ε-approximate summary. It uses O((1/ε²)·log²(εN)) space, where N is the number of data items in the window. We extend this framework to further process generalized constrained sliding-window queries and prove that our technique is applicable to flexible window settings. Our performance study indicates that the space required in practice is much less than the theoretical bound and that the algorithm supports high-speed data streams.
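The paper's algorithm is not reproduced in this abstract; the toy sketch below only conveys the flavour of a one-pass, space-bounded windowed summary. The stream is cut into fixed-size blocks, each completed block is compressed by keeping evenly spaced elements of its sorted contents, and expired blocks are dropped whole. All names and constants are illustrative.

```python
from collections import deque

class WindowQuantiles:
    """Toy epsilon-approximate quantiles over a count-based sliding window."""

    def __init__(self, window=10_000, eps=0.01):
        self.window = window
        self.block_size = max(1, int(eps * window / 2))
        self.keep_every = max(1, int(eps * self.block_size))
        self.blocks = deque()   # compressed, sorted sample blocks
        self.current = []       # most recent, not yet compressed items

    def insert(self, x):
        self.current.append(x)
        if len(self.current) == self.block_size:
            self.current.sort()
            self.blocks.append(self.current[::self.keep_every])  # compress
            self.current = []
        # expire the oldest block once the window size is exceeded
        while len(self.blocks) * self.block_size + len(self.current) > self.window:
            self.blocks.popleft()

    def quantile(self, phi):
        """Approximate phi-quantile (0 <= phi <= 1) of the current window."""
        merged = sorted([x for b in self.blocks for x in b] + self.current)
        if not merged:
            return None
        return merged[min(len(merged) - 1, int(phi * len(merged)))]

wq = WindowQuantiles()
for v in range(25_000):
    wq.insert(v)
print(wq.quantile(0.5))   # roughly 20_000 for this ascending stream
```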

Relevance: 20.00%

Abstract:

We investigate the effect of the coefficient of the critical nonlinearity for the Neumann problem on the existence of least energy solutions. As a by-product we establish a Sobolev inequality with interior norm.
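The underlying problem is not reproduced in this abstract; a model form common in this literature, assumed here purely for orientation, is

$$-\Delta u + \lambda u = Q(x)\,u^{2^{*}-1}, \quad u > 0 \ \text{in } \Omega, \qquad \frac{\partial u}{\partial \nu} = 0 \ \text{on } \partial\Omega,$$

where $2^{*} = \frac{2N}{N-2}$ is the critical Sobolev exponent and $Q$ is the coefficient of the critical nonlinearity whose effect on least energy solutions is studied.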

Relevance: 20.00%

Abstract:

The received view of an ad hoc hypothesis is that it accounts for only the observation(s) it was designed to account for, and so non-adhocness is generally held to be necessary or important for an introduced hypothesis or modification to a theory. Attempts by Popper and several others to convincingly explicate this view, however, prove to be unsuccessful or of doubtful value, and familiar and firmer criteria for evaluating the hypotheses or modified theories so classified are characteristically available. These points are obscured largely because the received view fails to adequately separate psychology from methodology or to recognise ambiguities in the use of 'ad hoc'.

Relevance: 20.00%

Abstract:

Watkins proposes a neo-Popperian solution to the pragmatic problem of induction. He asserts that evidence can be used non-inductively to prefer the principle that corroboration is more successful over all human history to the principle that, say, counter-corroboration is more successful either over this same period or in the future. Watkins's argument for rejecting the first counter-corroborationist alternative is beside the point: whatever is the best strategy over all human history is irrelevant to the pragmatic problem of induction, since we are not required to act in the past. His argument for rejecting the second presupposes induction.

Relevance: 20.00%

Abstract:

We investigate the solvability of the Neumann problem (1.1) involving a critical Sobolev exponent. In the first part of this work it is assumed that the coefficients Q and h are at least continuous. Moreover, Q is positive on Ω̄ and λ > 0 is a parameter. We examine the common effect of the mean curvature and the shape of the graphs of the coefficients Q and h on the existence of low energy solutions. In the second part of this work we consider the same problem with Q replaced by −Q. In this case the problem can be supercritical and the existence results depend on integrability conditions on Q and h.
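Problem (1.1) is not reproduced in this listing; one common form in this line of work, assumed here for orientation only, couples a critical term with coefficient $Q$ to a lower-order term with coefficient $h$:

$$-\Delta u + \lambda u = Q(x)\,u^{2^{*}-1} + h(x)\,u^{q}, \quad u > 0 \ \text{in } \Omega, \qquad \frac{\partial u}{\partial \nu} = 0 \ \text{on } \partial\Omega, \qquad 1 < q < 2^{*}-1,$$

with $2^{*} = \frac{2N}{N-2}$ the critical Sobolev exponent. The actual exponents and the placement of $Q$ and $h$ may differ in the paper.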

Relevance: 20.00%

Abstract:

This paper will examine attitudes to eclectic stylistic borrowing in Japan in the twentieth century in light of the concept of authenticity. I am particularly interested in how an earlier claim correlating European modernist and traditional Japanese architecture continues to colour conceptions about what is an 'authentic' response for Japanese architects to make to contemporary conditions. Non-Western and vernacular architectures generally have been the repository for touristic desires for regional authenticity and difference. Yet Japan's unique role in the development of modernist architecture has given a peculiar intensity to the demand for its architecture to resist a perceived postmodern decadence.

Relevance: 20.00%

Abstract:

Heat transfer and entropy generation analysis of the thermally developing forced convection in a porous-saturated duct of rectangular cross-section, with walls maintained at a constant and uniform heat flux, is investigated based on the Brinkman flow model. The classical Galerkin method is used to obtain the fully developed velocity distribution. To solve the thermal energy equation, with the effects of viscous dissipation being included, the Extended Weighted Residuals Method (EWRM) is applied. The local (three dimensional) temperature field is solved by utilizing the Green’s function solution based on the EWRM where symbolic algebra is being used for convenience in presentation. Following the computation of the temperature field, expressions are presented for the local Nusselt number and the bulk temperature as a function of the dimensionless longitudinal coordinate, the aspect ratio, the Darcy number, the viscosity ratio, and the Brinkman number. With the velocity and temperature field being determined, the Second Law (of Thermodynamics) aspect of the problem is also investigated. Approximate closed form solutions are also presented for two limiting cases of MDa values. It is observed that decreasing the aspect ratio and MDa values increases the entropy generation rate.
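For orientation, the Brinkman momentum balance for fully developed unidirectional flow in the duct cross-section can be written in the standard form (generic symbols; the paper's nondimensionalisation may differ)

$$\mu_{\mathrm{eff}}\left(\frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right) - \frac{\mu}{K}\,u = \frac{dp}{dx},$$

where $K$ is the permeability. Nondimensionalising with a duct length scale $a$ introduces the Darcy number $Da = K/a^2$ and the viscosity ratio $M = \mu_{\mathrm{eff}}/\mu$; their product $MDa$ is the grouping whose limiting values the closed form solutions above refer to.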