94 results for Regularization scheme
in University of Queensland eSpace - Australia
Abstract:
A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion and better enforcement of regularization constraints than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration. (c) 2005 Elsevier B.V. All rights reserved.
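As a rough illustration of the idea only (not the authors' PEST-based implementation), the sketch below fits a linear forward model with a Tikhonov preferred-value constraint and adjusts the relative regularization weight during the inversion itself, so that the data misfit settles near the expected noise level. The forward matrix `G`, observations `d`, preferred parameter values `p_pref`, and `target_misfit` are all assumed inputs of the sketch.

```python
import numpy as np

def tikhonov_calibrate(G, d, p_pref, target_misfit, mu=1.0, max_iter=50):
    """Tikhonov-regularized least squares with an adaptively tuned weight.

    Minimizes ||G p - d||^2 + mu^2 ||p - p_pref||^2 and rescales mu until the
    measurement misfit roughly matches target_misfit (the expected noise level),
    mimicking the idea of letting the inversion decide how strongly to enforce
    the regularization constraints.
    """
    n = G.shape[1]
    I = np.eye(n)
    p = p_pref.copy()
    for _ in range(max_iter):
        A = G.T @ G + mu**2 * I                 # normal equations of the augmented problem
        b = G.T @ d + mu**2 * p_pref
        p = np.linalg.solve(A, b)
        misfit = max(np.linalg.norm(G @ p - d), 1e-12)
        if abs(misfit - target_misfit) < 1e-3 * target_misfit:
            break
        # A larger mu enforces the preferred values more strongly and raises
        # the data misfit, so nudge mu toward the target misfit level (damped).
        mu *= np.sqrt(target_misfit / misfit)
    return p, mu

# Toy problem: 20 observations, 10 parameters, preferred parameter values of zero
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 10))
d = G @ rng.normal(size=10) + 0.1 * rng.normal(size=20)
p_est, mu_est = tikhonov_calibrate(G, d, np.zeros(10), target_misfit=0.1 * np.sqrt(20))
```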
Abstract:
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
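For context only (an illustration of the dual-rail photonic encoding, not the protocol described above): with a single photon shared between two optical modes, a beam splitter and a phase shifter act as 2x2 unitaries directly on the qubit amplitudes, which is what makes these linear-optical elements natural one-qubit gates. A minimal numpy sketch:

```python
import numpy as np

def beam_splitter(theta):
    """Mode transformation of a lossless beam splitter on two optical modes.

    For a single photon shared between the two modes (a dual-rail qubit),
    this 2x2 unitary acts directly on the qubit amplitudes.
    """
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def phase_shifter(phi):
    """Relative phase shift applied to the second mode."""
    return np.diag([1.0, np.exp(1j * phi)])

# Single photon in mode 0 (logical |0>) sent through a 50:50 beam splitter,
# producing an equal superposition of the two rails (a Hadamard-like gate).
qubit = np.array([1.0, 0.0], dtype=complex)
print(beam_splitter(np.pi / 4) @ qubit)        # ~[0.707, 0.707]
```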
Abstract:
Power system real-time security assessment is one of the fundamental modules of electricity markets. Typically, when a contingency occurs, the security assessment and enhancement module is required to be ready for action within about 20 minutes to meet the real-time requirement. The recent California blackout again highlighted the importance of system security. This paper proposes an approach for power system security assessment and enhancement based on information provided from a pre-defined system parameter space. The proposed scheme opens up an efficient way for real-time security assessment and enhancement in a competitive electricity market for the single-contingency case.
Abstract:
The problem of extracting pore size distributions from characterization data is solved here with particular reference to adsorption. The technique developed is based on a finite element collocation discretization of the adsorption integral, with fitting of the isotherm data by least squares using regularization. A rapid and simple technique for ensuring non-negativity of the solutions is also developed; it modifies an original solution that contains negative values. The technique yields stable and converged solutions, and is implemented in a package, RIDFEC. The package is demonstrated to be robust, yielding results which are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice of relative or absolute error norm in the least-squares analysis is best based on the kind of error in the data. (C) 1998 Elsevier Science Ltd. All rights reserved.
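A hedged sketch of the underlying idea (not the RIDFEC package itself): discretize the adsorption integral into a kernel matrix `K`, fit the measured isotherm `q` by regularized least squares with a smoothing penalty, and repair negativity by dropping negative components and refitting. `K`, `q`, and the regularization weight `alpha` are assumed inputs.

```python
import numpy as np

def pore_size_distribution(K, q, alpha=1e-2, n_refit=5):
    """Recover a pore size distribution f from isotherm data q ~ K f.

    K[i, j] is the model uptake at pressure point i for pores of size j.
    A second-difference penalty supplies the regularization; negative
    components are zeroed and the fit is repeated on the remaining pores,
    a simple stand-in for the non-negativity repair described above.
    """
    n = K.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)      # second-difference (smoothing) operator
    active = np.ones(n, dtype=bool)
    f = np.zeros(n)
    for _ in range(n_refit):
        if not active.any():
            break
        Ka, La = K[:, active], L[:, active]
        A = Ka.T @ Ka + alpha * (La.T @ La)
        f_active = np.linalg.solve(A, Ka.T @ q)
        f = np.zeros(n)
        f[active] = f_active
        if (f_active >= 0).all():
            break
        active &= f >= 0                     # drop pores with negative weight and refit
    return np.clip(f, 0.0, None)
```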
Abstract:
The present fundamental knowledge of fluid turbulence has been established primarily from hot- and cold-wire measurements. Unfortunately, however, these measurements necessarily suffer from contamination by noise, since no reliable method has previously been available to optimally filter noise from the measured signals. This limitation has profoundly impeded progress in understanding turbulence. We address this limitation by presenting a simple, fast-convergent iterative scheme to digitally filter signals optimally and to determine Kolmogorov scales definitively. The great efficacy of the scheme is demonstrated by its application to the instantaneous velocity measured in a turbulent jet.
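A minimal sketch of one way such a fixed-point filtering scheme can work (an illustration under simplifying assumptions, not necessarily the authors' algorithm): low-pass filter the velocity record at a trial cutoff, estimate the dissipation rate and Kolmogorov scale from the filtered derivative via Taylor's hypothesis, set the new cutoff at the corresponding Kolmogorov frequency, and repeat until the cutoff converges. The sampling rate `fs`, kinematic viscosity `nu`, and the isotropy assumption are inputs of the sketch.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_to_kolmogorov(u, fs, nu=1.5e-5, max_iter=20):
    """Iteratively low-pass filter a hot-wire velocity record at the Kolmogorov frequency.

    The dissipation rate is estimated from the derivative of the filtered
    signal using Taylor's hypothesis and the local mean velocity, the
    Kolmogorov scale and frequency are recomputed, and the cutoff is updated
    until it converges.
    """
    U = np.mean(u)                     # local mean (convection) velocity, assumed nonzero
    up = u - U                         # fluctuating component
    fc = 0.45 * fs                     # initial cutoff just below the Nyquist frequency
    uf, eta = up, None
    for _ in range(max_iter):
        b, a = butter(4, fc / (fs / 2.0), btype="low")
        uf = filtfilt(b, a, up)
        dudx = np.gradient(uf, 1.0 / fs) / U        # Taylor's hypothesis: d/dx = (1/U) d/dt
        eps = 15.0 * nu * np.mean(dudx ** 2)        # isotropic dissipation estimate
        eta = (nu ** 3 / eps) ** 0.25               # Kolmogorov length scale
        fc_new = min(U / (2.0 * np.pi * eta), 0.45 * fs)  # Kolmogorov frequency, capped
        if abs(fc_new - fc) < 0.01 * fc:
            break
        fc = fc_new
    return uf, eta, fc
```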
Abstract:
This paper explores the feasibility of adopting an integrated economic approach to raise farmers’ tolerance of the presence of elephants on their farming lands. Responses to this approach were sought from a sample of farmers in the areas affected by human-elephant conflict in the northwestern province of Sri Lanka. Results from a contingent valuation survey of their willingness to pay for a scheme to conserve elephants are also reported. Two separate logit regression analyses were undertaken to examine the factors that influence the farmers’ responses to the payment principle question and their opinions on the integrated economic approach. Although the majority of the respondents expressed a willingness to pay for the proposed scheme and supported the implementation of the integrated approach, we have insufficient data to determine whether their support and financial contributions would be sufficient to set up this programme, or to predict its economic viability. Nevertheless, the overall finding of this study provides an improved economic assessment of the farmers’ attitudes towards the wild elephant in Sri Lanka. At the same time the study shows that, contrary to commonly held assumptions, farmers in this developing country do support wildlife conservation.
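As an illustration of the mechanics of the logit regressions the study reports (the variable names, coefficients, and data below are entirely hypothetical, not the survey data):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical covariates for a willingness-to-pay logit: household income,
# crop-damage incidents, and distance to elephant habitat. The 0/1 response
# to the payment-principle question is simulated purely for illustration.
rng = np.random.default_rng(42)
n = 300
X = np.column_stack([
    rng.normal(50, 15, n),    # household income (thousand rupees)
    rng.poisson(2, n),        # crop damage incidents in the past year
    rng.uniform(0, 20, n),    # distance to elephant habitat (km)
])
true_logit = -0.5 + 0.03 * X[:, 0] - 0.4 * X[:, 1] - 0.05 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

# Logit regression of the willingness-to-pay response on the covariates
model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.params)       # coefficient signs indicate each factor's direction of influence
```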
Abstract:
Malondialdehyde and acetaldehyde react together with proteins and form hybrid protein conjugates designated as MAA adducts, which have been detected in livers of ethanol-fed animals. Our previous studies have shown that MAA adducts are comprised of two distinct products. One adduct is composed of two molecules of malondialdehyde and one molecule of acetaldehyde and was identified as the 4-methyl-1,4-dihydropyridine-3,5-dicarbaldehyde derivative of an amino group (MDHDC adduct). The other adduct is a 1:1 adduct of malondialdehyde and acetaldehyde and was identified as the 2-formyl-3-(alkylamino)butanal derivative of an amino group (FAAB adduct). In this study, information on the mechanism of MAA adduct formation was obtained, focusing on whether the FAAB adduct serves as a precursor for the MDHDC adduct. On the basis of chemical analysis and NMR spectroscopy, two initial reaction steps appear to be a prerequisite for MDHDC formation. One step involves the reaction of one molecule of malondialdehyde and one of acetaldehyde with an amino group of a protein to form the FAAB product, while the other step involves the generation of a malondialdehyde-enamine. It appears that generation of the MDHDC adduct requires the FAAB moiety to be transferred to the nitrogen of the MDA-enamine. Additional experiments indicated that, for efficient reaction of FAAB with the enamine to take place, these two intermediates likely must be located in close proximity to each other on the protein. Further studies showed that the incubation of liver proteins from ethanol-fed rats with MDA resulted in a marked generation of MDHDC adducts, indicating the presence of a pool of FAAB adducts in the liver of ethanol-fed animals. Overall, these findings show that MDHDC-protein adduct formation occurs via the reaction of the FAAB moiety with a malondialdehyde-enamine, and further suggest that a similar mechanism may be operative in vivo in the liver during prolonged ethanol consumption.
Abstract:
A major limitation in any high-performance digital communication system is the linearity region of the transmitting amplifier. Nonlinearities typically lead to signal clipping. Efficient communication in such conditions requires maintaining a low peak-to-average power ratio (PAR) in the transmitted signal while achieving a high throughput of data. Excessive PAR leads either to frequent clipping or to inadequate resolution in the analog-to-digital or digital-to-analog converters. Currently proposed signaling schemes for future generation wireless communications suffer from a high PAR. This paper presents a new signaling scheme for channels with clipping which achieves a PAR as low as 3. For a given linear range in the transmitter's digital-to-analog converter, this scheme achieves a lower bit-error rate than existing multicarrier schemes, owing to increased separation between constellation points. We present the theoretical basis for this new scheme, approximations for the expected bit-error rate, and simulation results. (C) 2002 Elsevier Science (USA).
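For reference, the peak-to-average power ratio quoted above is the ratio of peak to mean instantaneous power of the transmitted waveform. The sketch below (an illustration, not the paper's signaling scheme) computes it for a generic 64-subcarrier multicarrier symbol, whose PAR typically sits well above the low value claimed for the proposed scheme.

```python
import numpy as np

def par_db(x):
    """Peak-to-average power ratio of a (possibly complex) baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

# A generic 64-subcarrier multicarrier symbol carrying random QPSK data
rng = np.random.default_rng(1)
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
x = np.fft.ifft(qpsk)                           # time-domain multicarrier waveform
print(f"multicarrier PAR: {par_db(x):.1f} dB")  # commonly well above 6 dB for a single symbol
```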
Abstract:
Which gates are universal for quantum computation? Although it is well known that certain gates on two-level quantum systems (qubits), such as the controlled-NOT, are universal when assisted by arbitrary one-qubit gates, it has only recently become clear precisely what class of two-qubit gates is universal in this sense. We present an elementary proof that any entangling two-qubit gate is universal for quantum computation, when assisted by one-qubit gates. A proof of this result for systems of arbitrary finite dimension has been provided by Brylinski and Brylinski; however, their proof relies on a long argument using advanced mathematics. In contrast, our proof provides a simple constructive procedure which is close to optimal and experimentally practical.
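A small numerical companion to the statement above (a heuristic check, not the paper's proof): a two-qubit gate is entangling exactly when it maps some product state to an entangled state, which can be probed by sampling product states and inspecting the Schmidt rank of the output.

```python
import numpy as np

def is_entangling(U, trials=200, tol=1e-9, seed=0):
    """Heuristic test of whether a two-qubit gate U can create entanglement.

    Applies U to random product states and checks the Schmidt rank of the
    output via an SVD of the 2x2 coefficient matrix. Finding any entangled
    output certifies that U is entangling; proving a gate is NOT entangling
    would require more care than random sampling.
    """
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        a = rng.normal(size=2) + 1j * rng.normal(size=2)
        b = rng.normal(size=2) + 1j * rng.normal(size=2)
        psi = np.kron(a / np.linalg.norm(a), b / np.linalg.norm(b))
        out = (U @ psi).reshape(2, 2)
        s = np.linalg.svd(out, compute_uv=False)
        if s[1] > tol:           # second Schmidt coefficient nonzero => entangled output
            return True
    return False

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
print(is_entangling(CNOT))                       # True
print(is_entangling(np.eye(4, dtype=complex)))   # False: identity creates no entanglement
```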
Abstract:
A new algebraic Bethe ansatz scheme is proposed to diagonalize classes of integrable models relevant to the description of Bose-Einstein condensation in dilute alkali gases. This is achieved by introducing the notion of Z-graded representations of the Yang-Baxter algebra. (C) 2003 American Institute of Physics.
Abstract:
Use of nonlinear parameter estimation techniques is now commonplace in ground water model calibration. However, there is still ample room for further development of these techniques in order to enable them to extract more information from calibration datasets, to more thoroughly explore the uncertainty associated with model predictions, and to make them easier to implement in various modeling contexts. This paper describes the use of pilot points as a methodology for spatial hydraulic property characterization. When used in conjunction with nonlinear parameter estimation software that incorporates advanced regularization functionality (such as PEST), use of pilot points can add a great deal of flexibility to the calibration process at the same time as it makes this process easier to implement. Pilot points can be used either as a substitute for zones of piecewise parameter uniformity, or in conjunction with such zones. In either case, they allow the disposition of areas of high and low hydraulic property value to be inferred through the calibration process, without the need for the modeler to guess the geometry of such areas prior to estimating the parameters that pertain to them. Pilot points and regularization can also be used as an adjunct to geostatistically based stochastic parameterization methods. Using the techniques described herein, a series of hydraulic property fields can be generated, all of which recognize the stochastic characterization of an area at the same time that they satisfy the constraints imposed on hydraulic property values by the need to ensure that model outputs match field measurements. Model predictions can then be made using all of these fields as a mechanism for exploring predictive uncertainty.
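A minimal sketch of the pilot-point idea (inverse-distance weighting is used here as a simple stand-in for the kriging normally paired with PEST-style pilot points; the coordinates and log-conductivity values are illustrative): a handful of estimated values at pilot-point locations is spread onto every cell of the model grid, so the calibration adjusts only the pilot-point values while the full hydraulic property field follows.

```python
import numpy as np

def pilot_points_to_grid(pp_xy, pp_logk, grid_xy, power=2.0):
    """Spread pilot-point log-conductivity values onto model cells.

    pp_xy are pilot-point coordinates, pp_logk their (estimated) log10 K
    values, grid_xy the cell centres of the model grid. Inverse-distance
    weighting interpolates the sparse pilot-point values to every cell.
    """
    d = np.linalg.norm(grid_xy[:, None, :] - pp_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)               # avoid division by zero at a pilot point
    w = 1.0 / d**power
    w /= w.sum(axis=1, keepdims=True)
    return w @ pp_logk

# Toy usage: 4 pilot points spread onto a 10x10 grid of unit cells
pp_xy = np.array([[2.0, 2.0], [8.0, 2.0], [2.0, 8.0], [8.0, 8.0]])
pp_logk = np.array([-4.0, -3.0, -5.0, -3.5])          # log10 hydraulic conductivity
gx, gy = np.meshgrid(np.arange(10) + 0.5, np.arange(10) + 0.5)
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
logk_field = pilot_points_to_grid(pp_xy, pp_logk, grid_xy).reshape(10, 10)
```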