128 results for Feedback length minimization problem
at University of Queensland eSpace - Australia
Abstract:
A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion, and better enforcement of regularization constraints, than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration. (c) 2005 Elsevier B.V. All rights reserved.
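For reference, the constrained-minimization form of Tikhonov regularization described above is conventionally written as follows (generic notation, not taken from the paper):

\[
\min_{\mathbf{p}} \ \Phi_r(\mathbf{p}) \quad \text{subject to} \quad \Phi_m(\mathbf{p}) \le \Phi_m^{\ell},
\]

where \(\Phi_m\) measures the misfit between model outputs and the calibration dataset, \(\Phi_r\) penalizes departures of the parameters \(\mathbf{p}\) from preferred values or preferred inter-parameter relationships, and \(\Phi_m^{\ell}\) is a target measurement objective set by the modeler. The scheme described in the abstract additionally treats the relative weights applied to individual regularization constraints as quantities estimated during the inversion itself.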
Abstract:
Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for an hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, thus possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization. (C) 2005 Elsevier Ltd. All rights reserved.
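In the linearized theory underlying this kind of analysis, the weighted-average relationship mentioned above is commonly expressed through a resolution matrix (assumed notation, for illustration):

\[
\hat{\mathbf{p}} = \mathbf{R}\,\mathbf{p}_{\text{true}},
\]

where \(\hat{\mathbf{p}}\) is the estimated parameter field and \(\mathbf{R}\) is the resolution matrix of the regularized inverse. Each row of \(\mathbf{R}\) holds the averaging weights for one estimated parameter; \(\mathbf{R} = \mathbf{I}\) would correspond to perfect resolution, while broad rows of \(\mathbf{R}\) correspond to the loss of detail described above.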
Abstract:
For all m greater than or equal to 3, the Oberwolfach problem is solved for the case where the 2-factors consist of two cycles of lengths m and m + 1, and for the case where the 2-factors consist of two cycles of lengths m and m + 2.
Abstract:
Purpose. To conduct a controlled trial of traditional and problem-based learning (PBL) methods of teaching epidemiology. Method. All second-year medical students (n = 136) at The University of Western Australia Medical School were offered the chance to participate in a randomized controlled trial of teaching methods for an epidemiology course. Students who consented to participate (n = 80) were randomly assigned to either a PBL or a traditional course. Students who did not consent or did not return the consent form (n = 56) were assigned to the traditional course. Students in both streams took identical quizzes and exams. These scores, a collection of semi-quantitative feedback from all students, and a qualitative analysis of interviews with a convenience sample of six students from each stream were compared. Results. There was no significant difference in performance on quizzes or exams between PBL and traditional students. Students using PBL reported a stronger grasp of epidemiologic principles, enjoyed working with a group, and, at the end of the course, were more enthusiastic about epidemiology and its professional relevance to them than were students in the traditional course. PBL students worked more steadily during the semester but spent only marginally more time on the epidemiology course overall. Interviews corroborated these findings. Non-consenting students were older (p < 0.02) and more likely to come from non-English-speaking backgrounds (p < 0.005). Conclusions. PBL provides an academically equivalent but personally far richer learning experience. The adoption of PBL approaches to medical education makes it important to study whether PBL presents particular challenges for students whose first language is not the language of instruction.
Abstract:
This paper presents a new approach for the design of genuinely finite-length shim and gradient coils, intended for use in magnetic resonance imaging equipment. A cylindrical target region is located asymmetrically, at an arbitrary position within a coil of finite length. A desired target field is specified on the surface of that region, and a method is given that enables winding patterns on the surface of the coil to be designed, to produce the desired field at the inner target region. The method uses a minimization technique combined with regularization, to find the current density on the surface of the coil. The method is illustrated for linear, quadratic and cubic magnetic target fields located asymmetrically within a finite-length coil.
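A minimization-with-regularization formulation of the kind described typically takes the following form (a sketch with assumed symbols; the precise penalty term used in the paper may differ, e.g. winding curvature or power dissipation):

\[
\min_{\mathbf{j}} \ \int_{T} \left[ B_z(\mathbf{j};\mathbf{r}) - B_z^{\text{target}}(\mathbf{r}) \right]^2 dS \;+\; \lambda \int_{S} |\mathbf{j}|^2\, dS,
\]

where \(\mathbf{j}\) is the current density on the coil surface \(S\), \(B_z(\mathbf{j};\cdot)\) is the axial field it produces on the target region \(T\), \(B_z^{\text{target}}\) is the specified linear, quadratic or cubic field, and \(\lambda\) controls the trade-off between field fidelity and the smoothness of the resulting winding patterns.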
Abstract:
There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon and by Zhang and Einstein in recent publications. The estimator is a very useful one in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, none of the recent works, nor the original work by Pahl, provides a rigorous basis for the determination of a confidence interval for the estimator, or a confidence region for the estimator and the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example. (C) 2003 Elsevier Science Ltd. All rights reserved.
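For context, classical ML theory yields asymptotic confidence intervals of the familiar form (generic notation, not the paper's):

\[
\hat{\mu} \pm z_{1-\alpha/2}\,\sqrt{I(\hat{\mu})^{-1}},
\]

where \(\hat{\mu}\) is the MLE, \(I(\hat{\mu})\) is the Fisher information evaluated at the estimate, and \(z_{1-\alpha/2}\) is the standard normal quantile. The paper's contribution is to show that Pahl's mean trace length estimator is itself the MLE, so intervals of this type become legitimately available for it.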
Abstract:
The buffer allocation problem (BAP) is a well-known difficult problem in the design of production lines. We present a stochastic algorithm for solving the BAP, based on the cross-entropy method, a new paradigm for stochastic optimization. The algorithm involves the following iterative steps: (a) the generation of buffer allocations according to a certain random mechanism, followed by (b) the modification of this mechanism on the basis of cross-entropy minimization. Through various numerical experiments we demonstrate the efficiency of the proposed algorithm and show that the method can quickly generate (near-)optimal buffer allocations for fairly large production lines.
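To illustrate the two iterative steps (a) and (b) described above, here is a minimal cross-entropy sketch in Python. It is not the authors' implementation: the `throughput` function is a placeholder for a simulation of the production line, and all parameter values are illustrative.

```python
import numpy as np

def cross_entropy_bap(throughput, n_buffers, n_slots,
                      n_samples=200, elite_frac=0.1,
                      n_iters=50, smoothing=0.7, seed=0):
    """Cross-entropy search for a buffer allocation maximizing `throughput`.

    `throughput(alloc)` must score an integer allocation (length `n_slots`,
    summing to `n_buffers`), e.g. via a simulation of the production line.
    """
    rng = np.random.default_rng(seed)
    p = np.full(n_slots, 1.0 / n_slots)      # buffer-placement distribution
    n_elite = max(1, int(elite_frac * n_samples))
    best_alloc, best_score = None, -np.inf
    for _ in range(n_iters):
        # (a) generate allocations: drop each buffer unit into a random slot
        slots = rng.choice(n_slots, size=(n_samples, n_buffers), p=p)
        allocs = np.array([np.bincount(s, minlength=n_slots) for s in slots])
        scores = np.array([throughput(a) for a in allocs])
        elite = allocs[np.argsort(scores)[-n_elite:]]
        if scores.max() > best_score:
            best_score, best_alloc = scores.max(), allocs[scores.argmax()].copy()
        # (b) modify the mechanism: refit p to the elite samples (CE update),
        # smoothed to avoid premature collapse to a degenerate distribution
        p = smoothing * (elite.sum(axis=0) / elite.sum()) + (1 - smoothing) * p
    return best_alloc, best_score
```

With a discrete-event simulation of the line supplying `throughput`, the sampling distribution typically concentrates buffer mass around bottleneck stations within a few dozen iterations; the elite fraction and smoothing constant are the usual cross-entropy tuning knobs.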
Abstract:
For quantum systems with linear dynamics in phase space, much of classical feedback control theory applies. However, there are some questions that are sensible only for the quantum case: given a fixed interaction between the system and the environment, what is the optimal measurement on the environment for a particular control problem? We show that for a broad class of optimal (state-based) control problems (the stationary linear-quadratic-Gaussian class), this question is a semidefinite program. Moreover, the answer also applies to Markovian (current-based) feedback.
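For reference, a semidefinite program has the standard form (the specific matrices arising from the LQG control problem are derived in the paper itself):

\[
\min_{X}\ \operatorname{tr}(C X) \quad \text{s.t.} \quad \operatorname{tr}(A_i X) = b_i,\ i = 1,\dots,m, \qquad X \succeq 0,
\]

so the question of the optimal environmental measurement reduces to a convex optimization over a positive semidefinite matrix variable, solvable efficiently by interior-point methods.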
Abstract:
We present existence results for a Neumann problem involving critical Sobolev nonlinearities both on the right-hand side of the equation and in the boundary condition. Positive solutions are obtained through constrained minimization on the Nehari manifold. Our approach is based on the concentration-compactness principle of P. L. Lions and M. Struwe.
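A model problem of the type considered is the following (assumed form; the paper's coefficients and exponents may differ):

\[
-\Delta u + \lambda u = |u|^{2^{*}-2}u \ \text{in } \Omega, \qquad \frac{\partial u}{\partial \nu} = |u|^{q^{*}-2}u \ \text{on } \partial\Omega,
\]

with \(2^{*} = 2N/(N-2)\) the critical Sobolev exponent and \(q^{*} = 2(N-1)/(N-2)\) the critical trace exponent; positive solutions are then sought as minimizers of the associated energy functional restricted to the Nehari manifold.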
Abstract:
In high-velocity open channel flows, the measurement of air-water flow properties is complicated by the strong interactions between the flow turbulence and the entrained air. In the present study, an advanced signal processing of traditional single- and dual-tip conductivity probe signals is developed to provide further details on the air-water turbulence level, time and length scales. The technique is applied to turbulent open channel flows on a stepped chute conducted in a large-size facility with flow Reynolds numbers ranging from 3.8E+5 to 7.1E+5. The air-water flow properties presented some basic characteristics that were qualitatively and quantitatively similar to previous skimming flow studies. Some self-similar relationships were observed systematically at both macroscopic and microscopic levels. These included the distributions of void fraction, bubble count rate, interfacial velocity and turbulence level at a macroscopic scale, and the auto- and cross-correlation functions at the microscopic level. New correlation analyses yielded a characterisation of the large eddies advecting the bubbles. Basic results included the integral turbulent length and time scales. The turbulent length scales characterised some measure of the size of large vortical structures advecting air bubbles in the skimming flows, and the data were closely related to the characteristic air-water depth Y90. In the spray region, present results highlighted the existence of an upper spray region for C > 0.95 to 0.97 in which the distributions of droplet chord sizes and integral advection scales presented some marked differences with the rest of the flow.
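For reference, the auto-correlation integral time scale mentioned above is conventionally defined as (assumed notation; the paper's exact definitions may differ):

\[
T_{xx} = \int_{0}^{\tau_{0}} R_{xx}(\tau)\, d\tau,
\]

where \(R_{xx}\) is the normalized auto-correlation function of a probe signal and \(\tau_{0}\) its first zero-crossing; multiplying such time scales by the local interfacial velocity gives advection length scales, while cross-correlations between probe tips at known separations yield integral length scales directly.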
Abstract:
Results of two experiments are reported that examined how people respond to rectangular targets of different sizes in simple hitting tasks. If a target moves in a straight line and a person is constrained to move along a linear track oriented perpendicular to the target's motion, then the length of the target along its direction of motion constrains the temporal accuracy and precision required to make the interception. The dimensions of the target perpendicular to its direction of motion place no constraints on performance in such a task. In contrast, if the person is not constrained to move along a straight track, the target's dimensions may constrain the spatial as well as the temporal accuracy and precision. The experiments reported here examined how people responded to targets of different vertical extent (height): the task was to strike targets that moved along a straight, horizontal path. In experiment 1 participants were constrained to move along a horizontal linear track to strike targets and so target height did not constrain performance. Target height, length and speed were co-varied. Movement time (MT) was unaffected by target height but was systematically affected by length (briefer movements to smaller targets) and speed (briefer movements to faster targets). Peak movement speed (Vmax) was influenced by all three independent variables: participants struck shorter, narrower and faster targets harder. In experiment 2, participants were constrained to move in a vertical plane normal to the target's direction of motion. In this task target height constrains the spatial accuracy required to contact the target. Three groups of eight participants struck targets of different height but of constant length and speed, hence constant temporal accuracy demand (different for each group, one group struck stationary targets = no temporal accuracy demand). On average, participants showed little or no systematic response to changes in spatial accuracy demand on any dependent measure (MT, Vmax, spatial variable error). The results are interpreted in relation to previous results on movements aimed at stationary targets in the absence of visual feedback.
Abstract:
Some motor tasks can be completed, quite literally, with our eyes shut. Most people can touch their nose without looking or reach for an object after only a brief glance at its location. This distinction leads to one of the defining questions of movement control: is information gleaned prior to starting the movement sufficient to complete the task (open loop), or is feedback about the progress of the movement required (closed loop)? One task that has commanded considerable interest in the literature over the years is that of steering a vehicle, in particular lane-correction and lane-changing tasks. Recent work has suggested that this type of task can proceed in a fundamentally open loop manner [1 and 2], with feedback mainly serving to correct minor, accumulating errors. This paper reevaluates the conclusions of these studies by conducting a new set of experiments in a driving simulator. We demonstrate that, in fact, drivers rely on regular visual feedback, even during the well-practiced steering task of lane changing. Without feedback, drivers fail to initiate the return phase of the maneuver, resulting in systematic errors in final heading. The results provide new insight into the control of vehicle heading, suggesting that drivers employ a simple policy of “turn and see,” with only limited understanding of the relationship between steering angle and vehicle heading.
Abstract:
We investigate the effect of the coefficient of the critical nonlinearity for the Neumann problem on the existence of least energy solutions. As a by-product we establish a Sobolev inequality with interior norm.
Abstract:
The received view of an ad hoc hypothesis is that it accounts for only the observation(s) it was designed to account for, and so non-adhocness is generally held to be necessary or important for an introduced hypothesis or modification to a theory. Attempts by Popper and several others to convincingly explicate this view, however, prove to be unsuccessful or of doubtful value, and familiar and firmer criteria for evaluating the hypotheses or modified theories so classified are characteristically available. These points are obscured largely because the received view fails to adequately separate psychology from methodology or to recognise ambiguities in the use of 'ad hoc'.