15 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability

in University of Queensland eSpace - Australia


Relevance:

100.00%

Abstract:

In this paper, numerical simulations are used in an attempt to find optimal source profiles for high-frequency radiofrequency (RF) volume coils. Biologically loaded, shielded/unshielded circular and elliptical birdcage coils operating at 170 MHz, 300 MHz and 470 MHz are modelled using the FDTD method for both 2D and 3D cases. Taking advantage of the fact that some aspects of the electromagnetic system are linear, two approaches have been proposed for the determination of the drives for individual elements in the RF resonator. The first method is an iterative optimization technique with a kernel for the evaluation of RF fields inside an imaging plane of a human head model, using pre-characterized sensitivity profiles of the individual rungs of a resonator; the second method is a regularization-based technique. In the second approach, a sensitivity matrix is explicitly constructed and a regularization procedure is employed to solve the ill-posed problem. Test simulations show that both methods can improve the B1-field homogeneity in both focused and non-focused scenarios. While the regularization-based method is more efficient, the first method is more flexible, as it can take into account other issues such as controlling SAR or reshaping the resonator structures. It is hoped that these schemes and their extensions will be useful for the determination of multi-element RF drives in a variety of applications.
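
Because the per-rung field contributions are stated to superpose linearly, the drive-determination step in the second (regularization-based) method can be written as a regularized least-squares problem. The following is an illustrative sketch in assumed notation, not the paper's own formulation:

```latex
% Superposition of pre-characterized rung sensitivities s_k(r), and a
% zero-order Tikhonov solution for the complex drives d_k (illustrative):
\begin{align}
  B_1(\mathbf{r}) &= \sum_{k=1}^{K} d_k\, s_k(\mathbf{r})
    \quad\Longleftrightarrow\quad \mathbf{B}_1 = S\,\mathbf{d}, \\
  \hat{\mathbf{d}} &= \arg\min_{\mathbf{d}}
    \left\{ \lVert S\,\mathbf{d} - \mathbf{B}_{\mathrm{target}} \rVert_2^2
    + \lambda\, \lVert \mathbf{d} \rVert_2^2 \right\},
\end{align}
% where S is the explicitly constructed sensitivity matrix and \lambda
% balances field homogeneity against drive magnitude.
```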

Relevance:

100.00%

Abstract:

Calculating the potentials on the heart's epicardial surface from the body surface potentials constitutes one form of inverse problem in electrocardiography (ECG). Since these problems are ill-posed, one approach is to use zero-order Tikhonov regularization, where the squared norms of both the residual and the solution are minimized, with a relative weight determined by the regularization parameter. In this paper, we used three different methods to choose the regularization parameter in inverse solutions of ECG: the L-curve, generalized cross validation (GCV) and the discrepancy principle (DP). Among them, the GCV method has received less attention in solutions to ECG inverse problems than the other methods. Since the DP approach requires knowledge of the noise norm, we used a model function to estimate it. The performance of the methods was compared using a concentric-sphere model and a real-geometry heart-torso model, with a distribution of current dipoles placed inside the heart model as the source; Gaussian measurement noise was added to the body surface potentials. The results show that all three methods produce good inverse solutions when noise is low, but as the noise increases the DP approach produces better results than the L-curve and GCV methods, particularly in the real-geometry model. Both the GCV and L-curve methods perform well in low- to medium-noise situations.
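
As a concrete illustration of zero-order Tikhonov regularization with a GCV-chosen parameter, here is a minimal numerical sketch; the SVD-based shortcut and grid search are standard, but the synthetic test problem and all names are assumptions, not the paper's models or data:

```python
# Minimal sketch: zero-order Tikhonov solution and GCV parameter choice.
import numpy as np

def tikhonov_solve(A, b, lam):
    """x = argmin ||A x - b||^2 + lam^2 ||x||^2, computed via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)              # Tikhonov filter factors
    return Vt.T @ (f / s * (U.T @ b))

def gcv_score(A, b, lam):
    """GCV function G(lam) = ||A x_lam - b||^2 / trace(I - influence)^2."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)
    beta = U.T @ b
    resid2 = np.sum(((1 - f) * beta)**2) + np.linalg.norm(b - U @ beta)**2
    return resid2 / (A.shape[0] - np.sum(f))**2

# Synthetic ill-conditioned test problem (placeholder for the ECG transfer
# matrix mapping epicardial to body-surface potentials):
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 25)) @ np.diag(0.5 ** np.arange(25))
b = A @ rng.normal(size=25) + 0.01 * rng.normal(size=40)

lams = np.logspace(-6, 1, 200)
lam_gcv = min(lams, key=lambda l: gcv_score(A, b, l))  # GCV minimizer
x_hat = tikhonov_solve(A, b, lam_gcv)
```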

Relevance:

100.00%

Abstract:

This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms in order to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and often constraints are built into the most fundamental methods (e.g. Area Based Matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking a solution that is deemed most likely. Using this formulation, this paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that it is not possible to guarantee a unique solution, no matter how many images are taken of the scene, how they are oriented, or even how much color variation is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large for even very small voxel spaces (a 5 x 5 voxel space gives 10 to 10^7 solutions). This shows the need for constraints to reduce the solution space to a reasonable size. Finally, it is noted that because of the discrete nature of the formulation, the solution space size can be easily calculated, making the formulation a useful tool to numerically evaluate the usefulness of any constraints that are added.
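
The solution-counting idea can be made concrete with a toy analogue: a 2-D "world" observed through 1-D images, with the entire solution space enumerated by brute force. The scene encoding and camera model below are my own simplifying assumptions, far cruder than the paper's first-order-logic formulation:

```python
# Toy analogue: count every 2-D scene consistent with its 1-D "images".
from itertools import product

N = 3                       # N x N voxel grid
STATES = (None, 'R', 'G')   # each voxel: empty, red, or green

def render(scene):
    """Orthographic 1-D images: the first non-empty voxel along each row
    (viewed from the left) and each column (viewed from the top)."""
    rows = [next((v for v in row if v), None) for row in scene]
    cols = [next((scene[i][j] for i in range(N) if scene[i][j]), None)
            for j in range(N)]
    return rows, cols

true_scene = ((None, 'R', None), ('G', None, None), (None, None, 'R'))
target = render(true_scene)

# Enumerate the full solution space, as the paper's formulation does,
# rather than picking one "most likely" reconstruction:
solutions = [s for s in product(product(STATES, repeat=N), repeat=N)
             if render(s) == target]
print(len(solutions), "scenes are consistent with the images")
```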

Relevance:

100.00%

Abstract:

Research has suggested that understanding developed in well-structured settings often does not transfer to the everyday, less-structured problems encountered outside of school. Little is known, beyond anecdotal evidence, about how teachers' consideration of distributions as evidence in well-structured settings compares with their use in ill-structured problem contexts. A qualitative study of preservice secondary teachers examined their use of distributions as evidence in four tasks of varying complexity and ill-structuredness. Results suggest that teachers' incorporation of distributions in well-structured settings does not imply that they will be incorporated in less-structured problems (and vice versa). Implications for research and teaching are discussed.

Relevance:

100.00%

Abstract:

We studied habitat selection and breeding success in marked populations of a protected seabird (family Alcidae), the marbled murrelet (Brachyramphus marmoratus), in a relatively intact and a heavily logged old-growth forest landscape in south-western Canada. Murrelets used old-growth fragments either proportionately to their size frequency distribution (intact) or they tended to nest in disproportionately smaller fragments (logged). Multiple regression modelling showed that murrelet distribution could be explained by proximity of nests to landscape features producing biotic and abiotic edge effects. Streams, steeper slopes and lower elevations were selected in both landscapes, probably due to good nesting habitat conditions and easier access to nest sites. In the logged landscape, the murrelets nested closer to recent clearcuts than would be expected. Proximity to the ocean was favoured in the intact area. The models of habitat selection had satisfactory discriminatory ability in both landscapes. Breeding success (probability of nest survival to the middle of the chick rearing period), inferred from nest attendance patterns by radio-tagged parents, was modelled in the logged landscape. Survivorship was greater in areas with recent clearcuts and lower in areas with much regrowth, i.e. it was positively correlated with recent habitat fragmentation. We conclude that marbled murrelets can successfully breed in old-growth forests fragmented by logging.

Relevance:

100.00%

Abstract:

We consider a buying-selling problem in which two stops of a sequence of independent random variables are required. An optimal stopping rule and the value of the game are obtained.
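
In generic notation (the paper's specific rule and game value are not reproduced here), such a two-stop problem asks for a buying time and a later selling time maximizing the expected gain:

```latex
% Generic two-stop buying-selling formulation (illustrative notation):
\begin{equation}
  V_n \;=\; \sup_{1 \le \tau < \sigma \le n}
            \mathbb{E}\left[\, X_{\sigma} - X_{\tau} \,\right],
\end{equation}
% where X_1, \dots, X_n are the independent observations, \tau is the
% buying stop and \sigma the selling stop, both stopping times adapted
% to the sequence; V_n is the value of the game.
```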

Relevance:

100.00%

Abstract:

Historians of genetics agree that multiple conceptions of the gene have coexisted at each stage in the history of genetics, and that the resulting partial ambiguity has often contributed to the success of genetics, both because workers in different areas have needed to communicate and to draw on one another's results despite wrestling with very different scientific challenges, and because empirical findings have often challenged the presuppositions of existing conceptions of the gene. Today, a number of different conceptions of the gene coexist in the biosciences. An 'instrumental' gene similar to that of classical genetics retains a critical role in the construction and interpretation of experiments in which the relationship between genotype and phenotype is explored via hybridization between organisms or directly between nucleic acid molecules. It also plays an important theoretical role in the foundations of disciplines such as quantitative genetics and population genetics. A 'nominal' gene, defined by the practice of genetic nomenclature, is a critical practical tool that allows communication between bioscientists in a wide range of fields to be grounded in well-defined sequences of nucleotides. This concept, however, does not embody major theoretical insights into genome structure or function. Instead, a 'post-genomic' conception of the gene embodies the continuing project of understanding how genome structure supports genome function, but with a deflationary picture of the gene as a structural unit. This final concept of the gene poses a significant challenge to earlier assumptions about the relationship between genome structure and function, and between genotype and phenotype.

Relevance:

40.00%

Abstract:

Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights to be gained into the loss of system detail incurred through the calibration process. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization.
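
The weighted-average interpretation can be illustrated with the resolution matrix of a linearized, regularized inversion: for noise-free data the estimated parameter field equals R times the true field, so each row of R holds the averaging weights described above. The sensitivity matrix and regularization weight below are synthetic placeholders, not the paper's model:

```python
# Sketch: resolution matrix of a Tikhonov-regularized linear inversion.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par = 12, 30       # underdetermined: fewer observations than parameters
J = rng.normal(size=(n_obs, n_par))   # sensitivities of heads to parameters
lam = 1.0                             # regularization weight

# R = (J^T J + lam^2 I)^(-1) J^T J; with noise-free data p_hat = R @ p_true,
# so row i of R gives the weights that blur the true field into estimate i.
R = np.linalg.solve(J.T @ J + lam**2 * np.eye(n_par), J.T @ J)

# Rows far from "measurement-rich" directions carry small, diffuse weights,
# i.e. anomalies there are nearly invisible to the calibrated model:
print(np.round(R.sum(axis=1), 2))
```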

Relevance:

30.00%

Abstract:

Use of nonlinear parameter estimation techniques is now commonplace in ground water model calibration. However, there is still ample room for further development of these techniques in order to enable them to extract more information from calibration datasets, to more thoroughly explore the uncertainty associated with model predictions, and to make them easier to implement in various modeling contexts. This paper describes the use of pilot points as a methodology for spatial hydraulic property characterization. When used in conjunction with nonlinear parameter estimation software that incorporates advanced regularization functionality (such as PEST), use of pilot points can add a great deal of flexibility to the calibration process at the same time as it makes this process easier to implement. Pilot points can be used either as a substitute for zones of piecewise parameter uniformity, or in conjunction with such zones. In either case, they allow the disposition of areas of high and low hydraulic property value to be inferred through the calibration process, without the need for the modeler to guess the geometry of such areas prior to estimating the parameters that pertain to them. Pilot points and regularization can also be used as an adjunct to geostatistically based stochastic parameterization methods. Using the techniques described herein, a series of hydraulic property fields can be generated, all of which recognize the stochastic characterization of an area at the same time that they satisfy the constraints imposed on hydraulic property values by the need to ensure that model outputs match field measurements. Model predictions can then be made using all of these fields as a mechanism for exploring predictive uncertainty.
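
A minimal sketch of the pilot-point step itself, spreading estimated point values onto model cells, appears below; inverse-distance weighting stands in for the kriging that PEST-based workflows typically use, and every coordinate and value is invented for illustration:

```python
# Sketch: interpolate pilot-point log-K values onto a model grid.
import numpy as np

def pilot_to_grid(pp_xy, pp_logK, grid_xy, power=2.0):
    """Inverse-distance weighting from pilot points to cell centres
    (a simple stand-in for the kriging usually used with PEST)."""
    d = np.linalg.norm(grid_xy[:, None, :] - pp_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12)**power
    w /= w.sum(axis=1, keepdims=True)
    return w @ pp_logK

# Five pilot points; their values are what the inversion would estimate:
pp_xy   = np.array([[0.2, 0.2], [0.8, 0.2], [0.5, 0.5], [0.2, 0.8], [0.8, 0.8]])
pp_logK = np.array([-3.0, -2.0, -2.5, -1.5, -2.2])

gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
logK_field = pilot_to_grid(pp_xy, pp_logK, grid_xy)   # one value per cell
```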

Relevance:

30.00%

Abstract:

The Bush administration's continuing emphasis on US military deterrence of the PRC on behalf of Taiwan threatens to undermine the posture of 'strategic ambiguity' that the United States has proclaimed since 1979. This article argues for the retention of 'strategic ambiguity' and traces the origins of revisionist sentiment towards this effective conflict avoidance mechanism to reactions within the US foreign policy community to the 1995-96 Taiwan Strait crisis. Case studies of this crisis and its predecessors in 1954-55 and 1958 demonstrate that US military deterrence was not a decisive factor in their resolution. US and PRC initiatives and responses in the 1950s crises introduced the essential elements of 'strategic ambiguity' into the triangular relationship between themselves and Taiwan. In particular, they established a precedent for the United States and the PRC in circumscribing the issue of Taiwan so as to achieve a political accommodation.

Relevance:

30.00%

Abstract:

Objective: To describe a series of patients with clinically significant lead poisoning. Methodology: A case series of nine patients with lead poisoning who required inpatient management, identified through a Clinical Toxicology Service. Results: Nine children presented with clinically significant lead poisoning. The median serum lead was 2.5 µmol/L (range 1.38-4.83 µmol/L). Eight of the children were exposed to lead-based paint, with seven due to dust from sanded lead paint during house renovations. Serial blood determinations suggested re-exposure in four of the patients, and in one of these patients the re-exposure was from a different source of lead. Eight of the patients required chelation therapy. Conclusions: Serious lead poisoning continues to occur, and there appears to be complacency regarding the hazard posed by lead paint in old houses.

Relevance:

30.00%

Abstract:

A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization, in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion and better enforcement of regularization constraints than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration.
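
Schematically, and in assumed notation, the constrained formulation described here is:

```latex
% Tikhonov regularization posed as constrained minimization (illustrative):
\begin{equation}
  \min_{\mathbf{p}} \; \Phi_r(\mathbf{p})
  \quad \text{subject to} \quad
  \Phi_m(\mathbf{p}) \;\le\; \Phi_m^{\ell},
\end{equation}
% where \Phi_m measures model-to-measurement misfit, \Phi_r penalizes
% departures from preferred parameter values or relationships, and
% \Phi_m^{\ell} is the target misfit. In the variant described above, the
% relative weights inside \Phi_r are estimated during the inversion itself
% rather than fixed in advance.
```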

Relevance:

30.00%

Abstract:

Radio-frequency (RF) coils are designed such that they induce homogeneous magnetic fields within some region of interest within a magnetic resonance imaging (MRI) scanner. Loading the scanner with a patient disrupts the homogeneity of these fields and can lead to a considerable degradation of the quality of the acquired image. In this paper, an inverse method is presented for designing RF coils, in which the presence of a load (patient) within the MRI scanner is accounted for in the model. To approximate the finite length of the coil, a Fourier series expansion is considered for the coil current density and for the induced fields. Regularization is used to solve this ill-conditioned inverse problem for the unknown Fourier coefficients. That is, the error between the induced and homogeneous target fields is minimized along with an additional constraint, chosen in this paper to represent the curvature of the coil windings. Smooth winding patterns are obtained for both unloaded and loaded coils. RF fields with a high level of homogeneity are obtained in the unloaded case, and a limit to the level of homogeneity attainable is observed in the loaded case.
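
In outline (notation assumed rather than quoted), the design problem trades field error against winding curvature:

```latex
% Regularized inverse coil design over Fourier coefficients c (sketch):
\begin{equation}
  \hat{\mathbf{c}} \;=\; \arg\min_{\mathbf{c}}
  \left\{ \lVert \mathbf{B}(\mathbf{c}) - \mathbf{B}_{\mathrm{target}} \rVert_2^2
  \;+\; \lambda\, \mathcal{C}(\mathbf{c}) \right\},
\end{equation}
% where c collects the Fourier coefficients of the coil current density,
% B(c) is the field induced in the (possibly loaded) scanner, and the
% penalty C(c) measures the curvature of the resulting winding pattern.
```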

Relevance:

30.00%

Abstract:

Visualisation of multiple isoforms of κ-casein on 2-D gels is restricted by the abundant α- and β-caseins, which not only limit gel loading but also migrate to regions similar to those of the more acidic κ-casein isoforms. To overcome this problem, we took advantage of the absence of cysteine residues in αS1- and β-casein by devising an affinity enrichment procedure based on reversible biotinylation of cysteine residues. Affinity capture of cysteine-containing proteins on avidin allowed the removal of the vast majority of αS1- and β-casein, and on subsequent 2-D gel analysis 16 gel spots were identified as κ-casein by PMF. Further analysis of the C-terminal tryptic peptide, along with structural predictions based on mobility on the 2-D gel, allowed us to assign identities to each spot in terms of genetic variant (A or B), phosphorylation status (1, 2 or 3) and glycosylation status (from 0 to 6). Eight isoforms of the A and B variants with the same PTMs were observed. When the casein fraction of milk from a single cow, homozygous for the B variant of κ-casein, was used as the starting material, 17 isoforms from 13 gel spots were characterised. Analysis of isoforms of low abundance proved challenging due to the low amount of material that could be extracted from the gels as well as the lability of the PTMs during MS analysis. However, we were able to identify a previously unrecognised site, T-166, that could be phosphorylated or glycosylated. Despite many decades of analysis of milk proteins, the reasons for this high level of heterogeneity are still not clear.

Relevance:

30.00%

Abstract:

Government agencies responsible for riparian environments are assessing the combined utility of field survey and remote sensing for mapping and monitoring indicators of riparian zone health. The objective of this work was to determine if the structural attributes of savanna riparian zones in northern Australia can be detected from commercially available remotely sensed image data. Two QuickBird images and coincident field data covering sections of the Daly River and the South Alligator River - Barramundie Creek in the Northern Territory were used. Semi-variograms were calculated to determine the characteristic spatial scales of riparian zone features, both vegetative and landform. Interpretation of semi-variograms showed that structural dimensions of riparian environments could be detected and estimated from the QuickBird image data. The results also show that selecting the correct spatial resolution and spectral bands is essential to maximize the accuracy of mapping spatial characteristics of savanna riparian features. The distribution of foliage projective cover of riparian vegetation affected spectral reflectance variations in individual spectral bands differently. Pan-sharpened image data enabled small-scale information extraction (< 6 m) on riparian zone structural parameters. The semi-variogram analysis results provide the basis for an inversion approach using high spatial resolution satellite image data to map indicators of savanna riparian zone health.
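
For reference, the empirical semi-variogram underlying this kind of analysis is straightforward to compute; the transect below is synthetic, standing in for a line of QuickBird pixel values:

```python
# Sketch: empirical semi-variogram along a 1-D image transect.
import numpy as np

def semivariogram(values, max_lag):
    """gamma(h) = half the mean squared difference between samples
    separated by lag h (in pixels), for h = 1 .. max_lag."""
    v = np.asarray(values, dtype=float)
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((v[h:] - v[:-h])**2) for h in lags])
    return lags, gamma

# Synthetic reflectance transect (placeholder for QuickBird pixel values):
rng = np.random.default_rng(1)
transect = np.sin(np.linspace(0, 6 * np.pi, 200)) + rng.normal(0, 0.1, 200)

lags, gamma = semivariogram(transect, max_lag=50)
# The lag at which gamma levels off (the "range") indicates the
# characteristic spatial scale of the imaged features.
```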