943 results for fixed point method
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
A phantom that can be used for mapping geometric distortion in magnetic resonance imaging (MRI) is described. This phantom provides an array of densely distributed control points in three-dimensional (3D) space. These points form the basis of a comprehensive measurement method to correct for geometric distortion in MR images arising principally from gradient field non-linearity and magnetic field inhomogeneity. The phantom was designed on the concept that a point in space can be defined by three orthogonal planes. This novel design approach allows for as many control points as desired. Employing this design, a highly accurate method has been developed that enables the positions of the control points to be measured to sub-voxel accuracy. The phantom described in this paper was constructed to fit into the body coil of an MRI scanner (external dimensions: 310 mm x 310 mm x 310 mm) and contained 10,830 control points. With this phantom, the mean errors in the measured coordinates of the control points were on the order of 0.1 mm or less, less than one tenth of the voxel dimensions of the phantom image. The calculated three-dimensional distortion map, i.e., the differences between the imaged positions and the true positions of the control points, can then be used to compensate for geometric distortion and achieve full image restoration. It is anticipated that this novel method will have an impact on the applicability of MRI in both clinical and research settings, especially in areas where high geometric accuracy is required, such as MR neuro-imaging. (C) 2004 Elsevier Inc. All rights reserved.
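The correction step this abstract describes, applying the measured distortion map to restore image coordinates, can be sketched as follows. This is a minimal illustration only: the inverse-distance interpolation, the `correct` function, the `k` parameter, and the synthetic data are all assumptions, not the paper's actual algorithm.

```python
import math

def correct(point, true_pts, imaged_pts, k=4):
    # Displacement vectors (true - imaged) sampled at the imaged control points.
    disp = [tuple(t - m for t, m in zip(tp, ip))
            for tp, ip in zip(true_pts, imaged_pts)]
    # The k control points nearest to the coordinate being corrected.
    nearest = sorted(zip(imaged_pts, disp),
                     key=lambda pair: math.dist(point, pair[0]))[:k]
    weights = [1.0 / max(math.dist(point, ip), 1e-9) for ip, _ in nearest]
    total = sum(weights)
    # Inverse-distance-weighted average of the nearby displacements.
    corr = [sum(w * d[a] for w, (_, d) in zip(weights, nearest)) / total
            for a in range(3)]
    return tuple(p + c for p, c in zip(point, corr))

# Synthetic check: a uniform 0.1 mm shift along x is fully removed.
true_pts = [(x, y, z) for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)]
imaged_pts = [(x - 0.1, y, z) for x, y, z in true_pts]
corrected = correct((0.4, 0.5, 0.5), true_pts, imaged_pts)
print(corrected)  # approximately (0.5, 0.5, 0.5): the shift is removed
```

A real restoration would interpolate over the full 3D grid of 10,830 control points rather than a handful of neighbours.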
Abstract:
Recent work on the numerical solution of stochastic differential equations (SDEs) has focused on developing numerical methods with good stability and order properties. These implementations have used a fixed stepsize, but there are many situations in which a fixed stepsize is not appropriate. In the numerical solution of ordinary differential equations, much work has been carried out on developing robust implementation techniques using variable stepsize. In the deterministic case it has been necessary to consider the best choice of an initial stepsize, as well as to develop effective strategies for stepsize control; the same, of course, must be done in the stochastic case. In this paper, proportional integral (PI) control is applied to a variable stepsize implementation of an embedded pair of stochastic Runge-Kutta methods used to obtain numerical solutions of nonstiff SDEs. For stiff SDEs, the embedded pair of the balanced Milstein and balanced implicit methods is implemented in variable stepsize mode using a predictive controller for the stepsize change. The extension of these stepsize controllers from a digital filter theory point of view via PI with derivative (PID) control is also implemented. The implementations show the improvement in efficiency that can be attained when using these control theory approaches compared with a regular stepsize change strategy. (C) 2004 Elsevier B.V. All rights reserved.
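The PI stepsize rule described in this abstract can be sketched in a deterministic setting. The controller gains `k_i` and `k_p`, the safety factor, and the embedded Euler/Heun pair below are illustrative assumptions, not the authors' stochastic Runge-Kutta pair:

```python
import math

def pi_step_control(err, err_prev, h, tol, order, k_i=0.3, k_p=0.4):
    """PI stepsize update: the integral term reacts to the current error,
    the proportional term damps oscillations using the previous error."""
    fac = (tol / err) ** (k_i / order) * (err_prev / err) ** (k_p / order)
    return h * min(2.0, max(0.5, 0.9 * fac))  # limit stepsize changes

def integrate(t_end=1.0, tol=1e-4):
    """Embedded Euler (order 1) / Heun (order 2) pair on y' = -y, y(0) = 1."""
    t, y, h, err_prev = 0.0, 1.0, 0.01, tol
    while t < t_end:
        h = min(h, t_end - t)
        f0 = -y
        y_euler = y + h * f0                     # order-1 solution
        f1 = -y_euler
        y_heun = y + 0.5 * h * (f0 + f1)         # order-2 solution
        err = max(abs(y_heun - y_euler), 1e-14)  # local error estimate
        if err <= tol:                           # accept the step
            t, y, err_prev = t + h, y_heun, err
        h = pi_step_control(err, err_prev, h, tol, order=2)
    return y

y_end = integrate()
print(abs(y_end - math.exp(-1.0)))  # small global error vs the exact solution
```

The same controller skeleton applies in the stochastic case; only the embedded pair and the error estimate change.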
Abstract:
A comprehensive study has been conducted to compare the adsorption of alkali metals (Li, Na, and K) on the basal plane of graphite using molecular orbital theory calculations. All three metal atoms prefer to adsorb on the hollow site above the middle of a hexagonal aromatic ring. A novel phenomenon was observed: Na, rather than Li or K, is the most weakly adsorbed of the three metals. The reason is that the singly occupied molecular orbital (SOMO) of the Na atom lies in energy exactly at the midpoint between the HOMO and the LUMO of the graphite layer. As a result, the SOMO of Na cannot form a stable interaction with either the HOMO or the LUMO of the graphite. The SOMOs of Li and K, on the other hand, can each form a relatively stable interaction with either the HOMO or the LUMO of graphite. Why Li adsorbs more strongly than K on graphite is also interpreted on the basis of their molecular orbital energy levels.
Abstract:
Detection of point mutations or single nucleotide polymorphisms (SNPs) is important in relation to disease susceptibility and, in pathogens, to mutations determining drug resistance or host range. There is an emerging need for rapid detection methods amenable to point-of-care applications. The purpose of this study was to reduce to practice a novel method for SNP detection and to demonstrate that this technology can be used downstream of nucleic acid amplification. The authors used a model system to develop an oligonucleotide-based SNP detection system on nitrocellulose lateral flow strips. To optimize the assay they used cloned sequences of the herpes simplex virus-1 (HSV-1) DNA polymerase gene into which they introduced a point mutation. The assay system uses chimeric polymerase chain reaction (PCR) primers that incorporate hexameric repeat tags ("hexapet tags"). The chimeric sequences allow capture of amplified products at predefined positions on a lateral flow strip. These hexapet sequences have minimal cross-reactivity and allow specific hybridization-based capture of the PCR products at room temperature onto lateral flow strips that have been striped with complementary hexapet tags. Allele-specific amplification was carried out with both mutant and wild-type primer sets present in the PCR mix (a "competitive" format). The resulting PCR products carried a hexapet tag corresponding to either the wild-type or the mutant sequence. The lateral flow strips are dropped into the PCR tube, and the mutant and wild-type sequences diffuse along the strip and are captured at their corresponding positions. A red line indicative of a positive reaction is visible after 1 minute. Unlike other systems that require separate reactions and strips for each target sequence, this system allows multiplex PCR reactions and multiplex detection on a single strip or other suitable substrate.
Unambiguous visual discrimination of a point mutation under room temperature hybridization conditions was achieved with this model system in 10 minutes after PCR. The authors have developed a capture-based hybridization method for the detection and discrimination of HSV-1 DNA polymerase genes that contain a single nucleotide change. It has been demonstrated that the hexapet oligonucleotides can be adapted for hybridization on the lateral flow strip platform for discrimination of SNPs. This is the first step in demonstrating SNP detection on lateral flow using the hexapet oligonucleotide capture system. It is anticipated that this novel system can be widely used in point-of-care settings.
Abstract:
This paper introduces a method for modeling a power system during an earth fault. The possibility of using this method for the selection and adjustment of earth-fault protection is pointed out. The paper also compares simulation results with experimental measurements.
Abstract:
Chromogenic (CISH) and fluorescent (FISH) in situ hybridization have emerged as reliable techniques to identify amplifications and chromosomal translocations. CISH provides a spatial distribution of gene copy number changes in tumour tissue and allows a direct correlation between copy number changes and the morphological features of neoplastic cells. However, the limited number of commercially available gene probes has hindered the use of this technique. We have devised a protocol to generate probes for CISH that can be applied to formalin-fixed, paraffin-embedded tissue sections (FFPETS). Bacterial artificial chromosomes (BACs) containing fragments of human DNA which map to specific genomic regions of interest are amplified with phi29 polymerase and random primer labelled with biotin. The genomic location of these can be readily confirmed by BAC end pair sequencing and FISH mapping on normal lymphocyte metaphase spreads. To demonstrate the reliability of the probes generated with this protocol, four strategies were employed: (i) probes mapping to cyclin D1 (CCND1) were generated and their performance was compared with that of a commercially available probe for the same gene in a series of 10 FFPETS of breast cancer samples of which five harboured CCND1 amplification; (ii) probes targeting cyclin-dependent kinase 4 were used to validate an amplification identified by microarray-based comparative genomic hybridization (aCGH) in a pleomorphic adenoma; (iii) probes targeting fibroblast growth factor receptor 1 and CCND1 were used to validate amplifications mapping to these regions, as defined by aCGH, in an invasive lobular breast carcinoma with FISH and CISH; and (iv) gene-specific probes for ETV6 and NTRK3 were used to demonstrate the presence of the t(12;15)(p12;q25) translocation in a case of breast secretory carcinoma with dual colour FISH.
In summary, this protocol enables the generation of probes mapping to any gene of interest that can be applied to FFPETS, allowing correlation of morphological features with gene copy number.
Abstract:
Most magnetic resonance imaging (MRI) spatial encoding techniques employ low-frequency pulsed magnetic field gradients that undesirably induce multiexponentially decaying eddy currents in nearby conducting structures of the MRI system. The eddy currents degrade the switching performance of the gradient system, distort the MRI image, and introduce thermal loads in the cryostat vessel and superconducting MRI components. Heating of superconducting magnets due to induced eddy currents is particularly problematic as it offsets the superconducting operating point, which can cause a system quench. A numerical characterization of transient eddy current effects is vital for their compensation/control and further advancement of the MRI technology as a whole. However, transient eddy current calculations are particularly computationally intensive. In large-scale problems, such as gradient switching in MRI, conventional finite-element method (FEM)-based routines impose very large computational loads during generation/solving of the system equations. Therefore, other computational alternatives need to be explored. This paper outlines a three-dimensional finite-difference time-domain (FDTD) method in cylindrical coordinates for the modeling of low-frequency transient eddy currents in MRI, as an extension to the recently proposed time-harmonic scheme. The weakly coupled Maxwell's equations are adapted to the low-frequency regime by downscaling the speed of light constant, which permits the use of larger FDTD time steps while maintaining the validity of the Courant-Friedrichs-Lewy stability condition. The principal hypothesis of this work is that the modified FDTD routine can be employed to analyze pulsed-gradient-induced, transient eddy currents in superconducting MRI system models.
The hypothesis is supported through a verification of the numerical scheme on a canonical problem and by analyzing undesired temporal eddy current effects such as the B0 shift caused by actively shielded symmetric/asymmetric transverse x-gradient head and unshielded z-gradient whole-body coils operating in proximity to a superconducting MRI magnet.
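The speed-of-light downscaling described above works because the Courant-Friedrichs-Lewy (CFL) limit on the FDTD time step scales as 1/c. A minimal sketch in Cartesian coordinates (the paper works in cylindrical coordinates; the grid spacing and scaling factor below are invented):

```python
import math

def cfl_dt(dx, dy, dz, c):
    """3D CFL limit: dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2))."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

c0 = 299_792_458.0            # speed of light in vacuum, m/s
dx = dy = dz = 5e-3           # 5 mm grid (an assumed resolution)
dt_full = cfl_dt(dx, dy, dz, c0)
dt_scaled = cfl_dt(dx, dy, dz, c0 / 1e4)  # downscale c by 1e4 (illustrative)
print(dt_scaled / dt_full)    # the permissible time step grows by the same factor
```

Downscaling is valid only while the physics stays in the quasi-static, low-frequency regime, which is the condition the abstract relies on.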
Abstract:
Repeated titrations of strains of Newcastle disease virus (NDV) are more conveniently undertaken in cell cultures than in embryonated eggs. This is relatively easy with mesogenic and velogenic strains that are cytopathic to various cell lines, but is difficult with avirulent Australian isolates that are poorly cytopathic. Strain V4, for example, has been shown to be pathogenic in vitro only to chicken embryo liver cells. Strain I-2 was reported to produce a cytopathic effect (CPE) on chicken embryo kidney (CEK) cells. The present studies confirmed this observation and developed a quantal assay. CEK cells infected with strain I-2 developed CPE characterized by degeneration, rounding, granularity and vacuolation, and the formation of syncytia. End points were readily established by microscopic examination of fixed and stained cells. In virus infectivity studies on strain I-2, where multiple titrations are required and large numbers of samples are used, titration using CEK cells grown in microtitre plates is recommended. Such studies may not be feasible in embryonated eggs.
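One standard way to read out the 50% end point of a quantal assay like the one above is the Reed-Muench method; a sketch follows. The well counts and dilutions are invented, and the abstract does not state which end-point formula the authors used:

```python
def tcid50_log10(infected, total, log_dil):
    """Reed-Muench 50% end point: infected[i] wells positive out of total[i]
    at dilution 10**log_dil[i], ordered from least to most dilute."""
    n = len(infected)
    uninfected = [t - p for p, t in zip(infected, total)]
    cum_inf = [sum(infected[i:]) for i in range(n)]        # accumulate downward
    cum_un = [sum(uninfected[:i + 1]) for i in range(n)]   # accumulate upward
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_un)]
    i = max(k for k in range(n) if pct[k] >= 50.0)         # last dilution >= 50%
    pd = (pct[i] - 50.0) / (pct[i] - pct[i + 1])           # proportionate distance
    return log_dil[i] + pd * (log_dil[i + 1] - log_dil[i])

# Invented titration: 8 wells per dilution, infection falling off around 10^-3.
log_titre = tcid50_log10([8, 8, 4, 0, 0], [8] * 5, [-1, -2, -3, -4, -5])
print(log_titre)  # -> -3.0, i.e. the 50% end point lies at the 10^-3 dilution
```

The Spearman-Karber method is a common alternative with similar inputs.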
Abstract:
BACKGROUND: Intervention time series analysis (ITSA) is an important method for analysing the effect of sudden events on time series data. ITSA methods are quasi-experimental in nature and the validity of modelling with these methods depends upon assumptions about the timing of the intervention and the response of the process to it. METHOD: This paper describes how to apply ITSA to analyse the impact of unplanned events on time series when the timing of the event is not accurately known, and so the problems of ITSA methods are magnified by uncertainty in the point of onset of the unplanned intervention. RESULTS: The methods are illustrated using the example of the Australian Heroin Shortage of 2001, which provided an opportunity to study the health and social consequences of an abrupt change in heroin availability in an environment of widespread harm reduction measures. CONCLUSION: Application of these methods enables valuable insights about the consequences of unplanned and poorly identified interventions while minimising the risk of spurious results.
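When the onset is unknown, one simple approach in the spirit of the above is to scan candidate change points of a level-shift model and keep the one with the smallest squared error. A toy sketch (the series and onset are synthetic, and a real ITSA would fit a full ARIMA-type intervention model rather than two means):

```python
def fit_step(series):
    """Scan every candidate onset of a step (level-shift) model and return
    (sse, onset, pre_mean, post_mean) for the best split."""
    best = None
    for k in range(1, len(series)):          # candidate onset after index k-1
        pre, post = series[:k], series[k:]
        m1 = sum(pre) / len(pre)
        m2 = sum(post) / len(post)
        sse = sum((x - m1) ** 2 for x in pre) + sum((x - m2) ** 2 for x in post)
        if best is None or sse < best[0]:
            best = (sse, k, m1, m2)
    return best

# Synthetic series: a level near 10 for 20 points, dropping to near 4 afterwards
# (loosely echoing an abrupt supply shock such as the 2001 heroin shortage).
series = [10.1, 9.8, 10.3, 9.9, 10.0] * 4 + [4.2, 3.9, 4.1, 4.0, 3.8] * 4
sse, onset, pre_mean, post_mean = fit_step(series)
print(onset, round(pre_mean, 1), round(post_mean, 1))  # -> 20 10.0 4.0
```

Scanning onsets this way addresses exactly the uncertainty the abstract highlights: the timing of the intervention becomes an estimated quantity rather than an assumption.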
Abstract:
This article first summarizes available experimental results on the frictional behaviour of contact interfaces and briefly recalls typical frictional experiments and relationships applicable to rock mechanics; a unified description of the entire frictional behaviour is then obtained. The description is formulated from the experimental results and is applied with a stick-slip decomposition algorithm to describe stick-slip instability phenomena. It can reproduce the effects observed in rock experiments without using the so-called state variable, thus avoiding the related numerical difficulties. The approach has been implemented in our finite element code, which uses the node-to-point contact element strategy proposed by the authors to handle frictional contact between multiple finite-deformation bodies with stick and finite frictional slip, and it is applied here to simulate the frictional behaviour of rocks, demonstrating its usefulness and efficiency.
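The stick-slip instability mentioned above is often illustrated with the classic 1D spring-slider toy model, in which static and kinetic Coulomb friction alternate. This is an illustrative sketch, not the authors' finite-element formulation; all parameters are invented:

```python
def spring_slider(steps=20000, dt=1e-3, k=50.0, m=1.0, v_drive=1.0,
                  f_static=10.0, f_kinetic=6.0):
    """Count slip events for a block dragged through a spring over a
    frictional surface with static/kinetic Coulomb friction."""
    x = v = load = 0.0    # block position/velocity and driving-point position
    sticking, slips = True, 0
    for _ in range(steps):
        load += v_drive * dt
        spring = k * (load - x)              # spring force on the block
        if sticking and abs(spring) > f_static:
            sticking = False                 # static friction exceeded: slip
            slips += 1
        if not sticking:
            a = (spring - f_kinetic * (1.0 if v >= 0 else -1.0)) / m
            v += a * dt                      # semi-implicit Euler update
            x += v * dt
            if v <= 0.0:                     # block arrests: stick again
                v, sticking = 0.0, True
    return slips

n_slips = spring_slider()
print(n_slips)  # repeated stick-slip cycles, not a single slip event
```

Because kinetic friction is lower than static friction, the block repeatedly overshoots, arrests, and re-sticks, which is the instability the unified description must capture.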
Abstract:
In this paper, we examine the problem of fitting a hypersphere to a set of noisy measurements of points on its surface. Our work generalises an estimator of Delogne (Proc. IMEKO-Symp. Microwave Measurements, 1972, 117-123), which he proposed for circles and which has been shown by Kasa (IEEE Trans. Instrum. Meas. 25, 1976, 8-14) to be convenient for its ease of analysis and computation. We also generalise Chan's 'circular functional relationship' to describe the distribution of points. We derive the Cramer-Rao lower bound (CRLB) under this model, and we derive approximations for the mean and variance of the estimator for fixed sample sizes when the noise variance is small. We perform a statistical analysis of the estimate of the hypersphere's centre, examining the existence of the mean and variance of the estimator for fixed sample sizes. We find that the mean exists when the number of sample points is greater than M + 1, where M is the dimension of the hypersphere, and that the variance exists when the number of sample points is greater than M + 2. We find that the bias approaches zero as the noise variance diminishes and that the variance approaches the CRLB. We provide simulation results to support our findings.
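The Delogne-Kasa estimator this paper generalises is linear in disguise: |p - c|^2 = r^2 rewrites as 2c·p + (r^2 - |c|^2) = |p|^2, which is linear in the unknowns. A sketch for the circle case (M = 2) with synthetic noise-free data; the hypersphere version simply replaces the 3x3 normal equations with an (M+1)x(M+1) system:

```python
import math

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kasa_circle(pts):
    """Delogne-Kasa fit: x^2 + y^2 = a*x + b*y + d with a = 2*cx,
    b = 2*cy, d = r^2 - cx^2 - cy^2, solved by linear least squares."""
    AtA = [[0.0] * 3 for _ in range(3)]
    Atb = [0.0] * 3
    for x, y in pts:
        row = (x, y, 1.0)
        rhs = x * x + y * y
        for i in range(3):
            Atb[i] += row[i] * rhs
            for j in range(3):
                AtA[i][j] += row[i] * row[j]
    a, b, d = solve3(AtA, Atb)
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, math.sqrt(d + cx * cx + cy * cy)

# Noise-free points on a circle of radius 2 centred at (1, -1).
pts = [(1 + 2 * math.cos(t), -1 + 2 * math.sin(t))
       for t in (0.1 * k for k in range(40))]
cx, cy, r = kasa_circle(pts)
print(round(cx, 6), round(cy, 6), round(r, 6))  # centre (1, -1), radius 2
```

The paper's analysis concerns how this estimator behaves when the points carry noise; the sketch only shows why the fit reduces to a linear solve.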
Abstract:
Summary form only given. The Java programming language supports concurrency. Concurrent programs are harder to verify than their sequential counterparts due to their inherent nondeterminism and a number of specific concurrency problems such as interference and deadlock. In previous work, we proposed a method for verifying concurrent Java components based on a mix of code inspection, static analysis tools, and the ConAn testing tool. The method was derived from an analysis of concurrency failures in Java components, but was not applied in practice. In this paper, we explore the method by applying it to an implementation of the well-known readers-writers problem and a number of mutants of that implementation. We only apply it to a single, well-known example, and so we do not attempt to draw any general conclusions about the applicability or effectiveness of the method. However, the exploration does point out several strengths and weaknesses in the method, which enable us to fine-tune the method before we carry out a more formal evaluation on other, more realistic components.
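The readers-writers component discussed above can be sketched with the classic readers-preference lock. This is an illustrative Python rendering, not the Java component the paper inspects:

```python
import threading

class RWLock:
    """Classic readers-preference readers-writers lock."""
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()   # guards the reader count
        self._write = threading.Lock()   # held by a writer, or by readers as a group

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:       # first reader locks out writers
                self._write.acquire()

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:       # last reader readmits writers
                self._write.release()

    def acquire_write(self):
        self._write.acquire()

    def release_write(self):
        self._write.release()

# Exercise writer mutual exclusion on a shared counter.
lock, shared = RWLock(), [0]

def writer():
    for _ in range(1000):
        lock.acquire_write()
        shared[0] += 1
        lock.release_write()

threads = [threading.Thread(target=writer) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared[0])  # -> 4000: no increments were lost
```

Readers would bracket their accesses with `acquire_read`/`release_read`; note that this readers-preference design can starve writers under a steady stream of readers, which is exactly the kind of concurrency property the paper's inspection-and-testing method targets.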
Abstract:
Finite element analysis (FEA) of nonlinear problems in solid mechanics is a time-consuming process, but it can deal rigorously with the geometric, contact, and material nonlinearity that occur in roll forming. The simulation time limits the application of nonlinear FEA to these problems in industrial practice, so most applications of nonlinear FEA are in theoretical studies and engineering consulting or troubleshooting. Instead, quick methods based on a global assumption of the deformed shape have been used by the roll-forming industry; these approaches are of limited accuracy. This paper proposes a new form-finding method, a relaxation method, to solve the nonlinear problem of predicting the deformed shape due to plastic deformation in roll forming. The method applies a small perturbation to each discrete node in order to update the local displacement field while minimizing plastic work, and this is applied iteratively to update the positions of all nodes. Because the method assumes a local displacement field, the strain and stress components at each node are calculated explicitly. Continued perturbation of the nodes leads to optimisation of the displacement field. Another important feature of this paper is a new approach to the consideration of strain history. For a stable, continuous process such as rolling and roll forming, the strain history of a point is represented spatially by the states at a row of nodes leading in the rolling direction to the current one. Therefore the increments of the strain components and the work increment of a point can be found without moving the object forward, and the solution for rolling or roll forming is found in just one step. This method is expected to be faster than commercial finite element packages because it eliminates the repeated solution of large sets of simultaneous equations and the need to update boundary conditions representing the rolls.
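The node-by-node perturbation idea can be sketched on a toy energy: perturb each free node, keep the move only if the global energy drops, and sweep until stationary. The spring-chain energy below is a stand-in for plastic work, and all names and parameters are invented:

```python
def relax(nodes, fixed, energy, step=0.01, sweeps=2000):
    """Coordinate-wise relaxation: try a small +/- perturbation at every free
    node and keep it only if the global energy decreases."""
    for _ in range(sweeps):
        for i in range(len(nodes)):
            if i in fixed:
                continue
            e0 = energy(nodes)
            for delta in (step, -step):
                nodes[i] += delta
                if energy(nodes) < e0:
                    break                # keep the improving perturbation
                nodes[i] -= delta        # undo and try the other direction
    return nodes

def spring_energy(y):
    """Sum of squared neighbour gaps; minimised by evenly spaced nodes."""
    return sum((y[i + 1] - y[i]) ** 2 for i in range(len(y) - 1))

# Five nodes with the two ends held fixed at 0 and 1.
nodes = relax([0.0, 0.0, 0.0, 0.0, 1.0], fixed={0, 4}, energy=spring_energy)
print([round(v, 2) for v in nodes])  # approaches [0.0, 0.25, 0.5, 0.75, 1.0]
```

Each accepted perturbation strictly lowers the energy, so the sweep cannot cycle; the paper's method follows the same pattern but evaluates plastic work from an explicit local displacement field.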