Abstract:
An object-oriented finite-difference time-domain (FDTD) simulator has been developed for electromagnetic study and design applications in magnetic resonance imaging (MRI). It is intended to be a complete FDTD model of an MRI system, including all high- and low-frequency field-generating units and electrical models of the patient. The design method is described, and MRI-based numerical examples are presented to illustrate the function of the numerical solver, with particular emphasis on high-field studies.
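The abstract does not reproduce the field-update equations; at the core of any FDTD solver is a leapfrog (Yee) update of interleaved electric and magnetic fields. The sketch below is a minimal one-dimensional free-space illustration, not the authors' object-oriented MRI model; the grid size, time-step count and source are arbitrary assumptions.

    import numpy as np

    nz, nt = 200, 500                # grid cells and time steps (illustrative values)
    ez = np.zeros(nz)                # electric field E_z on integer grid points
    hy = np.zeros(nz - 1)            # magnetic field H_y, staggered half a cell
    for n in range(nt):
        hy += 0.5 * (ez[1:] - ez[:-1])            # H update (Courant number 0.5 folded in)
        ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])      # E update on interior nodes
        ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source at the centre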
Abstract:
Full-field Fourier-domain optical coherence tomography (3F-OCT) is a full-field version of spectral-domain/swept-source optical coherence tomography. A set of two-dimensional Fourier holograms is recorded at discrete wavenumbers spanning the swept-source tuning range. The resultant three-dimensional data cube contains comprehensive information on the three-dimensional morphological layout of the sample, which can be reconstructed in software via a three-dimensional discrete Fourier transform. This method of recording the OCT signal confers a signal-to-noise ratio improvement in comparison with "flying-spot" time-domain OCT. The spatial resolution of the 3F-OCT reconstructed image, however, is degraded by the presence of a phase cross-term, whose origin and effects are addressed in this paper. We present a theoretical and experimental study of the imaging performance of 3F-OCT, with particular emphasis on elimination of the deleterious effects of the phase cross-term.
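The reconstruction step described above amounts to a discrete Fourier transform of the stacked holograms. The numpy sketch below uses random placeholder data and assumed array dimensions, purely to illustrate the shape of that operation.

    import numpy as np

    # holograms: assumed array of shape (n_k, ny, nx) — one 2-D Fourier hologram
    # per wavenumber sample across the swept-source tuning range (toy random data).
    n_k, ny, nx = 64, 256, 256
    holograms = np.random.randn(n_k, ny, nx) + 1j * np.random.randn(n_k, ny, nx)

    # A 3-D inverse DFT maps the (k, fx, fy) data cube to a (z, y, x) image volume.
    volume = np.fft.ifftn(holograms, axes=(0, 1, 2))
    intensity = np.abs(volume) ** 2    # reconstructed reflectivity estimate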
Abstract:
Finite element analysis (FEA) of nonlinear problems in solid mechanics is a time-consuming process, but it can deal rigorously with the geometric, contact and material nonlinearities that occur in roll forming. The simulation time limits the application of nonlinear FEA to these problems in industrial practice, so that most applications of nonlinear FEA are in theoretical studies and engineering consulting or troubleshooting. Instead, quick methods based on a global assumption of the deformed shape have been used by the roll-forming industry. These approaches are of limited accuracy. This paper proposes a new form-finding method - a relaxation method to solve the nonlinear problem of predicting the deformed shape due to plastic deformation in roll forming. The method applies a small perturbation to each discrete node in order to update the local displacement field while minimizing plastic work, and this is applied iteratively to update the positions of all nodes. As the method assumes a local displacement field, the strain and stress components at each node are calculated explicitly. Continued perturbation of the nodes leads to optimisation of the displacement field. Another important feature of this paper is a new approach to the consideration of strain history. For a stable and continuous process such as rolling and roll forming, the strain history of a point is represented spatially by the states at a row of nodes leading, in the rolling direction, to the current node. Therefore the increments of the strain components and the work increment of a point can be found without moving the object forward. Using this method the solution for rolling or roll forming can be found in just one step. This method is expected to be faster than commercial finite element packages by eliminating the repeated solution of large sets of simultaneous equations and the need to update the boundary conditions that represent the rolls.
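As a rough illustration of the node-by-node relaxation idea described above (not the paper's formulation): each node is perturbed in turn, and a perturbation is kept only if it lowers the plastic work. The plastic-work functional below is a stand-in placeholder, not a constitutive model, and the step size and sweep count are arbitrary assumptions.

    import numpy as np

    def plastic_work(nodes):
        # Placeholder objective for illustration only; the real method evaluates the
        # incremental plastic work from the local displacement field at each node.
        return float(np.sum(nodes ** 2))

    def relax(nodes, step=1e-3, sweeps=100):
        nodes = nodes.copy()
        for _ in range(sweeps):
            for i in range(len(nodes)):
                for axis in range(3):
                    for sign in (+1.0, -1.0):
                        trial = nodes.copy()
                        trial[i, axis] += sign * step
                        if plastic_work(trial) < plastic_work(nodes):
                            nodes = trial          # keep the perturbation that lowers the work
        return nodes

    nodes0 = np.random.rand(20, 3)                 # toy nodal positions
    relaxed = relax(nodes0)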
Abstract:
Eddy currents induced within a magnetic resonance imaging (MRI) cryostat bore during pulsing of gradient coils can be used constructively, together with the gradient currents that generate them, to obtain good-quality gradient uniformity within a specified imaging volume over time. This can be achieved by simultaneously optimizing the spatial distribution and temporal pre-emphasis of the gradient coil current to account for the spatial and temporal variation of the secondary magnetic fields due to the induced eddy currents. This method allows the tailored design of gradient coil/magnet configurations and the consequent engineering trade-offs. To compute the transient eddy currents within a realistic cryostat vessel, a low-frequency finite-difference time-domain (FDTD) method using a total-field/scattered-field (TFSF) scheme has been implemented and validated.
Abstract:
In this thesis we study, at the perturbative level, correlation functions of Wilson loops (and local operators) and their relations to localization, integrability and other quantities of interest, such as the cusp anomalous dimension and the Bremsstrahlung function. First of all, we consider a general class of 1/8 BPS Wilson loops and chiral primaries in N=4 super Yang-Mills theory. We perform explicit two-loop computations, for a particular but still rather general configuration, that confirm the elegant results expected from the localization procedure. Notably, we find full consistency with the multi-matrix model averages, obtained from 2D Yang-Mills theory on the sphere, when interacting diagrams do not cancel and contribute non-trivially to the final answer. We also discuss the near-BPS expansion of the generalized cusp anomalous dimension with L units of R-charge. Integrability provides an exact solution, obtained by solving a general TBA equation in the appropriate limit; we propose here an alternative method based on supersymmetric localization. The basic idea is to relate the computation to the vacuum expectation value of certain 1/8 BPS Wilson loops with local operator insertions along the contour. These observables also localize on a two-dimensional gauge theory on S^2, opening the possibility of exact calculations. As a test of our proposal, we reproduce the leading Lüscher correction at weak coupling to the generalized cusp anomalous dimension. This result is also checked against a genuine Feynman diagram approach in N=4 super Yang-Mills theory. Finally, we study the cusp anomalous dimension in N=6 ABJ(M) theory, identifying a scaling limit in which the ladder diagrams dominate. The resummation is encoded in a Bethe-Salpeter equation that is mapped to a Schrödinger problem, exactly solvable due to the surprising supersymmetry of the effective Hamiltonian. In the ABJ case the solution implies the diagonalization of the U(N) and U(M) building blocks, suggesting the existence of two independent cusp anomalous dimensions and an unexpected exponentiation structure for the related Wilson loops.
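For orientation, the near-BPS limit mentioned above is governed by the Bremsstrahlung function. In N=4 super Yang-Mills the small-angle behaviour of the generalized cusp anomalous dimension and its standard relation to the circular Wilson loop (a known result from the localization literature, not derived in this abstract) read, in LaTeX notation,

    \Gamma_{\rm cusp}(\varphi,\theta;\lambda,N) \simeq -B(\lambda,N)\,(\theta^{2}-\varphi^{2}), \qquad \theta,\varphi \to 0,
    \qquad
    B(\lambda,N) = \frac{1}{2\pi^{2}}\,\lambda\,\partial_{\lambda}\log\langle W_{\bigcirc}\rangle ,

where \langle W_{\bigcirc}\rangle is the expectation value of the 1/2 BPS circular Wilson loop.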
Abstract:
In this chapter, we first elaborate on the well-known relationship between Gaussian processes (GP) and Support Vector Machines (SVM). Second, we present approximate solutions for two computational problems arising in GP and SVM. The first is the calculation of the posterior mean for GP classifiers using a 'naive' mean field approach. The second is a leave-one-out estimator for the generalization error of SVM based on a linear response method. Simulation results on a benchmark dataset show similar performance for the GP mean field algorithm and the SVM algorithm. The approximate leave-one-out estimator is found to be in very good agreement with the exact leave-one-out error.
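The GP posterior mean referred to above has a closed form when the likelihood is Gaussian; the chapter's 'naive' mean field approach approximates the analogous quantity for classification likelihoods. The numpy sketch below shows only the exact regression case as a reference point; the kernel, toy data and noise level are arbitrary assumptions.

    import numpy as np

    def rbf(a, b, length=1.0):
        # Squared-exponential kernel between two sets of points.
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-0.5 * d2 / length ** 2)

    X = np.random.randn(50, 2); y = np.sin(X[:, 0])     # toy training data
    Xs = np.random.randn(5, 2)                          # test inputs
    noise = 0.1
    K = rbf(X, X) + noise * np.eye(len(X))
    mean = rbf(Xs, X) @ np.linalg.solve(K, y)           # exact GP posterior mean (regression)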
Abstract:
We study online approximations to Gaussian process models for spatially distributed systems. We apply our method to the prediction of wind fields over the ocean surface from scatterometer data. Our approach combines a sequential update of a Gaussian approximation to the posterior with a sparse representation that makes it possible to treat problems with a large number of observations.
Abstract:
Principal components analysis (PCA) has been described for over 50 years; however, it is rarely applied to the analysis of epidemiological data. In this study PCA was critically appraised for its ability to reveal relationships between pulsed-field gel electrophoresis (PFGE) profiles of methicillin-resistant Staphylococcus aureus (MRSA), in comparison with the more commonly employed cluster analysis and representation by dendrograms. The PFGE type following SmaI chromosomal digest was determined for 44 multidrug-resistant hospital-acquired methicillin-resistant S. aureus (MR-HA-MRSA) isolates, two multidrug-resistant community-acquired MRSA (MR-CA-MRSA) isolates, 50 hospital-acquired MRSA (HA-MRSA) isolates (from the University Hospital Birmingham, NHS Trust, UK) and 34 community-acquired MRSA (CA-MRSA) isolates (from general practitioners in Birmingham, UK). Strain relatedness was determined using Dice band-matching with UPGMA clustering and by PCA. The results indicated that PCA revealed relationships between MRSA strains that were more strongly correlated with the known epidemiology, most likely because, unlike cluster analysis, PCA does not have the constraint of generating a hierarchical classification. In addition, PCA provides the opportunity for further analysis to identify key polymorphic bands within complex genotypic profiles, which is not always possible with dendrograms. Here we provide a detailed description of a PCA method for the analysis of PFGE profiles to further complement the epidemiological study of infectious disease.
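As a point of reference for the method described above, PCA of PFGE data amounts to an eigen-decomposition of a centred band-presence matrix. The numpy sketch below uses a randomly generated toy matrix (not the study's real profiles) to show how principal-component scores for isolates and loadings for bands are obtained.

    import numpy as np

    # profiles: assumed binary band-presence matrix, one row per isolate,
    # one column per PFGE band position (toy random data for illustration).
    profiles = (np.random.rand(130, 40) > 0.5).astype(float)

    centred = profiles - profiles.mean(axis=0)         # centre each band column
    U, S, Vt = np.linalg.svd(centred, full_matrices=False)
    scores = U[:, :2] * S[:2]      # isolate coordinates on the first two principal components
    loadings = Vt[:2].T            # band loadings: large values flag key polymorphic bands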
Abstract:
In developed countries, travel time savings can account for as much as 80% of the overall benefits arising from transport infrastructure and service improvements. In developing countries they are generally ignored in transport project appraisals, notwithstanding their importance. One reason for ignoring these benefits in developing countries is that there is insufficient empirical evidence to support the conventional models for valuing travel time where work patterns, particularly of the poor, are diverse and it is difficult to distinguish between work and non-work activities. The exclusion of time-saving benefits may lead to a bias against investment decisions that benefit the poor and understate the poverty-reduction potential of transport investments in Least Developed Countries (LDCs). This is because the poor undertake most travel and transport by walking and headloading on local roads, tracks and paths, and improvements to local infrastructure and services bring them large time-saving benefits through modal shifts. The paper reports on an empirical study to develop a methodology for valuing rural travel time savings in the LDCs. Apart from identifying the theoretical and empirical issues in valuing travel time savings in the LDCs, the paper presents and discusses the results of an analysis of data from Bangladesh. Some of the study findings challenge the conventional wisdom concerning time-saving values. The Bangladesh study suggests that the Western concept of dividing travel time savings into working and non-working time savings is broadly valid in the developing-country context. The study validates the use of preference methods in valuing non-working time savings. However, the stated preference (SP) method is found to be more appropriate than the revealed preference (RP) method.
Abstract:
It was decided to investigate field emission from cadmium sulphide because many workers have found that the agreement between theory and experiment for this material, and for other semiconductors, is poor. An electron energy analyser, similar to those used in most of the previously reported experiments, was therefore built. The performance of the analyser was thoroughly investigated, both theoretically and practically, and the results of these investigations were used in conjunction with a tungsten emitter. Excellent agreement was obtained between the usually accepted total energy distribution for tungsten and the corresponding distribution measured with the present analyser. A method of obtaining reliable cadmium sulphide emitters was developed. These emitters were then used in the analyser, and it was found that the agreement between theory and experiment was poor. Previous explanations of the lack of agreement are considered and found to be doubtful. The theory of field emission from semiconductors is reviewed and possible reasons for the discrepancy between theory and experiment are proposed. Finally, further experiments are described which should prove or disprove the conclusions arrived at in this work.
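For context, the baseline theory against which such measurements are usually compared is the Fowler-Nordheim description of field emission from a metal, which for the emitted current density takes the generic form (numerical constants and image-charge corrections omitted here)

    J = \frac{a\,F^{2}}{\phi}\,\exp\!\left(-\frac{b\,\phi^{3/2}}{F}\right),

where F is the surface electric field, \phi the work function, and a and b are constants. The semiconductor case studied in the thesis modifies this picture, notably through field penetration and band bending at the emitter surface.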
Abstract:
We have investigated the effect of ageing on the visual system using the relatively new technique of magnetoencephalography (MEG). This technique measures the magnetic signals produced by the visual system using a SQUID magnetometer. The magnetic visual evoked field (VEF) was measured over the occipital cortex in response to pattern and flash stimuli in 86 normal subjects aged 15-86 years. Factors that could cause the subject to defocus or defixate the stimulus, or that influenced selective attention, were controlled as far as possible. The latency of the major positive component to the pattern-reversal stimulus (P100M) increased with age, particularly after the age of 55 years, while the amplitude of the P100M decreased over the life span. The latency of the major flash component (P2M) increased much more slowly with age, while its amplitude decreased in only a proportion of elderly subjects. Changes in the P100M with age may reflect senile changes in the eye and optic nerve, e.g. senile miosis or degenerative changes in the retina. The P2M may be more susceptible to senile changes in the retina. The data suggest that the spatial-frequency channels deteriorate more rapidly with age than the luminance channels and that MEG may be an effective method of studying ageing in the visual system.
Abstract:
Purpose: To investigate the correlation between tests of visual function and perceived visual ability recorded with a 'quality-of-life' questionnaire for patients with central field loss. Method: 12 females and 7 males (mean age = 53.1 years; range = 23-80 years) with subfoveal neovascular membranes underwent a comprehensive assessment of visual function. Tests included unaided distance vision, high- and low-contrast distance logMAR visual acuity (VA), Pelli-Robson contrast sensitivity (at 1 m), near logMAR word VA and text reading speed. All tests were performed both monocularly and binocularly. The patients also completed a 28-point questionnaire separated into a 'core' section, consisting of general questions about perceived visual function, and a 'module' section, with specific questions on reading function. Results: Step-wise multiple regression analysis was used to determine which visual function tests were correlated with the patients' perceived visual function and to rank them in order of importance. The visual function test that explains most of the variance in both the 'core' score (66%) and the 'module' score (68%) of the questionnaire is low-contrast VA in the better eye (P<0.001 in both cases). Further, distance logMAR VA in both the better and worse eye, and near logMAR VA in the better eye and binocularly, also account for a significant proportion of the variance in the 'module' score (P<0.01). Conclusions: The best predictor of both perceived reading ability and general perceived visual ability in this study is low-contrast logMAR VA. The results highlight that distance VA is not the only relevant measure of visual function in relation to a patient's perceived visual performance and should not be considered the sole determinant of surgical or management success.
Abstract:
The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
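The article's relative-entropy criterion for retaining "basis vectors" is not reproduced here; the sketch below merely illustrates the general shape of such sparse GP predictors, using a randomly chosen basis set and a subset-of-regressors style predictive mean. The kernel, toy data, basis size and noise level are placeholders.

    import numpy as np

    def rbf(a, b, length=1.0):
        # Squared-exponential kernel between two sets of points.
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-0.5 * d2 / length ** 2)

    X = np.random.randn(2000, 2); y = np.sin(X[:, 0])        # large toy data set
    Xs = np.random.randn(10, 2)                               # prediction locations
    basis = X[np.random.choice(len(X), 50, replace=False)]    # retained "basis vectors"

    noise = 0.1
    Kmm = rbf(basis, basis); Kmn = rbf(basis, X)
    A = noise * Kmm + Kmn @ Kmn.T                             # subset-of-regressors system
    mean = rbf(Xs, basis) @ np.linalg.solve(A, Kmn @ y)       # cost O(n m^2) rather than O(n^3)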
Abstract:
The ERS-1 satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of backscattered microwave radiation reflected by small ripples on the ocean surface induced by sea-surface winds, and so provides instantaneous snapshots of wind flow over large areas of the ocean surface, known as wind fields. Inherent in the physics of the observation process is an ambiguity in wind direction; the scatterometer cannot distinguish whether the wind is blowing toward or away from the sensor. This ambiguity implies that there is a one-to-many mapping between scatterometer data and wind direction. Current operational methods for wind field retrieval are based on the retrieval of wind vectors from satellite scatterometer data, followed by a disambiguation and filtering process that relies on numerical weather prediction models. The wind vectors are retrieved by the local inversion of a forward model, mapping scatterometer observations to wind vectors, and minimising a cost function in scatterometer measurement space. This thesis applies a pragmatic Bayesian solution to the problem. The likelihood is a combination of conditional probability distributions for the local wind vectors given the scatterometer data. The prior distribution is a vector Gaussian process that provides the geophysical consistency for the wind field. The wind vectors are retrieved directly from the scatterometer data using mixture density networks, a principled method for modelling multi-modal conditional probability density functions. The complexity of the mapping and the structure of the conditional probability density function are investigated. A hybrid mixture density network, which incorporates the knowledge that the conditional probability distribution of the observation process is predominantly bi-modal, is developed. The optimal model, which generalises across a swathe of scatterometer readings, is better on key performance measures than the current operational model. Wind field retrieval is approached from three perspectives. The first is a non-autonomous method that confirms the validity of the model by retrieving the correct wind field 99% of the time from a test set of 575 wind fields. The second technique takes as the prediction the maximum a posteriori probability (MAP) wind field retrieved from the posterior distribution. For the third technique, Markov chain Monte Carlo (MCMC) methods were employed to estimate the mass associated with significant modes of the posterior distribution, and predictions were made from the mode with the greatest associated mass. General methods for sampling from multi-modal distributions were benchmarked against a specific MCMC transition kernel designed for this problem. It was shown that the general methods were unsuitable for this application due to computational expense. On a test set of 100 wind fields the MAP estimate correctly retrieved 72 wind fields, whilst the sampling method correctly retrieved 73 wind fields.
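A mixture density network outputs the parameters of a conditional mixture model, and training minimises the mixture negative log-likelihood of the observed targets. The sketch below evaluates that loss for a one-dimensional, two-component Gaussian mixture; it is a stand-in for the thesis's hybrid bi-modal model, and all numbers are illustrative only.

    import numpy as np

    def mdn_nll(weights, means, sigmas, target):
        # Negative log-likelihood of `target` under a 1-D Gaussian mixture.
        # weights: mixing coefficients summing to 1; means, sigmas: component parameters.
        norm = np.exp(-0.5 * ((target - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
        return -np.log(np.sum(weights * norm) + 1e-12)

    # Toy example of a bi-modal conditional density, as for the wind-direction ambiguity.
    w = np.array([0.5, 0.5]); mu = np.array([30.0, 210.0]); s = np.array([10.0, 10.0])
    print(mdn_nll(w, mu, s, target=205.0))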
Abstract:
The present dissertation is concerned with the determination of the magnetic field distribution in magnetic electron lenses by means of the finite element method. In the differential form of this method a Poisson-type equation is solved numerically within a finite boundary. Previous methods of adapting this procedure to the requirements of digital computers have restricted its use to computers of extremely large core size. It is shown that by reformulating the boundary conditions, a considerable reduction in core store can be achieved for a given accuracy of field distribution. The magnetic field distribution of a lens may also be calculated by the integral form of the finite element method. This eliminates the boundary problems mentioned, but introduces other difficulties. After a careful analysis of both methods it has proved possible to combine the advantages of both in a new approach to the problem, which may be called the 'differential-integral' finite element method. The application of this method to the determination of the magnetic field distribution of some new types of magnetic lenses is described. In the course of the work, considerable re-programming of standard programs was necessary in order to reduce the core store requirements to a minimum.
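The differential form described above reduces a Poisson-type equation on a bounded region to a large system of equations solved iteratively on a mesh. The sketch below is a minimal illustration only, using a uniform 2-D finite-difference grid with Jacobi relaxation rather than the thesis's finite element, axisymmetric lens formulation or its core-store-saving boundary treatment; grid size, source term and sweep count are arbitrary assumptions.

    import numpy as np

    n, h = 65, 1.0 / 64                 # grid points per side and spacing (illustrative)
    u = np.zeros((n, n))                # potential, held at zero on the boundary
    f = np.ones((n, n))                 # source term of the Poisson-type equation -lap(u) = f

    for _ in range(2000):               # Jacobi relaxation sweeps over the interior nodes
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] + h * h * f[1:-1, 1:-1])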