992 results for Decoupling Vector Field


Relevance: 30.00%

Abstract:

In this chapter, we first elaborate on the well-known relationship between Gaussian processes (GP) and Support Vector Machines (SVM). Second, we present approximate solutions for two computational problems arising in GP and SVM. The first is the calculation of the posterior mean for GP classifiers using a 'naive' mean field approach. The second is a leave-one-out estimator for the generalization error of SVM based on a linear response method. Simulation results on a benchmark dataset show similar performance for the GP mean field algorithm and the SVM algorithm. The approximate leave-one-out estimator is found to be in very good agreement with the exact leave-one-out error.
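
For context, a minimal sketch of the exact leave-one-out error that the approximate estimator above is compared against, computed with scikit-learn's SVC on a synthetic dataset; the dataset and hyper-parameters are illustrative assumptions, not taken from the chapter:

```python
# Minimal sketch: exact leave-one-out error for an SVM classifier,
# i.e. the quantity the approximate linear-response estimator targets.
# Uses scikit-learn and a synthetic dataset purely for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=5, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
# Each fold leaves one example out, trains on the rest, and tests on it.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
loo_error = 1.0 - scores.mean()
print(f"exact leave-one-out error: {loo_error:.3f}")
```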

Relevance: 30.00%

Abstract:

Background: To evaluate the accuracy of an open-field autorefractor compared with subjective refraction in pseudophakes and hence its ability to assess objective eye focus with intraocular lenses (IOLs). Methods: Objective refraction was measured at 6 m using the Shin-Nippon NVision-K 5001/Grand Seiko WR-5100K open-field autorefractor (five repeats) and by subjective refraction on 141 eyes implanted with a spherical (Softec1 n=53), aspherical (SoftecHD n=37) or accommodating (1CU n=22; Tetraflex n=29) IOL. Autorefraction was repeated 2 months later. Results: The autorefractor prescription was similar (average difference: 0.09±0.53 D; p=0.19) to that found by subjective refraction, with ~71% within ±0.50 D. The horizontal cylindrical components were similar (difference: 0.00±0.39 D; p=0.96), although the oblique (J45) autorefractor cylindrical vector was slightly more negative (by -0.06±0.25 D; p=0.06) than the subjective refraction. The results were similar for each of the IOL designs except for the spherical IOL, where the mean spherical equivalent difference between autorefraction and subjective refraction was more hypermetropic than for the Tetraflex accommodating IOL (F=2.77, p=0.04). The intrasession repeatability was
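
The astigmatic components (J0, J45) referred to above are the standard power-vector form of a sphere/cylinder/axis prescription. A minimal sketch of that conversion, assuming the usual Thibos-style formulas and minus-cylinder notation; the example values are illustrative:

```python
# Minimal sketch: converting a sphere/cylinder/axis prescription into the
# power-vector components (M, J0, J45) used for the comparisons above.
# Standard Thibos-style conversion; axis in degrees, cylinder in minus form.
import math

def power_vector(sphere, cylinder, axis_deg):
    a = math.radians(axis_deg)
    M = sphere + cylinder / 2.0                # mean spherical equivalent
    J0 = -(cylinder / 2.0) * math.cos(2 * a)   # 0/90 degree astigmatism
    J45 = -(cylinder / 2.0) * math.sin(2 * a)  # oblique astigmatism
    return M, J0, J45

# Example: -1.00 / -0.50 x 180
print(power_vector(-1.00, -0.50, 180))  # (-1.25, 0.25, ~0.0)
```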

Relevance: 30.00%

Abstract:

In many problems in spatial statistics it is necessary to infer a global problem solution by combining local models. A principled approach to this problem is to develop a global probabilistic model for the relationships between local variables and to use this as the prior in a Bayesian inference procedure. We show how a Gaussian process with hyper-parameters estimated from Numerical Weather Prediction Models yields meteorologically convincing wind fields. We use neural networks to make local estimates of wind vector probabilities. The resulting inference problem cannot be solved analytically, but Markov Chain Monte Carlo methods allow us to retrieve accurate wind fields.
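
As a rough illustration of the inference step, the sketch below runs a generic random-walk Metropolis sampler on a toy 1-D field whose posterior combines independent local likelihood terms with a squared-exponential Gaussian-process prior; all shapes, kernels and noise levels are assumptions for illustration, not the authors' implementation:

```python
# Minimal sketch (illustrative only): Metropolis-Hastings sampling of a
# posterior that combines independent local likelihood terms with a
# Gaussian-process prior over a 1-D field.
import numpy as np

rng = np.random.default_rng(0)
n = 20
x = np.linspace(0.0, 1.0, n)

# GP prior covariance (squared-exponential kernel) over the field values.
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2) + 1e-6 * np.eye(n)
K_inv = np.linalg.inv(K)

obs = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)  # noisy local estimates
noise_var = 0.3 ** 2

def log_post(f):
    log_lik = -0.5 * np.sum((obs - f) ** 2) / noise_var   # local likelihood terms
    log_prior = -0.5 * f @ K_inv @ f                      # GP prior
    return log_lik + log_prior

f = np.zeros(n)
lp = log_post(f)
samples = []
for _ in range(5000):
    prop = f + 0.05 * rng.standard_normal(n)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        f, lp = prop, lp_prop
    samples.append(f.copy())

print("posterior-mean field:", np.mean(samples[1000:], axis=0).round(2))
```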

Relevance: 30.00%

Abstract:

The ERS-1 satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of backscatter microwave radiation reflected by small ripples on the ocean surface induced by sea-surface winds, and so provides instantaneous snap-shots of wind flow over large areas of the ocean surface, known as wind fields. Inherent in the physics of the observation process is an ambiguity in wind direction; the scatterometer cannot distinguish whether the wind is blowing toward or away from the sensor device. This ambiguity implies that there is a one-to-many mapping between scatterometer data and wind direction. Current operational methods for wind field retrieval are based on the retrieval of wind vectors from satellite scatterometer data, followed by a disambiguation and filtering process that is reliant on numerical weather prediction models. The wind vectors are retrieved by the local inversion of a forward model, mapping scatterometer observations to wind vectors, and minimising a cost function in scatterometer measurement space. This thesis applies a pragmatic Bayesian solution to the problem. The likelihood is a combination of conditional probability distributions for the local wind vectors given the scatterometer data. The prior distribution is a vector Gaussian process that provides the geophysical consistency for the wind field. The wind vectors are retrieved directly from the scatterometer data by using mixture density networks, a principled method for modelling multi-modal conditional probability density functions. The complexity of the mapping and the structure of the conditional probability density function are investigated. A hybrid mixture density network, which incorporates the knowledge that the conditional probability distribution of the observation process is predominantly bi-modal, is developed. The optimal model, which generalises across a swathe of scatterometer readings, is better on key performance measures than the current operational model. Wind field retrieval is approached from three perspectives. The first is a non-autonomous method that confirms the validity of the model by retrieving the correct wind field 99% of the time from a test set of 575 wind fields. The second technique takes the maximum a posteriori (MAP) wind field retrieved from the posterior distribution as the prediction. For the third technique, Markov Chain Monte Carlo (MCMC) techniques were employed to estimate the mass associated with significant modes of the posterior distribution, and to make predictions based on the mode with the greatest associated mass. General methods for sampling from multi-modal distributions were benchmarked against a specific MCMC transition kernel designed for this problem. It was shown that the general methods were unsuitable for this application due to their computational expense. On a test set of 100 wind fields, the MAP estimate correctly retrieved 72 wind fields, whilst the sampling method correctly retrieved 73.
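
To illustrate what a mixture density network outputs, the sketch below evaluates a two-component Gaussian mixture conditional density from a toy linear "network" with random weights, mirroring the predominantly bi-modal case described above; the layer sizes and feature vector are hypothetical and no training is shown:

```python
# Minimal sketch (illustrative only): the output stage of a mixture density
# network with two Gaussian components, reflecting a predominantly bi-modal
# conditional density. Weights are random; no training shown.
import numpy as np

rng = np.random.default_rng(1)

def mdn_forward(features, W, b):
    """Map a feature vector to mixture parameters (weights, means, stds)."""
    z = W @ features + b                               # raw network outputs (6 values)
    logits, means, log_stds = z[:2], z[2:4], z[4:6]
    weights = np.exp(logits) / np.exp(logits).sum()    # softmax mixing weights
    stds = np.exp(log_stds)                            # positive standard deviations
    return weights, means, stds

def mixture_pdf(y, weights, means, stds):
    comps = np.exp(-0.5 * ((y - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return float(weights @ comps)

features = rng.standard_normal(4)     # e.g. local scatterometer measurements
W, b = rng.standard_normal((6, 4)), rng.standard_normal(6)
params = mdn_forward(features, W, b)
print("p(y=0.5 | features) =", mixture_pdf(0.5, *params))
```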

Relevance: 30.00%

Abstract:

PURPOSE: To validate a new miniaturised, open-field wavefront device which has been developed with the capacity to be attached to an ophthalmic surgical microscope or slit-lamp. SETTING: Solihull Hospital and Aston University, Birmingham, UK. DESIGN: Comparative non-interventional study. METHODS: The dynamic range of the Aston Aberrometer was assessed using a calibrated model eye. The validity of the Aston Aberrometer was compared to a conventional desk-mounted Shack-Hartmann aberrometer (Topcon KR1W) by measuring the refractive error and higher order aberrations of 75 dilated eyes with both instruments in random order. The Aston Aberrometer measurements were repeated five times to assess intra-session repeatability. Data were converted to vector form for analysis. RESULTS: The Aston Aberrometer had a large dynamic range of at least +21.0 D to -25.0 D. It gave similar measurements to a conventional aberrometer for mean spherical equivalent (mean difference ± 95% confidence interval: 0.02 ± 0.49D; correlation: r=0.995, p<0.001), astigmatic components (J0: 0.02 ± 0.15D; r=0.977, p<0.001; J45: 0.03 ± 0.28; r=0.666, p<0.001) and higher order aberrations RMS (0.02 ± 0.20D; r=0.620, p<0.001). Intraclass correlation coefficient assessments of intra-session repeatability for the Aston Aberrometer were excellent (spherical equivalent =1.000, p<0.001; astigmatic components J0 =0.998, p<0.001, J45=0.980, p<0.01; higher order aberrations RMS =0.961, p<0.001). CONCLUSIONS: The Aston Aberrometer gives valid and repeatable measures of refractive error and higher order aberrations over a large range. As it is able to measure continuously, it can provide direct feedback to surgeons during intraocular lens implantations and corneal surgery on the optical status of the visual system.

Relevance: 30.00%

Abstract:

The problem of efficiently computing affine vector operations (addition of two vectors and multiplication of a vector by a scalar over GF(q)), as well as the weight of a given vector, is important for many problems in coding theory, cryptography, VLSI technology, etc. In this paper we propose a new way of representing vectors over GF(3) and GF(4) and describe how these affine operations can be performed efficiently in this representation. Computing the weights of binary vectors is also discussed.
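
As an illustration of the kind of representation involved, the sketch below stores a GF(4) vector as two binary bit planes, so that addition is a pair of XORs and multiplication by the primitive element reduces to bit operations, and also computes the Hamming weight of a binary vector; this is one plausible representation for illustration, not necessarily the scheme proposed in the paper:

```python
# Minimal sketch: one possible bit-plane representation of vectors over GF(4)
# (elements 0, 1, w, w^2 stored as bit pairs hi*x + lo), with addition and
# scalar multiplication by w, plus the Hamming weight of a binary vector.
# Illustrative representation only, not necessarily the paper's scheme.

def gf4_add(a, b):
    """Add two GF(4) vectors stored as (hi, lo) bit planes; characteristic 2 => XOR."""
    return (a[0] ^ b[0], a[1] ^ b[1])

def gf4_mul_w(a):
    """Multiply every coordinate by w: x * (hi*x + lo) mod x^2 + x + 1."""
    hi, lo = a
    return (hi ^ lo, hi)

def weight(v):
    """Hamming weight of a binary vector packed into an integer."""
    return bin(v).count("1")

# Vector (w, 1, 0) over GF(4): coordinates are bit pairs (1,0), (0,1), (0,0).
u = (0b100, 0b010)          # hi plane, lo plane
print(gf4_mul_w(u))         # (w^2, w, 0) -> hi=0b110, lo=0b100
print(weight(0b101101))     # 4
```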

Relevance: 30.00%

Abstract:

We report on a new vector model of an erbium-doped fibre laser mode locked with carbon nanotubes. This model goes beyond the limitations of the previously used models based on either coupled nonlinear Schrödinger or Ginzburg-Landau equations. Unlike the previous models, it accounts for the vector nature of the interaction between an optical field and an erbium-doped active medium, the slow relaxation dynamics of erbium ions, linear birefringence in the fibre, the linear and circular birefringence of the laser cavity caused by an in-cavity polarization controller, and light-induced anisotropy caused by an elliptically polarized pump field. The interplay of these factors changes the coherent coupling of the two polarization modes on long time scales and so results in a new family of vector solitons (VSs) with fast and slowly evolving states of polarization. The observed VSs could be of interest for secure communications, trapping and manipulation of atoms and nanoparticles, control of magnetization in data storage devices, and many other areas.

Relevance: 30.00%

Abstract:

Background: DNA-binding proteins play a pivotal role in various intra- and extra-cellular activities ranging from DNA replication to gene expression control. Identification of DNA-binding proteins is one of the major challenges in the field of genome annotation. Several computational methods have been proposed in the literature to deal with DNA-binding protein identification. However, most of them do not provide a valuable knowledge base for our understanding of DNA-protein interactions. Results: We first present a new protein sequence encoding method called PSSM Distance Transformation, and then construct a DNA-binding protein identification method (SVM-PSSM-DT) by combining PSSM Distance Transformation with a support vector machine (SVM). First, the PSSM profiles are generated by using the PSI-BLAST program to search the non-redundant (NR) database. Next, the PSSM profiles are transformed into uniform numeric representations by the distance transformation scheme. Lastly, the resulting uniform numeric representations are input into an SVM classifier for prediction. In this way, whether or not a sequence can bind to DNA can be determined. In a benchmark test on 525 DNA-binding and 550 non-DNA-binding proteins using jackknife validation, the present model achieved an ACC of 79.96%, an MCC of 0.622 and an AUC of 86.50%. This performance is considerably better than most of the existing state-of-the-art predictive methods. When tested on a recently constructed independent dataset, PDB186, SVM-PSSM-DT also achieved the best performance, with an ACC of 80.00%, an MCC of 0.647 and an AUC of 87.40%, outperforming some existing state-of-the-art methods. Conclusions: The experimental results demonstrate that PSSM Distance Transformation is an effective protein sequence encoding method and that SVM-PSSM-DT is a useful tool for identifying DNA-binding proteins. A user-friendly web server for SVM-PSSM-DT has been constructed and is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/PSSM-DT/.
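
A minimal sketch of the overall pipeline (PSSM profile -> fixed-length feature vector -> SVM) is given below; the feature step uses simple per-column statistics as a stand-in, since the actual PSSM Distance Transformation is defined in the paper, and the toy profiles and labels are synthetic:

```python
# Minimal sketch of the overall pipeline (PSSM profile -> fixed-length
# feature vector -> SVM). The feature step here is a simple column summary
# used as a stand-in; the paper's actual PSSM Distance Transformation differs.
import numpy as np
from sklearn.svm import SVC

def pssm_features(pssm):
    """Collapse an L x 20 PSSM into a fixed-length vector (mean and std
    per amino-acid column), independent of sequence length L."""
    return np.concatenate([pssm.mean(axis=0), pssm.std(axis=0)])

rng = np.random.default_rng(0)
# Toy stand-ins for PSI-BLAST profiles of binding / non-binding proteins.
pssms = [rng.normal(size=(rng.integers(50, 300), 20)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)          # 1 = DNA-binding, 0 = not

X = np.vstack([pssm_features(p) for p in pssms])
clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```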

Relevance: 30.00%

Abstract:

The move from Standard Definition (SD) to High Definition (HD) represents a six-fold increase in the data that needs to be processed. With expanding resolutions and evolving compression, there is a need for high performance with flexible architectures to allow for quick upgradability. Technology continues to advance in image display resolutions, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, but with trade-offs among processing performance (achieving specified frame rates on large image data sets), power and cost constraints. New architectures are needed to keep pace with the fast innovations in video and imaging. This dissertation presents dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance. The contributions of the dissertation are as follows. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe-distance factor and develop an algorithm for detecting occlusion occurrence during target tracking. A novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure. The method is analysed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature vector size and a gradient threshold approach for accurate classification. (4) We design a gesture recognition system using a hardware/software co-simulation neural network for the high speed and low memory storage requirements provided by the FPGA. We develop an innovative maximum-distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system design. The gesture sets involved in different applications may vary; therefore, it is highly essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
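
As a software-only illustration of contribution (1), the sketch below performs background subtraction with a running-average background model and a single global threshold derived from the mean frame difference; this is a simplified stand-in, not the dissertation's FPGA design or its exact RAMT formulation, and the frame data are synthetic:

```python
# Minimal sketch (software only, not the FPGA design): background subtraction
# with a running-average background model and a single global threshold taken
# from the mean absolute frame difference, in the spirit of an adaptive
# "running average mean threshold".
import numpy as np

def detect_targets(frames, alpha=0.05, k=2.0):
    """Yield a boolean foreground mask per frame."""
    background = frames[0].astype(np.float64)
    for frame in frames[1:]:
        diff = np.abs(frame.astype(np.float64) - background)
        threshold = k * diff.mean()          # global, scene-adaptive threshold
        mask = diff > threshold
        # Update the background with a running (exponential) average.
        background = (1 - alpha) * background + alpha * frame
        yield mask

# Toy frame sequence: static scene with a bright moving block.
rng = np.random.default_rng(0)
frames = rng.integers(90, 110, size=(10, 64, 64)).astype(np.uint8)
for i, f in enumerate(frames):
    f[20 + i:30 + i, 20:30] = 255
for i, mask in enumerate(detect_targets(list(frames))):
    print(f"frame {i + 1}: {int(mask.sum())} foreground pixels")
```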

Relevance: 30.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 30.00%

Abstract:

The recently reported Monte Carlo Random Path Sampling method (RPS) is here improved and its application is expanded to the study of the 2D and 3D Ising and discrete Heisenberg models. The methodology was implemented to allow use in both CPU-based high-performance computing infrastructures (C/MPI) and GPU-based (CUDA) parallel computation, with significant computational performance gains. Convergence is discussed, both in terms of free energy and magnetization dependence on field/temperature. From the calculated magnetization-energy joint density of states, fast calculations of field and temperature dependent thermodynamic properties are performed, including the effects of anisotropy on coercivity, and the magnetocaloric effect. The emergence of first-order magneto-volume transitions in the compressible Ising model is interpreted using the Landau theory of phase transitions. Using metallic Gadolinium as a real-world example, the possibility of using RPS as a tool for computational magnetic materials design is discussed. Experimental magnetic and structural properties of a Gadolinium single crystal are compared to RPS-based calculations using microscopic parameters obtained from Density Functional Theory.
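
As an illustration of why the joint density of states enables fast property calculations, the sketch below computes a field- and temperature-dependent magnetization average by reweighting a toy g(E, M) array; the grids and values are synthetic placeholders, not RPS output:

```python
# Minimal sketch: once a joint density of states g(E, M) is available (here a
# toy array), field- and temperature-dependent averages follow from a single
# reweighting, which is what makes fast thermodynamic calculations possible.
import numpy as np

rng = np.random.default_rng(0)
E = np.linspace(-2.0, 2.0, 41)          # energy grid (arbitrary units)
M = np.linspace(-1.0, 1.0, 21)          # magnetization grid
g = rng.random((E.size, M.size))        # toy joint density of states g(E, M)

def magnetization(T, H):
    """Thermal average <M>(T, H) from the joint density of states."""
    boltz = g * np.exp(-(E[:, None] - H * M[None, :]) / T)
    Z = boltz.sum()
    return (boltz * M[None, :]).sum() / Z

for T in (0.5, 1.0, 2.0):
    print(f"T={T}: <M> at H=0.2 ->", round(magnetization(T, 0.2), 3))
```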

Relevance: 30.00%

Abstract:

In this paper we consider instabilities of localised solutions in planar neural field firing rate models of Wilson-Cowan or Amari type. Importantly we show that angular perturbations can destabilise spatially localised solutions. For a scalar model with Heaviside firing rate function we calculate symmetric one-bump and ring solutions explicitly and use an Evans function approach to predict the point of instability and the shapes of the dominant growing modes. Our predictions are shown to be in excellent agreement with direct numerical simulations. Moreover, beyond the instability our simulations demonstrate the emergence of multi-bump and labyrinthine patterns. With the addition of spike-frequency adaptation, numerical simulations of the resulting vector model show that it is possible for structures without rotational symmetry, and in particular multi-bumps, to undergo an instability to a rotating wave. We use a general argument, valid for smooth firing rate functions, to establish the conditions necessary to generate such a rotational instability. Numerical continuation of the rotating wave is used to quantify the emergent angular velocity as a bifurcation parameter is varied. Wave stability is found via the numerical evaluation of an associated eigenvalue problem.
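
For reference, the scalar Amari-type neural field with a Heaviside firing rate referred to above is commonly written in the following form (a minimal sketch of the assumed model, with threshold h and coupling kernel w):

```latex
% Scalar Amari-type neural field with a Heaviside firing rate H and
% threshold h, of the kind analysed above.
\frac{\partial u(\mathbf{r},t)}{\partial t} = -u(\mathbf{r},t)
  + \int_{\mathbb{R}^2} w\!\left(|\mathbf{r}-\mathbf{r}'|\right)\,
    H\!\left(u(\mathbf{r}',t) - h\right)\, d\mathbf{r}'
```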

Relevance: 30.00%

Abstract:

We introduce quantum sensing schemes for measuring very weak forces with a single trapped ion. They use the spin-motional coupling induced by the laser-ion interaction to transfer the relevant force information to the spin degree of freedom. Therefore, the force estimation is carried out simply by observing the Ramsey-type oscillations of the ion spin states. Three quantum probes are considered, which are represented by systems obeying the Jaynes-Cummings, quantum Rabi (in 1D) and Jahn-Teller (in 2D) models. By using dynamical decoupling schemes in the Jaynes-Cummings and Jahn-Teller models, our force sensing protocols can be made robust to the spin dephasing caused by thermal and magnetic field fluctuations. In the quantum-Rabi probe, the residual spin-phonon coupling vanishes, which makes this sensing protocol naturally robust to thermally induced spin dephasing. We show that the proposed techniques can be used to sense the axial and transverse components of the force with a sensitivity beyond the yN/√Hz range, i.e. in the xN/√Hz range (xennonewton, 10^-27 N). The Jahn-Teller protocol, in particular, can be used to implement a two-channel vector spectrum analyzer for measuring ultra-low voltages.

Relevance: 30.00%

Abstract:

Radio relics are one of the different types of diffuse radio sources present in a fraction of galaxy clusters. They are characterized by elongated arc-like shapes, with sizes that range between 0.5 and 2 Mpc, and highly polarized emission (up to ∼60%) at GHz frequencies. The linearly polarized radiation of relics, propagating through the magnetized plasma of the intracluster medium (ICM), is affected by a rotation of the linear polarization vector. This effect, known as Faraday rotation, can cause depolarization. Its study allows us to constrain the magnetic field projected along the line of sight. The aim of this thesis work is to constrain the magnetic field intensity and distribution in the periphery of the cluster PSZ2 G096.88+24.18: this cluster hosts a pair of radio relics that can be used for polarization analysis. To analyse the polarization properties of the relics in PSZ2 G096.88+24.18, we used new Jansky Very Large Array (VLA) observations together with archival observations. The polarization study was performed using the Rotation Measure Synthesis technique, which allows us to recover polarization while minimizing bandwidth depolarization. Thanks to this technique, we recovered more polarization from the southern relic with respect to previous works. We also studied the depolarization trend with resolution for the southern relic, and found that the polarization fraction decreases with beam size. Finally, we produced simulated magnetic field models, varying the auto-correlation length of the magnetic field, in order to reproduce the observed depolarization trend in the southern relic. Comparing our observational results and model predictions, we were able to constrain the scales over which the turbulent magnetic field varies within the cluster. We conclude that the depolarization observed in the southern relic is likely due to external depolarization caused by the magnetized ICM distribution within the cluster.
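
For reference, the Faraday rotation analysed above is usually quantified through the rotation measure RM, written in the standard convention (with the electron density n_e in cm^-3, the line-of-sight field B_parallel in μG, and the path length in pc):

```latex
% Standard Faraday rotation relations used in RM synthesis: the observed
% polarization angle rotates with wavelength squared, with the rotation
% measure set by the line-of-sight magnetic field and electron density.
\chi_{\mathrm{obs}}(\lambda) = \chi_0 + \mathrm{RM}\,\lambda^2, \qquad
\mathrm{RM} = 0.812 \int_{\mathrm{source}}^{\mathrm{observer}}
  n_e\, B_\parallel\, dl \;\; \mathrm{rad\,m^{-2}}
```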