878 results for Topographic correction
Abstract:
Electrocardiography (ECG) has recently been proposed as a biometric trait for identification purposes. Intra-individual variations of the ECG might affect identification performance. These variations are mainly due to Heart Rate Variability (HRV). In particular, HRV causes changes in the QT intervals along the ECG waveforms. This work analyses the influence of seven QT interval correction methods (based on population models) on the performance of ECG-fiducial-based identification systems. In addition, we also considered the influence of the training-set size, the classifier, the classifier ensemble, and the number of consecutive heartbeats in a majority voting scheme. The ECG signals used in this study were collected from thirty-nine subjects in the Physionet open-access database. Public-domain software was used for fiducial point detection. Results suggested that QT correction is indeed required to improve performance; however, there is no clear choice among the seven explored QT correction approaches (identification rate between 0.97 and 0.99). MultiLayer Perceptron and Support Vector Machine classifiers showed better generalisation capabilities, in terms of classification performance, than Decision Tree-based classifiers. No strong influence of the training-set size or of the number of consecutive heartbeats in the majority voting scheme was observed.
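For illustration, a minimal Python sketch of two ingredients the abstract names: population-model QT correction and majority voting over consecutive heartbeats. The four correction formulas below (Bazett, Fridericia, Framingham, Hodges) are standard textbook examples, not necessarily the exact seven methods compared in the study.

from collections import Counter

def qtc(qt_s, rr_s, method="bazett"):
    # Heart-rate-corrected QT interval; QT and RR both in seconds.
    if method == "bazett":
        return qt_s / rr_s ** 0.5
    if method == "fridericia":
        return qt_s / rr_s ** (1.0 / 3.0)
    if method == "framingham":
        return qt_s + 0.154 * (1.0 - rr_s)
    if method == "hodges":  # 1.75 ms per bpm above 60, expressed in seconds
        return qt_s + 0.00175 * (60.0 / rr_s - 60.0)
    raise ValueError("unknown method: " + method)

def majority_vote(beat_labels):
    # Identity decision over n consecutive heartbeats (ties: first mode wins).
    return Counter(beat_labels).most_common(1)[0][0]

print(qtc(0.38, 0.80, "fridericia"))                  # ~0.409 s
print(majority_vote(["subj07", "subj07", "subj21"]))  # subj07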
Abstract:
The paper describes a hybrid pattern recognition method developed by research groups from Russia, Armenia and Spain. The method is based on logical correction over a set of conventional neural networks: the output matrices of the neural networks are processed according to the potentiality principle, which increases recognition reliability.
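The abstract does not spell out the potentiality principle, so the following Python sketch is a generic stand-in only: it combines the output matrices of several networks by score averaging followed by an argmax decision, a common baseline for corrector-style ensembles.

import numpy as np

def combine_outputs(output_matrices):
    # output_matrices: list of (n_samples, n_classes) score matrices,
    # one per trained network; returns one class decision per sample.
    stacked = np.stack(output_matrices)  # (n_nets, n_samples, n_classes)
    return stacked.mean(axis=0).argmax(axis=1)

nets = [np.random.rand(5, 3) for _ in range(4)]  # four toy networks
print(combine_outputs(nets))                     # five ensemble decisions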
Abstract:
Malapropism is a semantic error that is hard to detect because it usually retains the syntactic links between words in the sentence but replaces one content word with a similar word of quite different meaning. A method for automatic detection of malapropisms is described, based on Web statistics and a specially defined Semantic Compatibility Index (SCI). For correction of the detected errors, special dictionaries and heuristic rules are proposed, which retain only a few highly SCI-ranked correction candidates for the user's selection. Experiments on Web-assisted detection and correction of Russian malapropisms are reported, demonstrating the efficacy of the described method.
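The paper's exact SCI definition is not reproduced here; as a hedged illustration, the Python sketch below scores word-pair compatibility with a PMI-style statistic over hypothetical web hit counts, flagging pairs that co-occur far less often than chance.

import math

WEB_HITS = {  # hypothetical search-engine page counts
    ("travel", "visa"): 90_000, ("travel", "vista"): 400,
    "travel": 5_000_000, "visa": 2_000_000, "vista": 1_500_000,
}
TOTAL_PAGES = 1_000_000_000  # assumed index size

def sci(w1, w2):
    # PMI-style score: log of joint frequency over expected frequency.
    joint = WEB_HITS.get((w1, w2), 0) / TOTAL_PAGES
    if joint == 0.0:
        return float("-inf")  # never co-occur: likely malapropism
    p1 = WEB_HITS[w1] / TOTAL_PAGES
    p2 = WEB_HITS[w2] / TOTAL_PAGES
    return math.log(joint / (p1 * p2))

# Rank correction candidates for a suspicious word against its context:
print(sci("travel", "visa") > sci("travel", "vista"))  # True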
Abstract:
Presbyopia is a consequence of ageing and is therefore increasing in prevalence due to an increase in the ageing population. Of the many methods available to manage presbyopia, the use of contact lenses is a tried and tested reversible option for those wishing to be spectacle free. Contact lens options to correct presbyopia include multifocal contact lenses and monovision. Several options have been available for many years, with guides available to help choose multifocal contact lenses. However, there is no comprehensive way to help the practitioner select the best option for an individual. An examination of the simplest way of predicting the most suitable multifocal lens for a patient will only enhance and add to the current evidence available. The purpose of the study was to determine the current use of presbyopic correction modalities in an optometric practice population in the UK, to evaluate and compare the optical performance of four silicone hydrogel soft multifocal contact lenses, and to compare multifocal performance with contact lens monovision. The principal forms of refractive correction in the presbyopic practice cohort were distance spectacles (with near and intermediate vision provided by a variety of other forms of correction), varifocal spectacles, and unaided distance with reading spectacles, with few patients wearing contact lenses as their primary correction modality. The results of the multifocal contact lens randomised controlled trial showed that there were only minor differences in corneal physiology between the lens options. Visual acuity differences were observed for distance targets, but only for low-contrast letters and under mesopic lighting conditions. At closer distances, between 20 cm and 67 cm, the defocus curves demonstrated significant differences in acuity between lens designs (p < 0.001) and an interaction between lens design and level of defocus (p < 0.001). None of the lenses showed a clear near addition, perhaps due to their more aspheric rather than zoned design. As expected, stereoacuity was reduced with monovision compared with the multifocal contact lens designs, although there were also some differences between the multifocal lens designs (p < 0.05). Reading speed did not differ between lens designs (F = 1.082, p = 0.368), whereas there was a significant difference in critical print size (F = 7.543, p < 0.001). Glare was quantified with a novel halometer, and halo size was found to differ significantly between lenses (F = 4.101, p = 0.004). The rating of iPhone image clarity was significantly different between presbyopic corrections (p = 0.002), as was the Near Acuity Visual Questionnaire (NAVQ) rating of near performance (F = 3.730, p = 0.007). Pupil size did not alter with contact lens design (F = 1.614, p = 0.175), but was larger in the dominant eye (F = 5.489, p = 0.025). Pupil decentration relative to the optical axis did not alter with contact lens design (F = 0.777, p = 0.542), but was also greater in the dominant eye (F = 9.917, p = 0.003). Interestingly, there was no difference in induced spherical aberration between the contact lens designs (p > 0.05), with eye dominance (p > 0.05) or with optical component (ocular, corneal or internal; p > 0.05). In terms of subjective patient lens preference, 10 patients preferred monovision, 12 the Biofinity multifocal lens, 7 Purevision 2 for Presbyopia, 4 AirOptix multifocal and 2 Oasys multifocal contact lenses.
However, there were no differences in demographic factors relating to lifestyle or personality, or in physiological characteristics such as pupil size or ocular aberrations measured at baseline, that would allow a practitioner to identify which lens modality a patient would prefer. In terms of the performance of patients with their preferred lens, patients who preferred the Biofinity multifocal lens had better high-contrast acuity under photopic conditions, maintained their reading speed at smaller print sizes, and subjectively rated iPhone clarity as better with this lens than with the other lens designs trialled. Patients who preferred monovision had lower acuity across a range of distances and a larger area of glare than patients preferring other lens designs, which was unexplained by the clinical metrics measured; it seems instead that a complex interaction of aberrations may drive lens preference. New clinical tests, or more diverse lens designs, that would allow practitioners to prescribe the presbyopic contact lens option that works best for each patient first time remain a hope for the future.
Abstract:
Identification of humans via the ECG is being increasingly studied because it can have several advantages over traditional biometric identification techniques. However, difficulties arise because of heart-rate variability. In this study we analysed the influence of QT interval correction on the performance of an identification system based on temporal and amplitude features of the ECG. In particular, we tested MLP, Naive Bayes and 3-NN classifiers on the Fantasia database. Results indicate that QT correction can significantly improve overall system performance. © 2013 IEEE.
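A minimal sketch of the evaluation setup described above, with synthetic stand-in features (the study's temporal and amplitude ECG features from the Fantasia database are not reproduced here); feature and subject counts are illustrative.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))     # 12 hypothetical fiducial features per beat
y = rng.integers(0, 10, size=400)  # 10 hypothetical subjects

for name, clf in [("MLP", MLPClassifier(max_iter=2000)),
                  ("Naive Bayes", GaussianNB()),
                  ("3-NN", KNeighborsClassifier(n_neighbors=3))]:
    # 5-fold cross-validated identification accuracy per classifier
    print(name, cross_val_score(clf, X, y, cv=5).mean())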
Abstract:
This paper considers the problem of low-dimensional visualisation of very high-dimensional information sources for the purpose of situation awareness in the maritime environment. In response to the requirement for human decision-support aids that reduce information overload in the below-water maritime domain (specifically, for data amenable to inter-point relative similarity measures), we are investigating a preliminary prototype topographic visualisation model. The focus of the current paper is the mathematical problem of exploiting a relative dissimilarity representation of signals in a visual informatics mapping model, driven by real-world sonar systems. A realistic noise model is explored and incorporated into non-linear and topographic visualisation algorithms building on the approach of [9]. Concepts are illustrated using a real-world dataset of 32 hydrophones monitoring a shallow-water environment in which targets are present and dynamic.
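As a baseline illustration of mapping a relative dissimilarity representation to a 2-D view (the paper's own model is noise-aware and more elaborate), here is metric multidimensional scaling on a precomputed dissimilarity matrix in Python; the signals are synthetic stand-ins.

import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
signals = rng.normal(size=(32, 1024))  # 32 synthetic hydrophone signals

# Pairwise dissimilarities (Euclidean here; the paper uses a domain-specific
# relative dissimilarity driven by a realistic sonar noise model).
diss = np.linalg.norm(signals[:, None, :] - signals[None, :, :], axis=-1)

view = MDS(n_components=2, dissimilarity="precomputed",
           random_state=0).fit_transform(diss)
print(view.shape)  # (32, 2): one plottable point per hydrophone signal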
Abstract:
Most machine-learning algorithms are designed for datasets with features of a single type, whereas very little attention has been given to datasets with mixed-type features. We recently proposed a model that handles mixed types within a probabilistic latent variable formalism. This model, called generalised generative topographic mapping (GGTM), describes the data by type-specific distributions that are conditionally independent given the latent space. It has often been observed that visualisations of high-dimensional datasets can be poor in the presence of noisy features. In this paper we therefore propose to extend GGTM to estimate feature saliency values (GGTMFS) as an integrated part of the parameter learning process, using an expectation-maximisation (EM) algorithm. The efficacy of the proposed GGTMFS model is demonstrated on both synthetic and real datasets.
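GGTMFS itself cannot be reconstructed from the abstract; as a simpler cousin of the same idea, the Python sketch below runs EM for feature saliency in a plain diagonal Gaussian mixture (Law-style feature saliency, not the paper's topographic model), where each feature's saliency rho mixes a cluster-specific density with a common background density.

import numpy as np

rng = np.random.default_rng(0)
# Toy data: feature 0 separates two clusters, feature 1 is pure noise.
X = np.vstack([rng.normal([-2.0, 0.0], 1.0, (100, 2)),
               rng.normal([2.0, 0.0], 1.0, (100, 2))])
N, D, K = X.shape[0], X.shape[1], 2

def norm_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

pi = np.full(K, 1.0 / K)                      # mixing weights
mu = X[rng.choice(N, size=K, replace=False)]  # component means (K, D)
var = np.ones((K, D))                         # component variances
lam, tau = X.mean(0), X.var(0)                # common background density
rho = np.full(D, 0.5)                         # feature saliencies

for _ in range(50):                           # EM iterations
    u = rho * norm_pdf(X[:, None, :], mu, var)           # relevant part
    v = (1.0 - rho) * norm_pdf(X[:, None, :], lam, tau)  # irrelevant part
    c = pi * np.prod(u + v, axis=2)
    gamma = c / c.sum(axis=1, keepdims=True)             # responsibilities
    a = gamma[:, :, None] * u / (u + v)                  # relevance posteriors
    b = gamma[:, :, None] * v / (u + v)
    pi = gamma.mean(axis=0)
    mu = (a * X[:, None, :]).sum(axis=0) / a.sum(axis=0)
    var = (a * (X[:, None, :] - mu) ** 2).sum(axis=0) / a.sum(axis=0)
    lam = (b * X[:, None, :]).sum(axis=(0, 1)) / b.sum(axis=(0, 1))
    tau = (b * (X[:, None, :] - lam) ** 2).sum(axis=(0, 1)) / b.sum(axis=(0, 1))
    rho = a.sum(axis=(0, 1)) / N
print(rho)  # expected: high saliency for feature 0, lower for noisy feature 1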
Abstract:
The focus of this thesis is the extension of topographic visualisation mappings to allow for the incorporation of uncertainty. Few visualisation algorithms in the literature are capable of mapping uncertain data, and fewer still are able to represent observation uncertainties in visualisations. Modifications are therefore made to NeuroScale, Locally Linear Embedding, Isomap and Laplacian Eigenmaps to incorporate uncertainty in the observation and visualisation spaces. The proposed mappings are called Normally-distributed NeuroScale (N-NS), T-distributed NeuroScale (T-NS), Probabilistic LLE (PLLE), Probabilistic Isomap (PIso) and Probabilistic Weighted Neighbourhood Mapping (PWNM). These algorithms generate a probabilistic visualisation space in which each latent visualised point is transformed to a multivariate Gaussian or T-distribution, using a feed-forward RBF network. Two types of uncertainty are then characterised, depending on the data and the mapping procedure: data-dependent uncertainty is the inherent observation uncertainty, whereas mapping uncertainty is defined by the Fisher information of a visualised distribution. The latter indicates how well the data have been interpolated, offering a level of 'surprise' for each observation. These new probabilistic mappings are tested on three datasets of vectorial observations and three datasets of real-world time series observations for anomaly detection. In order to visualise the time series data, a method for analysing observed signals and noise distributions, Residual Modelling, is introduced. The performance of the new algorithms on the tested datasets is compared qualitatively with the latent space generated by the Gaussian Process Latent Variable Model (GPLVM). A quantitative comparison using existing evaluation measures from the literature allows the performance of each mapping function to be compared. Finally, the mapping uncertainty measure is combined with NeuroScale to build a deep learning classifier, the Cascading RBF. This new structure is tested on the MNIST dataset, achieving world-record performance whilst avoiding the flaws seen in other deep learning machines.
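A toy of the central idea, under the simplifying assumption of a locally linear visualisation mapping W (the thesis's actual mappings are RBF networks): a Gaussian observation propagates to a Gaussian visualised point, whose covariance gives the uncertainty ellipse drawn in the latent space.

import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(2, 10))       # local linearisation of the mapping
mu = rng.normal(size=10)           # 10-D observation mean
Sigma = np.diag(rng.uniform(0.1, 1.0, size=10))  # observation uncertainty

latent_mean = W @ mu               # visualised 2-D point
latent_cov = W @ Sigma @ W.T       # propagated uncertainty (ellipse)
print(latent_mean, np.linalg.eigvalsh(latent_cov))  # squared ellipse axes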
Abstract:
PURPOSE: To determine the utility of a range of clinical and non-clinical indicators to aid the initial selection of the optimum presbyopic contact lens, and to assess whether lens preference was influenced by visual performance compared with the other designs trialled (intra-subject) or compared with participants who preferred other designs (inter-subject). METHODS: A double-masked randomised crossover trial of Air Optix Aqua multifocal, PureVision 2 for Presbyopia, Acuvue OASYS for Presbyopia, Biofinity multifocal and monovision was conducted on 35 presbyopes (54.3 ± 6.2 years). Participant lifestyle, personality, pupil characteristics and aberrometry were assessed prior to lens fitting. After 4 weeks of wear, high- and low-contrast visual acuity (VA) under photopic and mesopic conditions, reading speed, Near Activity Visual Questionnaire (NAVQ) rating, subjective quality-of-vision scoring, defocus curves, stereopsis, halometry, aberrometry and ocular physiology were quantified. RESULTS: After trialling all the lenses, preference was mixed (n = 12 Biofinity, n = 10 monovision, n = 7 Purevision, n = 4 Air Optix Aqua, n = 2 Oasys). Lens preference was not dependent on personality (F = 1.182, p = 0.323) or on the hours spent working at near (p = 0.535) or intermediate (p = 0.759) distances. No inter-subject or strong intra-subject relationships emerged between lens preference and reading speed, NAVQ rating, halo size, aberrometry or ocular physiology (p > 0.05). CONCLUSIONS: Participant lifestyle and personality, ocular optics, contact lens visual performance and ocular physiology provided poor indicators of the preferred lens type after 4 weeks of wear. This is confounded by the wide range of visual task demands of presbyopes and the limited optical differences between current multifocal contact lens designs.
Abstract:
Respiratory gating in lung PET imaging to compensate for respiratory motion artifacts is a current research issue with broad potential impact on the quantitation, diagnosis and clinical management of lung tumors. However, PET images collected at discrete bins can be significantly affected by noise, as there are lower activity counts in each gated bin unless the total PET acquisition time is prolonged, so gating methods should be combined with imaging-based motion correction and registration methods. The aim of this study was to develop and validate a fast and practical solution to the problem of respiratory motion for the detection and accurate quantitation of lung tumors in PET images. This included: (1) developing a computer-assisted algorithm for PET/CT images that automatically segments lung regions in CT images and identifies and localizes lung tumors in PET images; and (2) developing and comparing different registration algorithms that process all the information within the entire respiratory cycle and integrate the tumor data from the different gated bins into a single reference bin. Four registration/integration algorithms were compared: Centroid Based, Intensity Based, Rigid Body and Optical Flow registration, as well as two registration schemes: the Direct Scheme and the Successive Scheme. Validation was demonstrated by conducting experiments with the computerized 4D NCAT phantom and with a dynamic lung-chest phantom imaged using a GE PET/CT system. Iterations were conducted on simulated tumors of different sizes and at different noise levels. Static tumors without respiratory motion were used as the gold standard; quantitative results were compared with respect to tumor activity concentration, cross-correlation coefficient, relative noise level and computation time. Comparing the tumors before and after correction, the tumor activity values and tumor volumes were closer to those of the static tumors (gold standard). Higher correlation values and lower noise were also achieved after applying the correction algorithms. With this method, the compromise between short PET scan time and reduced image noise can be achieved, while quantification and clinical analysis become fast and precise.
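A hedged Python sketch of the simplest of the four compared algorithms, Centroid Based registration: each gated bin's volume is shifted so that its intensity centroid aligns with the reference bin, and the aligned bins are summed to re-integrate counts. The array sizes and single-voxel "tumor" are illustrative only.

import numpy as np
from scipy.ndimage import center_of_mass, shift

def register_bins_centroid(bins):
    # bins: list of 3-D activity arrays, one per respiratory gate;
    # returns the motion-corrected sum with bins[0] as the reference.
    ref = np.array(center_of_mass(bins[0]))
    corrected = np.zeros_like(bins[0], dtype=float)
    for b in bins:
        displacement = ref - np.array(center_of_mass(b))
        corrected += shift(b.astype(float), displacement, order=1)
    return corrected

bins = []
for k in range(4):                 # a hot spot drifting across 4 gates
    vol = np.zeros((32, 32, 32))
    vol[10 + k, 15, 15] = 100.0
    bins.append(vol)
print(register_bins_centroid(bins).max())  # ~400: counts re-integrated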
Abstract:
This study investigates the impact of a combined treatment of Systematic Error Correction and Repeated Reading on reading rate and errors for 18-year-olds with undiagnosed reading difficulties on a Caribbean island. In addition to direct daily measures of reading accuracy, the Reading Self-Perception Scale was administered to determine whether the intervention was associated with changes in the way a student perceives himself as a reader.