9 results for LVQ.


Relevance:

20.00%

Publisher:

Abstract:

This thesis presents a color segmentation approach for traffic sign recognition based on LVQ neural networks. RGB images were converted into the HSV color space and segmented with LVQ according to the hue and saturation values of each pixel. The LVQ neural network was used to segment the red, blue, and yellow regions of roads and traffic signs in order to detect and recognize the signs. LVQ was applied to 536 sample images taken in different countries under different conditions, reaching 89% accuracy; the per-image execution time, measured on a subset of 31 images, ranged from 0.726 s to 0.844 s. The method was tested under varied environmental conditions, and LVQ segmented color reasonably well despite considerable illumination differences, showing high robustness.
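A minimal sketch of the per-pixel step described above, assuming an already-trained LVQ codebook; the prototype coordinates and class labels here are illustrative stand-ins, not the thesis's trained parameters.

```python
import colorsys
import numpy as np

# Hypothetical LVQ prototypes in (hue, saturation) space; hue is in [0, 1].
prototypes = np.array([
    [0.00, 0.80],  # red
    [0.60, 0.70],  # blue
    [0.15, 0.75],  # yellow
    [0.30, 0.05],  # background (low saturation)
])
labels = ["red", "blue", "yellow", "background"]

def hue_dist(h1, h2):
    """Circular distance on the hue axis."""
    d = abs(h1 - h2)
    return min(d, 1.0 - d)

def segment_pixel(r, g, b):
    """Assign one RGB pixel to the class of its nearest prototype in HSV."""
    h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    dists = [np.hypot(hue_dist(h, ph), s - ps) for ph, ps in prototypes]
    return labels[int(np.argmin(dists))]

print(segment_pixel(200, 30, 40))  # -> "red"
```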

Relevance:

20.00%

Publisher:

Abstract:

Parkinson's disease (PD) is a degenerative illness whose cardinal symptoms include rigidity, tremor, and slowness of movement. In addition to its widely recognized effects, PD can have a profound effect on speech and voice. The speech symptoms most commonly demonstrated by patients with PD are reduced vocal loudness, monopitch, disruptions of voice quality, and an abnormally fast rate of speech; this cluster of symptoms is often termed hypokinetic dysarthria. The disease can be difficult to diagnose accurately, especially in its early stages, so automatic techniques based on artificial intelligence could increase diagnostic accuracy and help doctors make better decisions. The aim of this thesis is to predict PD from audio files collected from various patients. The audio files are preprocessed to obtain features; the preprocessed data contain 23 attributes and 195 instances, with on average six voice recordings per person. A data compression technique, the Discrete Cosine Transform (DCT), is used to reduce the number of instances. After compression, attribute selection is performed with several WEKA built-in methods such as ChiSquared, GainRatio, and InfoGain, and the attributes thus identified are evaluated one by one using stepwise regression. Based on the selected attributes, the data are processed in WEKA with a cost-sensitive classifier wrapping algorithms such as MultiPass LVQ, Logistic Model Tree (LMT), and K-Star. The classification results average about 80%; using the selected features, approximately 95% classification accuracy for PD is achieved. This shows that, using the audio dataset, PD can be predicted with a high level of accuracy.
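A hedged illustration of the instance-reduction step: the sketch below applies a DCT across one subject's six recordings and keeps only the lowest-order coefficient as a single compact instance. The array shapes and the number of retained coefficients are assumptions, not the thesis's exact setup.

```python
import numpy as np
from scipy.fft import dct

# One subject's six voice recordings, each described by 22 features (synthetic).
rng = np.random.default_rng(0)
recordings = rng.normal(size=(6, 22))

# A DCT along the recording axis concentrates the shared information in the
# low-order coefficients; keeping only the first row yields one instance per
# subject instead of six.
coeffs = dct(recordings, axis=0, norm="ortho")
compressed = coeffs[0]

print(compressed.shape)  # (22,)
```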

Relevance:

20.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

10.00%

Publisher:

Abstract:

In this paper we describe the Large Margin Vector Quantization algorithm (LMVQ), which uses gradient ascent to maximise the margin of a radial basis function classifier. We present a derivation of the algorithm, which proceeds from an estimate of the class-conditional probability densities. We show that the key behaviours of Kohonen's well-known LVQ2 and LVQ3 algorithms emerge as natural consequences of our formulation. We compare the performance of LMVQ with that of Kohonen's LVQ algorithms on an artificial classification problem and several well-known benchmark classification tasks. We find that the classifiers produced by LMVQ attain a level of accuracy that compares well with those obtained via LVQ1, LVQ2 and LVQ3, with reduced storage complexity. We indicate future directions of enquiry based on the large-margin approach to Learning Vector Quantization.
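For reference, a minimal sketch of Kohonen's LVQ2.1 window update, the behaviour the abstract says emerges naturally from the large-margin formulation; the learning rate and window width below are illustrative defaults, not values from the paper.

```python
import numpy as np

def lvq2_update(x, y, protos, proto_labels, lr=0.05, w=0.3):
    """One LVQ2.1 step: if the two nearest prototypes straddle the class
    boundary near x (the 'window' rule), pull the correct one toward x
    and push the incorrect one away."""
    d = np.linalg.norm(protos - x, axis=1)
    i, j = np.argsort(d)[:2]                    # two nearest prototypes
    in_window = d[i] / d[j] > (1.0 - w) / (1.0 + w)
    straddles = (proto_labels[i] != proto_labels[j]
                 and y in (proto_labels[i], proto_labels[j]))
    if in_window and straddles:
        for k in (i, j):
            sign = 1.0 if proto_labels[k] == y else -1.0
            protos[k] += sign * lr * (x - protos[k])
    return protos
```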

Relevance:

10.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics; the decision rests on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed, based on the generalized Gaussian distribution. A least-squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed: no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while accounting for the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine the lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of the reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of the reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms; to evaluate them, objective and subjective performance comparisons with other available techniques are presented.

The quality of the reconstructed images is important for reliable identification, so enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local dominant ridge directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
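As a hedged illustration of the coefficient-modelling step, the sketch below fits the shape parameter of a generalized Gaussian to a subband of coefficients using the classical moment-ratio estimator; this is a stand-in for the thesis's least-squares formulation, and the coefficients are synthetic Laplacian samples (true shape parameter 1).

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import brentq

def moment_ratio(beta):
    """E|x| / sqrt(E x^2) for a zero-mean generalized Gaussian of shape beta."""
    return np.exp(gammaln(2.0 / beta)
                  - 0.5 * (gammaln(1.0 / beta) + gammaln(3.0 / beta)))

coeffs = np.random.default_rng(1).laplace(size=10_000)  # stand-in subband
ratio = np.mean(np.abs(coeffs)) / np.sqrt(np.mean(coeffs**2))

# Invert the monotone moment ratio to recover the shape parameter.
beta = brentq(lambda b: moment_ratio(b) - ratio, 0.1, 5.0)
print(f"estimated shape parameter: {beta:.2f}")  # ~1.0 for Laplacian data
```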

Relevance:

10.00%

Publisher:

Abstract:

Adsorption of Reactive Blue 19 dye onto activated red mud was investigated. Red mud was treated with hydrogen peroxide (LVQ) and then heated at either 400 °C (LVQ400) or 500 °C (LVQ500). These samples were characterized by pH, specific surface area, point of zero charge, and mineralogical composition. Adsorption was found to depend significantly on solution pH, with acidic conditions proving the most favorable. The adsorption followed pseudo-second-order kinetics. The Langmuir isotherm was the most appropriate to describe the dye removal by LVQ, LVQ400 and LVQ500, with maximum adsorption capacities of 384.62, 357.14 and 454.54 mg g-1, respectively.
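A minimal sketch of the Langmuir fit named above, q = qmax*K*C / (1 + K*C); the equilibrium data points below are synthetic values generated around the reported capacity, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qmax, K):
    """Adsorbed amount q (mg/g) vs. equilibrium concentration C (mg/L)."""
    return qmax * K * C / (1.0 + K * C)

# Synthetic isotherm data with mild noise (qmax and K chosen for illustration).
C = np.array([10, 25, 50, 100, 200, 400], dtype=float)
q = langmuir(C, 384.62, 0.05) * (1 + 0.02 * np.random.default_rng(2).normal(size=C.size))

(qmax, K), _ = curve_fit(langmuir, C, q, p0=(300.0, 0.01))
print(f"qmax = {qmax:.1f} mg/g, K = {K:.3f} L/mg")
```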

Relevance:

10.00%

Publisher:

Abstract:

INTRODUCTION: Post-mortem cardiac MR exams present with different contraction appearances of the left ventricle in cardiac short-axis images. It was hypothesized that the grade of post-mortem contraction may be related to the post-mortem interval (PMI) or the cause of death, and that it is a phenomenon caused by internal rigor mortis that may give further insight into the circumstances of death. METHOD AND MATERIALS: The cardiac contraction grade was investigated in 71 post-mortem cardiac MR exams (mean age at death 52 y, range 12-89 y; 48 males, 23 females). In cardiac short-axis images, the left ventricular lumen volume and the left ventricular myocardial volume were assessed by manual segmentation; the quotient of the two (LVQ) represents the grade of myocardial contraction. LVQ was correlated with PMI, sex, age, cardiac weight, body mass and height, cause of death, and pericardial tamponade when present. For cardiac causes of death, a separate correlation was investigated for acute myocardial infarction cases and arrhythmic deaths. RESULTS: LVQ values ranged from 1.99 (maximum dilatation) to 42.91 (maximum contraction), with a mean of 15.13. LVQ decreased slightly with increasing PMI, but without significant correlation. Pericardial tamponade correlated positively with higher LVQ values. Variables such as sex, age, body mass and height, cardiac weight, and cause of death did not correlate with LVQ values. There was no difference in LVQ values between myocardial infarction without tamponade and arrhythmic deaths. CONCLUSION: Based on the observations in our cases, the phenomenon of post-mortem myocardial contraction cannot be explained by the investigated variables, except in pericardial tamponade cases. Further research addressing post-mortem myocardial contraction must focus on other, less obvious factors that may also influence the early post-mortem phase.
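The contraction grade is a simple quotient. Since the abstract reports higher LVQ for stronger contraction (i.e. a smaller lumen), the sketch below assumes myocardial volume divided by lumen volume; that orientation is an inference from the reported range, not an explicit statement in the abstract, and the volumes are illustrative.

```python
def lv_quotient(myocardial_volume_ml, lumen_volume_ml):
    """Post-mortem contraction grade: larger when the lumen is small
    (assumed orientation: myocardial volume / lumen volume)."""
    return myocardial_volume_ml / lumen_volume_ml

print(lv_quotient(120.0, 8.0))   # strongly contracted ventricle -> 15.0
print(lv_quotient(120.0, 60.0))  # dilated ventricle -> 2.0
```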

Relevance:

10.00%

Publisher:

Abstract:

The increase in life expectancy in developed countries (over 80 years in 2013) is driving considerable growth in the incidence and prevalence of disabling diseases which, although they can appear at early ages, are more frequent in old age or close to it. Neuro-degenerative diseases impose a major functional handicap, since some of them are associated with involuntary movements of certain parts of the body, above all the limbs. Everyday tasks such as eating, dressing, writing, or interacting with a computer can become great challenges for those who suffer from them. Early and accurate diagnosis is fundamental for prescribing the optimal therapy or treatment, especially considering that in many cases, unfortunately the majority, one can only act to mitigate the symptoms, not cure them, at least for now. Even so, an early, correct diagnosis gives the patient a better quality of life for much longer, so the effort is well worth it. Patients with Parkinson's disease and essential tremor account for a significant share of the clinical caseload in movement disorders that prevent a normal life, causing physical disability and no less important social exclusion. The treatment paths differ, which makes it critical to reach the correct diagnosis as early as possible. To date, medical professionals and experts have used qualitative scales to differentiate the pathology and its degree of severity; these scales are also used for clinical follow-up and to record the patient's history. This thesis proposes a set of methods for the analysis and identification/classification of the types of tremor associated with Parkinson's disease and essential tremor, employing artificial intelligence techniques based on intelligent classifiers, namely neural networks (MLP and LVQ) and support vector machines (SVM), building on the development and deployment of a system for the objective measurement and analysis of tremor: DIMETER. Besides being an effective diagnostic aid, this system also provides the capabilities needed for rigorous and reliable monitoring of each patient's evolution.
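A minimal sketch of the classification stage, using one of the classifier families named above (an SVM via scikit-learn); the tremor features and labels are synthetic stand-ins for DIMETER measurements, not the thesis's data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 4))  # e.g. tremor amplitude/frequency features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 0 = essential tremor, 1 = PD (synthetic)

# RBF-kernel SVM, evaluated with 5-fold cross-validation.
clf = SVC(kernel="rbf", C=1.0)
print(cross_val_score(clf, X, y, cv=5).mean())
```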

Relevance:

10.00%

Publisher:

Abstract:

The aim of this study is to describe and compare the non-completion percentages of two recording instruments, the circulating nurse record (HC) and the surgical checklist (LVQ), in the same surgical setting and for a sample of patients with similar characteristics. Methodology: Descriptive study of the intraoperative records of 3024 orthopaedic and trauma surgery patients: 1732 patients operated on in 2009 using the circulating nurse record, completed at the end of the intervention, and 1292 operated on in 2010 using the surgical checklist, completed during the intervention at three time points. Descriptive statistics (mean, standard deviation, minimum and maximum) of the overall non-completion percentage were calculated for both records, as was the non-completion percentage (95% confidence interval) for each item of the records studied. Results: A higher overall completion percentage, and in general also higher per-item completion, was observed for the circulating nurse record than for the surgical checklist. Conclusions: The intraoperative record with the highest overall completion percentage was the circulating nurse record; this highlights the need to implement strategies to improve the degree of completion of the LVQ, given its relationship to patient safety.
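A minimal sketch of the per-item statistic reported above, the non-completion percentage with its 95% confidence interval, using the normal approximation; the counts are illustrative, not the study's data.

```python
import math

def non_completion_ci(missing, total, z=1.96):
    """Non-completion percentage with a normal-approximation 95% CI."""
    p = missing / total
    half = z * math.sqrt(p * (1 - p) / total)
    return 100 * p, 100 * (p - half), 100 * (p + half)

pct, lo, hi = non_completion_ci(missing=130, total=1292)
print(f"{pct:.1f}% (95% CI {lo:.1f}-{hi:.1f})")
```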