995 results for confusion matrix


Relevance:

70.00%

Publisher:

Abstract:

In the present study, Korean-English bilingual (KEB) and Korean monolingual (KM) children, between the ages of 8 and 13 years, and KEB adults, ages 18 and older, were examined with one speech perception task, called the Nonsense Syllable Confusion Matrix (NSCM) task (Allen, 2005), and two production tasks, called the Nonsense Syllable Imitation Task (NSIT) and the Nonword Repetition Task (NRT; Dollaghan & Campbell, 1998). The present study examined (a) which English sounds on the NSCM task were identified less well, presumably due to interference from Korean phonology, in bilinguals learning English as a second language (L2) and in monolinguals learning English as a foreign language (FL); (b) which English phonemes on the NSIT were more challenging for bilinguals and monolinguals to produce; (c) whether perception on the NSCM task is related to production on the NSIT, or phonological awareness, as measured by the NRT; and (d) whether perception and production differ in three age-language status groups (i.e., KEB children, KEB adults, and KM children) and in three proficiency subgroups of KEB children (i.e., English-dominant, ED; balanced, BAL; and Korean-dominant, KD). In order to determine English proficiency in each group, language samples were extensively and rigorously analyzed using software called Systematic Analysis of Language Transcripts (SALT). Length of samples in complete and intelligible utterances, number of different and total words (NDW and NTW, respectively), speech rate in words per minute (WPM), and number of grammatical errors, mazes, and abandoned utterances were measured and compared among the three initial groups and the three proficiency subgroups. Results of the language sample analysis (LSA) showed significant group differences only between the KEBs and the KM children, but not between the KEB children and adults.
Nonetheless, compared to normative means (from a sample length- and age-matched database provided by SALT), the KEB adult group and the KD subgroup produced English at significantly slower speech rates than expected for monolingual, English-speaking counterparts. Two existing models of bilingual speech perception and production—the Speech Learning Model or SLM (Flege, 1987, 1992) and the Perceptual Assimilation Model or PAM (Best, McRoberts, & Sithole, 1988; Best, McRoberts, & Goodell, 2001)—were considered to see if they could account for the perceptual and production patterns evident in the present study. The selected English sounds for stimuli in the NSCM task and the NSIT were 10 consonants, /p, b, k, g, f, θ, s, z, ʧ, ʤ/, and 3 vowels /I, ɛ, æ/, which were used to create 30 nonsense syllables in a consonant-vowel structure. Based on phonetic or phonemic differences between the two languages, English sounds were categorized either as familiar sounds—namely, English sounds that are similar, but not identical, to L1 Korean, including /p, k, s, ʧ, ɛ/—or unfamiliar sounds—namely, English sounds that are new to L1, including /b, g, f, θ, z, ʤ, I, æ/. The results of the NSCM task showed that (a) consonants were perceived correctly more often than vowels, (b) familiar sounds were perceived correctly more often than unfamiliar ones, and (c) familiar consonants were perceived correctly more often than unfamiliar ones across the three age-language status groups and across the three proficiency subgroups; and (d) the KEB children perceived correctly more often than the KEB adults, the KEB children and adults perceived correctly more often than the KM children, and the ED and BAL subgroups perceived correctly more often than the KD subgroup. The results of the NSIT showed (a) consonants were produced more accurately than vowels, and (b) familiar sounds were produced more accurately than unfamiliar ones, across the three age-language status groups. 
Also, (c) familiar consonants were produced more accurately than unfamiliar ones in the KEB and KM child groups, and (d) unfamiliar vowels were produced more accurately than a familiar one in the KEB child group, but the reverse was true in the KEB adult and KM child groups. The KEB children produced sounds correctly significantly more often than the KM children and the KEB adults, though the percent correct differences were smaller than for perception. Production differences were not found among the three proficiency subgroups. Perception on the NSCM task was compared to production on the NSIT and NRT. Weak positive correlations were found between perception and production (NSIT) for unfamiliar consonants and sounds, whereas a weak negative correlation was found for unfamiliar vowels. Several correlations were significant for perceptual performance on the NSCM task and overall production performance on the NRT: for unfamiliar consonants, unfamiliar vowels, unfamiliar sounds, consonants, vowels, and overall performance on the NSCM task. Nonetheless, no significant correlation was found between production on the NSIT and NRT. Evidently these are two very different production tasks, where immediate imitation of single syllables on the NSIT results in high performance for all groups. Findings of the present study suggest that (a) perception and production of L2 consonants differ from those of vowels; (b) perception and production of L2 sounds involve an interaction of sound type and familiarity; (c) a weak relation exists between perception and production performance for unfamiliar sounds; and (d) L2 experience generally predicts perceptual and production performance. The present study yields several conclusions. The first is that familiarity of sounds is an important influence on L2 learning, as claimed by both SLM and PAM. 
In the present study, familiar sounds were perceived and produced correctly more often than unfamiliar ones in most cases, in keeping with PAM, though experienced L2 learners (i.e., the KEB children) produced unfamiliar vowels better than familiar ones, in keeping with SLM. Nonetheless, the second conclusion is that neither SLM nor PAM consistently and thoroughly explains the results of the present study. This is because both theories assume that L1 influences the perception of L2 consonants and vowels in the same way as it influences their production. The third and fourth conclusions are two proposed arguments: that perception and production of consonants differ from those of vowels, and that sound type interacts with familiarity and L2 experience. These two arguments can best explain the current findings. These findings may help us to develop educational curricula for bilingual individuals listening to and articulating English. Further, the extensive analysis of spontaneous speech in the present study should contribute to the specification of parameters for normal language development and function in Korean-English bilingual children and adults.
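The NSCM analysis above boils down to reading identification rates off a stimulus-by-response confusion matrix and aggregating them by sound category. A minimal sketch of that computation follows; the four sounds and all counts are invented for illustration (the study's actual stimuli were 30 CV nonsense syllables over 13 sounds):

```python
import numpy as np

# Rows = stimulus sound, columns = response sound; counts are invented.
sounds = ["p", "k", "b", "f"]        # /p, k/ familiar; /b, f/ unfamiliar
familiar = {"p", "k"}
cm = np.array([
    [18, 1, 1, 0],
    [2, 17, 0, 1],
    [5, 0, 13, 2],
    [1, 3, 2, 14],
])

# Per-sound percent correct = diagonal over row totals.
correct = np.diag(cm) / cm.sum(axis=1)

# Aggregate by familiarity category.
fam_pct = np.mean([correct[i] for i, s in enumerate(sounds) if s in familiar])
unfam_pct = np.mean([correct[i] for i, s in enumerate(sounds) if s not in familiar])
```

With these toy counts the familiar sounds come out ahead (0.875 vs 0.675), mirroring the direction of the study's finding (b).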

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we propose a novel two-stage technique for fast and accurate recognition of online handwritten Kannada and Tamil characters. Based on the primary classifier's output and prior knowledge, the best classifier is chosen from a set of three classifiers for second-stage classification. The prior knowledge is obtained through analysis of the confusion matrix of the primary classifier, which identifies the multiple sets of confused characters. Further, studies were carried out to check the performance of the secondary classifiers in disambiguating among these confusion sets. Using this technique, we achieve an average accuracy of 92.6% for Kannada characters on the MILE lab dataset and 90.2% for Tamil characters on the HP Labs dataset.

Relevance:

60.00%

Publisher:

Abstract:

Noise is one of the main factors degrading the quality of original multichannel remote sensing data, and its presence influences classification efficiency, object detection, etc. Thus, pre-filtering is often used to remove noise and improve the solving of the final tasks of multichannel remote sensing. Recent studies indicate that the classical additive noise model is not adequate for images formed by modern multichannel sensors operating in the visible and infrared bands. However, this fact is often ignored by researchers designing noise removal methods and algorithms. Because of this, we focus on the classification of multichannel remote sensing images in the case of signal-dependent noise present in the component images. Three approaches to filtering multichannel images under the considered noise model are analysed, all based on the discrete cosine transform (DCT) in blocks. The study is carried out not only in terms of conventional filtering efficiency metrics (MSE) but also in terms of multichannel data classification accuracy (probability of correct classification, confusion matrix). The proposed classification system combines a pre-processing stage, in which a DCT-based filter processes the blocks of the multichannel remote sensing image, with the classification stage. Two modern classifiers are employed: a radial basis function neural network and support vector machines. Simulations are carried out for a three-channel image from the Landsat TM sensor. Different cases of learning are considered: using noise-free samples of the test multichannel image, the noisy multichannel image, and the pre-filtered one. It is shown that using the pre-filtered image for training produces better classification than learning from the noisy image. It is demonstrated that the best results for both groups of quantitative criteria are obtained when the proposed 3D discrete cosine transform filter equipped with a variance stabilizing transform is applied.
The classification results obtained for data pre-filtered in different ways are in agreement for both considered classifiers. Comparison of classifier performance is carried out as well. The radial basis neural network classifier is less sensitive to noise in original images, but after pre-filtering the performance of both classifiers is approximately the same.
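The variance-stabilizing step mentioned above converts signal-dependent (Poisson-like) noise into approximately unit-variance Gaussian noise, after which an ordinary additive-noise DCT filter can operate. A common choice for such a transform is the Anscombe transform; the sketch below assumes it (the abstract does not name the specific transform used):

```python
import numpy as np

# Anscombe variance-stabilizing transform and a simple algebraic
# inverse (slightly biased, but a common baseline choice).
def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0

# Round trip: the inverse recovers the original values.
x = np.array([0.0, 1.0, 10.0, 100.0])
roundtrip = inverse_anscombe(anscombe(x))

# Stabilization: Poisson noise of any moderate intensity comes out
# with variance close to 1 after the transform.
rng = np.random.default_rng(7)
noisy = rng.poisson(lam=30.0, size=200_000)
stabilized_var = anscombe(noisy).var()
```

In a pipeline like the one described, `anscombe` would be applied before the block-DCT filtering and `inverse_anscombe` after it, ahead of the classification stage.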

Relevance:

60.00%

Publisher:

Abstract:

Remotely sensed land cover maps are increasingly used as inputs into environmental simulation models whose outputs inform decisions and policy-making. Risks associated with these decisions are dependent on model output uncertainty, which is in turn affected by the uncertainty of land cover inputs. This article presents a method of quantifying the uncertainty that results from potential mis-classification in remotely sensed land cover maps. In addition to quantifying uncertainty in the classification of individual pixels in the map, we also address the important case where land cover maps have been upscaled to a coarser grid to suit the users’ needs and are reported as proportions of land cover type. The approach is Bayesian and incorporates several layers of modelling but is straightforward to implement. First, we incorporate data in the confusion matrix derived from an independent field survey, and discuss the appropriate way to model such data. Second, we account for spatial correlation in the true land cover map, using the remotely sensed map as a prior. Third, spatial correlation in the mis-classification characteristics is induced by modelling their variance. The result is that we are able to simulate posterior means and variances for individual sites and the entire map using a simple Monte Carlo algorithm. The method is applied to the Land Cover Map 2000 for the region of England and Wales, a map used as an input into a current dynamic carbon flux model.
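One building block of the Bayesian machinery described above is turning the field-survey confusion matrix into posterior distributions over misclassification probabilities. A minimal sketch of that single layer (not the full spatial model) treats each row as multinomial counts with a uniform Dirichlet prior; the 3-class counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Rows = class in the remotely sensed map, columns = true class
# according to the independent field survey (invented counts).
cm = np.array([
    [80, 15,  5],
    [10, 85,  5],
    [ 3,  7, 90],
])

# 1000 posterior draws of P(true class | mapped class) per mapped
# class: Dirichlet(counts + 1) is the posterior under a uniform prior.
draws = np.stack([rng.dirichlet(row + 1, size=1000) for row in cm], axis=1)
post_mean = draws.mean(axis=0)   # close to (row + 1) / (row.sum() + 3)
```

Feeding such draws through the rest of the model is what yields simulated posterior means and variances for sites and for the whole map.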

Relevance:

60.00%

Publisher:

Abstract:

Land cover data derived from satellites are commonly used to prescribe inputs to models of the land surface. Since such data inevitably contain errors, quantifying how uncertainties in the data affect a model's output is important. To do so, a spatial distribution of possible land cover values is required to propagate through the model's simulation. However, at large scales, such as those required for climate models, such spatial modelling can be difficult. Also, computer models often require land cover proportions at sites larger than the original map scale as inputs, and it is the uncertainty in these proportions that this article discusses. This paper describes a Monte Carlo sampling scheme that generates realisations of land cover proportions from the posterior distribution implied by a Bayesian analysis combining the spatial information in the land cover map and its associated confusion matrix. The technique is computationally simple and has been applied previously to the Land Cover Map 2000 for the region of England and Wales. This article demonstrates the ability of the technique to scale up to large (global) satellite-derived land cover maps and reports its application to the GlobCover 2009 data product. The results show that, in general, the GlobCover data possess only small biases, the largest belonging to non-vegetated surfaces. In vegetated surfaces, the most prominent area of uncertainty is Southern Africa, which represents a complex heterogeneous landscape. It is also clear from this study that greater resources need to be devoted to the construction of comprehensive confusion matrices.
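The upscaling step can be sketched as follows: given posterior class probabilities for each fine pixel inside one coarse cell, draw Monte Carlo realisations of the per-pixel classes and summarise each realisation as coarse-cell class proportions. The pixel probabilities below are synthetic; the article derives them from the map and its confusion matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_class, n_draws = 100, 3, 2000

# Synthetic posterior probability of each class at each fine pixel.
p = rng.dirichlet(np.ones(n_class), size=n_pix)        # shape (100, 3)

# Inverse-CDF sampling of one class per pixel per realisation.
u = rng.random((n_draws, n_pix))
cls = (u[:, :, None] > p.cumsum(axis=1)[None, :, :]).sum(axis=2)

# Class proportions of the coarse cell for every realisation.
props = np.stack([(cls == k).mean(axis=1) for k in range(n_class)], axis=1)
prop_mean = props.mean(axis=0)   # approaches the mean pixel probability
```

The spread of `props` across realisations is exactly the proportion uncertainty the article reports for each coarse site.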

Relevance:

60.00%

Publisher:

Abstract:

In this paper, hidden Markov models (HMMs) are studied for spike sorting. We observe that HMM state sequences can represent spikes precisely and concisely. We build an HMM for spikes in which the states correspond to significant variations in spike shape. Four shape variations are introduced: silence, going up, going down, and peak. They constitute every spike, with an underlying probabilistic dependence that is modelled by the HMM. Based on this representation, spike sorting becomes a classification problem over compact HMM state sequences. In addition, we enhance the method by defining the HMM on extracted cepstrum features, which improves the accuracy of spike sorting. Simulation results demonstrate both the effectiveness and the efficiency of the proposed method.
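A toy version of the four-state spike HMM can be decoded with the Viterbi algorithm. All transition and emission probabilities below are invented, and the observation is simply the discretised sample-to-sample difference ({fall, flat, rise}), so this is a sketch of the representation rather than the paper's model:

```python
import numpy as np

STATES = ["silence", "up", "peak", "down"]
OBS = {"fall": 0, "flat": 1, "rise": 2}

log = np.log
A = log(np.array([          # transition probabilities (invented)
    [0.8, 0.2, 0.0, 0.0],   # silence -> silence / up
    [0.0, 0.6, 0.4, 0.0],   # up      -> up / peak
    [0.0, 0.0, 0.2, 0.8],   # peak    -> peak / down
    [0.3, 0.0, 0.0, 0.7],   # down    -> silence / down
]) + 1e-12)
B = log(np.array([          # emission probabilities per state (invented)
    [0.10, 0.80, 0.10],     # silence: mostly flat
    [0.05, 0.15, 0.80],     # up: mostly rising
    [0.30, 0.40, 0.30],     # peak: turning over
    [0.80, 0.15, 0.05],     # down: mostly falling
]))
pi = log(np.array([0.97, 0.01, 0.01, 0.01]))

def viterbi(obs):
    T, N = len(obs), len(STATES)
    delta = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    delta[0] = pi + B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + A
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [STATES[s] for s in reversed(path)]

# A spike-like observation sequence: flat, two rises, two falls, flat.
seq = [OBS[o] for o in ["flat", "rise", "rise", "fall", "fall", "flat"]]
decoded = viterbi(seq)
```

The decoded state sequence (silence, up, peak, down, down, silence for this input) is the compact representation on which spikes would then be compared and sorted.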

Relevance:

60.00%

Publisher:

Abstract:

The recent upsurge in microbial genome data has revealed that hemoglobin-like (HbL) proteins may be widely distributed among bacteria and that some organisms may carry more than one HbL-encoding gene. However, the discovery of HbL proteins has been limited to a small number of bacteria. This study describes the prediction of HbL proteins and their domain classification using a machine learning approach. Support vector machine (SVM) models were developed for predicting HbL proteins based upon amino acid composition (AC), dipeptide composition (DC), a hybrid method (AC + DC), and position-specific scoring matrix (PSSM). In addition, we introduce for the first time a new prediction method based on max-to-min amino acid residue (MM) profiles. The average accuracy, standard deviation (SD), false positive rate (FPR), confusion matrix, and receiver operating characteristic (ROC) were analyzed. We also compared the performance of our proposed models on homology detection databases. The performance of the different approaches was estimated using fivefold cross-validation. Prediction accuracy was further investigated through confusion matrix and ROC curve analysis. All experimental results indicate that the proposed BacHbpred can be a promising predictor for the determination of HbL-related proteins. BacHbpred, a web tool, has been developed for HbL prediction.
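The AC and DC features that feed the SVM models are simple to compute: a sequence becomes a 20-dimensional vector of residue frequencies (AC) or a 400-dimensional vector of ordered residue-pair frequencies (DC). A sketch with a made-up toy sequence:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """20-dim amino acid composition: residue frequency."""
    counts = np.array([seq.count(a) for a in AMINO_ACIDS], dtype=float)
    return counts / len(seq)

def dipeptide_composition(seq):
    """400-dim dipeptide composition: frequency of ordered pairs."""
    idx = {a: i for i, a in enumerate(AMINO_ACIDS)}
    v = np.zeros((20, 20))
    for a, b in zip(seq, seq[1:]):
        v[idx[a], idx[b]] += 1
    return (v / (len(seq) - 1)).ravel()

ac = aa_composition("ACDA")       # toy sequence, not a real protein
dc = dipeptide_composition("ACDA")
```

The hybrid (AC + DC) representation is simply the concatenation of the two vectors; each feature vector then becomes one training example for the SVM.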

Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Publisher:

Abstract:

Pós-graduação em Engenharia Elétrica - FEIS

Relevance:

60.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

60.00%

Publisher:

Abstract:

This study uses several measures derived from the error matrix to compare two thematic maps generated from the same sample set. The reference map was generated with all the sample elements, and the model map was generated without the two points detected as influential by local influence diagnostics. The data refer to wheat productivity in a 13.55 ha agricultural area, sampled on a 50 x 50 m grid comprising 50 georeferenced sample elements. The comparison measures derived from the error matrix indicated that, despite some similarity, the maps are different. The difference between the production estimated by the reference map and the actual production was 350 kilograms; the same difference calculated with the model map was 50 kilograms. This indicates that the study of influential points is of fundamental importance for obtaining a more reliable estimate, and that measures obtained from the error matrix are a good option for comparing thematic maps.
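Two standard measures derived from an error matrix (here, the cross-tabulation of the two maps' classes) are overall agreement and Cohen's kappa; the abstract does not list its exact measure set, so these are assumed representatives, and the 3x3 counts are invented:

```python
import numpy as np

def overall_accuracy(m):
    """Fraction of sites on which the two maps agree."""
    return np.trace(m) / m.sum()

def kappa(m):
    """Cohen's kappa: agreement corrected for chance."""
    n = m.sum()
    po = np.trace(m) / n                                   # observed
    pe = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2      # expected
    return (po - pe) / (1 - pe)

# Invented cross-tabulation of two 3-class thematic maps.
m = np.array([
    [20,  3,  2],
    [ 2, 15,  3],
    [ 1,  2, 12],
])
oa, k = overall_accuracy(m), kappa(m)
```

For these counts the maps agree on 47 of 60 sites (overall accuracy about 0.78) but kappa is lower (53/79, about 0.67), because part of that agreement is expected by chance.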


Relevance:

60.00%

Publisher:

Abstract:

Acquired brain injury (ABI) is one of the leading causes of death and disability in the world and is associated with high health care costs as a result of the acute treatment and long-term rehabilitation involved. Different algorithms and methods have been proposed to predict the effectiveness of rehabilitation programs. In general, research has focused on predicting the overall improvement of patients with ABI. The purpose of this study is the novel application of data mining (DM) techniques to predict the outcomes of cognitive rehabilitation in patients with ABI. We generate three predictive models that allow us to obtain new knowledge with which to evaluate and improve the effectiveness of the cognitive rehabilitation process. A decision tree (DT), a multilayer perceptron (MLP), and a general regression neural network (GRNN) were used to construct the prediction models. Ten-fold cross-validation was carried out to test the algorithms, using the Institut Guttmann Neurorehabilitation Hospital (IG) patient database. Performance of the models was assessed through specificity, sensitivity, and accuracy analysis and through confusion matrix analysis. The experimental results obtained by the DT are clearly superior, with an average prediction accuracy of 90.38%, while the MLP and GRNN obtained 78.7% and 75.96%, respectively. This study increases our knowledge of the factors contributing to the recovery of patients with ABI and makes it possible to estimate treatment efficacy in individual patients.
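The three evaluation measures named above all read directly off a 2x2 confusion matrix. A minimal sketch, with invented counts rather than the IG database results:

```python
def binary_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity and accuracy from the four cells of a
    binary confusion matrix (tp = true positive, etc.)."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fn + fp + tn)   # overall correct
    return sensitivity, specificity, accuracy

# Invented counts for illustration.
sens, spec, acc = binary_metrics(tp=45, fn=5, fp=8, tn=42)
```

Reporting all three matters because accuracy alone can hide a model that favours one class; sensitivity and specificity expose that imbalance.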

Relevance:

60.00%

Publisher:

Abstract:

Diabetes is among the most common diseases worldwide, across all populations and age groups. Diabetes contributes to heart disease and increases the risk of developing kidney disease, blindness, nerve damage, and blood vessel damage. Diagnosing diabetes through proper interpretation of diabetes data is an important classification problem. Different artificial intelligence techniques have been applied to this problem. The purpose of this study is to apply artificial metaplasticity on a multilayer perceptron (AMMLP) as a data mining (DM) technique for diabetes diagnosis. The Pima Indians diabetes dataset was used to test the proposed AMMLP model. The results obtained by the AMMLP were compared with those of a decision tree (DT), a Bayesian classifier (BC), and other algorithms recently proposed by other researchers and applied to the same database. The robustness of the algorithms was examined using classification accuracy, sensitivity and specificity analysis, and the confusion matrix. The results obtained by the AMMLP are superior to those obtained by the DT and BC.

Relevance:

60.00%

Publisher:

Abstract:

Being able to accurately classify the application that generates each traffic flow within a network gives companies and organisations a useful tool for managing network resources, as well as the ability to establish policies that block or prioritise specific traffic. The proliferation of new applications and new techniques has made it harder to detect applications using the well-known port values assigned by the IANA (Internet Assigned Numbers Authority). P2P (peer-to-peer) networks, the use of unknown or random ports, and the masquerading of many applications' traffic as HTTP and HTTPS in order to traverse firewalls and NATs (Network Address Translation), among other practices, create the need for new traffic detection methods. The aim of this study is to develop a set of practices that perform this task using techniques that go beyond observing ports and other well-known values. Several families of methods exist. Deep Packet Inspection (DPI) searches for signatures, based on patterns in packet contents, including the payload, that characterise each application. Machine learning methods apply statistical analysis to flow parameters in order to determine which application the flows are likely to belong to. Finally, more heuristic techniques rely on intuition or the researcher's own knowledge of network traffic. Specifically, this study proposes combining some of the above techniques with data mining methods, namely Principal Component Analysis (PCA) and clustering of statistics extracted from the flows in network traffic capture files. This involves configuring several parameters through an iterative trial-and-error process in order to arrive at a reliable traffic classification. The ideal result would identify each application present in the traffic in a distinct cluster, or in clusters grouping applications of a similar nature. To this end, traffic captures are created in a controlled environment, with each capture labelled with its corresponding application, and the flows are then extracted from these captures. Next, selected parameters of the packets belonging to those flows are obtained, such as the arrival date and time or the length in octets of the IP packet. These parameters are loaded into a MySQL database and used to compute statistics that support, in a following step, a classification of the flows by data mining. Concretely, PCA and clustering are applied using the RapidMiner software. Finally, the results are presented in a confusion matrix, which allows them to be properly evaluated.
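The PCA-plus-clustering back end described above can be sketched in a few lines. The study uses RapidMiner; this reproduces the two steps with NumPy only, on synthetic flows from two made-up "applications" whose three features stand in for statistics such as mean packet length, mean inter-arrival time, and packet count:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "applications", well separated in feature space.
flows_a = rng.normal([1500.0, 0.01, 40.0], 0.5, size=(50, 3))
flows_b = rng.normal([200.0, 0.50, 6.0], 0.5, size=(50, 3))
X = np.vstack([flows_a, flows_b])

# PCA via SVD of the centred data; keep the top 2 components.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ vt[:2].T

# Plain k-means (k = 2) on the PCA scores, initialised with one flow
# from each capture for simplicity (a real run would use restarts).
centres = Z[[0, 50]]
for _ in range(20):
    labels = ((Z[:, None] - centres[None]) ** 2).sum(-1).argmin(axis=1)
    centres = np.array([Z[labels == k].mean(axis=0) for k in range(2)])

# Confusion matrix: true application (rows) vs assigned cluster (cols).
cmat = np.zeros((2, 2), dtype=int)
for t, l in zip(np.repeat([0, 1], 50), labels):
    cmat[t, l] += 1
```

With well-separated applications the confusion matrix comes out diagonal; on real traffic, its off-diagonal cells reveal which applications the clustering cannot tell apart, guiding the iterative parameter tuning the study describes.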