995 results for news extraction
Abstract:
Speech signals are one of the most important means of communication among human beings. In this paper, a comparative study of two feature extraction techniques is carried out for recognizing speaker-independent spoken isolated words. The first is a hybrid approach combining Linear Predictive Coding (LPC) and Artificial Neural Networks (ANN); the second uses a combination of Wavelet Packet Decomposition (WPD) and Artificial Neural Networks. Voice signals are sampled directly from the microphone and then processed using these two techniques to extract the features. Words from Malayalam, one of the four major Dravidian languages of southern India, are chosen for recognition. Training, testing and pattern recognition are performed using Artificial Neural Networks, trained with the backpropagation method. The proposed method is implemented for 50 speakers uttering 20 isolated words each. Both methods produce good recognition accuracy, but Wavelet Packet Decomposition is found to be more suitable for recognizing speech because of its multi-resolution characteristics and efficient time-frequency localization.
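As a rough illustration of the WPD-plus-ANN route described in this abstract, the sketch below extracts normalised sub-band energies from a speech sample with PyWavelets and trains a backpropagation ANN via scikit-learn's MLPClassifier; the wavelet choice, decomposition level, network size and variable names are assumptions, not the authors' settings.

```python
# Minimal sketch (not the authors' implementation) of wavelet-packet feature
# extraction followed by an ANN classifier. Speech samples are assumed to be
# already loaded as 1-D NumPy arrays.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wpd_energy_features(signal, wavelet="db4", level=4):
    """Energy of each terminal wavelet-packet node, used as the feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return energies / (energies.sum() + 1e-12)      # normalise to unit total energy

def train_word_recogniser(X_train, y_train):
    """X_train: list of 1-D speech arrays; y_train: word labels (hypothetical names)."""
    feats = np.vstack([wpd_energy_features(x) for x in X_train])
    ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
    ann.fit(feats, y_train)                          # trained with backpropagation internally
    return ann
```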
Abstract:
Speech processing and consequent recognition are important areas of Digital Signal Processing, since speech allows people to communicate more naturally and efficiently. In this work, a speech recognition system is developed for recognizing digits in Malayalam. For recognizing speech, features must be extracted from the signal, so the feature extraction method plays an important role in speech recognition. Here, front-end processing for extracting the features is performed using two wavelet-based methods, namely Discrete Wavelet Transforms (DWT) and Wavelet Packet Decomposition (WPD). A Naive Bayes classifier is used for classification. With the Naive Bayes classifier, DWT produced a recognition accuracy of 83.5% and WPD an accuracy of 80.7%. This paper aims to devise a new feature extraction method that improves the recognition accuracy. A new method called Discrete Wavelet Packet Decomposition (DWPD) is therefore introduced, which combines the hybrid features of both DWT and WPD. The performance of this new approach is evaluated, and it produced an improved recognition accuracy of 86.2% with the Naive Bayes classifier.
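A hedged sketch of the DWT-plus-Naive-Bayes pipeline described above: per-sub-band statistics are computed with pywt.wavedec and fed to scikit-learn's GaussianNB. The wavelet, decomposition level and statistics chosen here are illustrative, not the paper's exact configuration.

```python
# Illustrative DWT feature extraction + Naive Bayes classification sketch.
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB

def dwt_band_features(signal, wavelet="db4", level=4):
    """Mean, standard deviation and energy of each DWT sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:
        feats.extend([band.mean(), band.std(), np.sum(band ** 2)])
    return np.array(feats)

def train_digit_classifier(digits_train, labels_train):
    """digits_train: list of 1-D sample arrays; labels_train: digit labels (hypothetical names)."""
    X = np.vstack([dwt_band_features(d) for d in digits_train])
    clf = GaussianNB()
    clf.fit(X, labels_train)
    return clf
```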
Abstract:
Speech is a natural mode of communication for people, and speech recognition is an intensive area of research due to its versatile applications. This paper presents a comparative study of various wavelet-based feature extraction methods for recognizing isolated spoken words. Isolated words from Malayalam, one of the four major Dravidian languages of southern India, are chosen for recognition. This work includes two speech recognition methods: the first is a hybrid approach with Discrete Wavelet Transforms and Artificial Neural Networks, and the second uses a combination of Wavelet Packet Decomposition and Artificial Neural Networks. Features are extracted using Discrete Wavelet Transforms (DWT) and Wavelet Packet Decomposition (WPD). Training, testing and pattern recognition are performed using Artificial Neural Networks (ANN). The proposed method is implemented for 50 speakers uttering 20 isolated words each. The experimental results show the efficiency of these techniques in recognizing speech.
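To make the comparison protocol concrete, the sketch below trains the same ANN on two pre-computed feature matrices (one per extraction method) and reports test accuracy for each; the variable names and the 80/20 split are assumptions, not details taken from the paper.

```python
# Sketch of comparing two feature-extraction methods with an identical ANN.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def compare_feature_sets(X_dwt, X_wpd, y, seed=0):
    """X_dwt, X_wpd: (n_samples, n_features) matrices; y: word labels."""
    results = {}
    for name, X in {"DWT": X_dwt, "WPD": X_wpd}.items():
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=seed)
        ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=seed)
        ann.fit(X_tr, y_tr)
        results[name] = accuracy_score(y_te, ann.predict(X_te))
    return results  # accuracy per feature-extraction method
```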
Abstract:
Cerebral glioma is the most prevalent primary brain tumor; gliomas are classified broadly into low and high grades according to the degree of malignancy. High-grade gliomas are highly malignant and carry a poor prognosis, with patients surviving less than eighteen months after diagnosis. Low-grade gliomas are slow growing, least malignant and have a better response to therapy. To date, histological grading is used as the standard technique for diagnosis, treatment planning and survival prediction. The main objective of this thesis is to propose novel methods for automatic extraction of low- and high-grade glioma and other brain tissues, grade detection techniques for glioma using conventional magnetic resonance imaging (MRI) modalities, and 3D modelling of glioma from segmented tumor slices in order to assess the growth rate of tumors. Two new methods are developed for extracting tumor regions, of which the second, named the Adaptive Gray level Algebraic set Segmentation Algorithm (AGASA), can also extract white matter and grey matter from T1 FLAIR and T2 weighted images. The methods were validated against manual ground truth images and showed promising results. The developed methods were compared with the widely used Fuzzy c-means clustering technique, and the robustness of the algorithm with respect to noise was also checked for different noise levels. Image texture can provide significant information on the (ab)normality of tissue, and this thesis expands this idea to tumour texture grading and detection. Based on thresholds of discriminant first-order and gray level co-occurrence matrix based second-order statistical features, three feature sets were formulated and a decision system was developed for grade detection of glioma from the conventional T2 weighted MRI modality. The quantitative performance analysis using the ROC curve showed 99.03% accuracy for distinguishing between advanced (aggressive) and early stage (non-aggressive) malignant glioma. The developed brain texture analysis techniques can improve the physician's ability to detect and analyse pathologies, leading to a more reliable diagnosis and treatment of disease. The segmented tumors were also used for volumetric modelling, which can give an idea of the growth rate of the tumor; this can be used for assessing response to therapy and patient prognosis.
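The sketch below illustrates the kind of first-order and gray level co-occurrence matrix (GLCM) second-order texture features such a decision system builds on, using scikit-image; the specific feature list and the threshold rule are illustrative assumptions, not the thesis' actual feature sets or decision values.

```python
# Illustrative first-order + GLCM texture features for a tumour ROI.
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops

def texture_features(roi):
    """roi: 2-D uint8 array of the tumour region from a T2-weighted slice."""
    first_order = [roi.mean(), roi.std(), skew(roi.ravel()), kurtosis(roi.ravel())]
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    second_order = [graycoprops(glcm, p).mean()
                    for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array(first_order + second_order)

def grade_glioma(features, contrast_threshold=50.0):
    # Illustrative rule only: a real decision system would use thresholds
    # learned from the discriminant features reported in the thesis.
    return "high grade" if features[4] > contrast_threshold else "low grade"
```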
Abstract:
Pseudomonas aeruginosa MCCB 123 was grown in a synthetic medium for β-1,3 glucanase production. From the culture filtrate, a β-1,3 glucanase with a molecular mass of 45 kDa was purified. The enzyme is a metalloenzyme, as its β-1,3 glucanase activity was inhibited by the metal chelator EDTA. The optimum pH and temperature for β-1,3 glucanase activity on laminarin were found to be 7 and 50 °C, respectively. The MCCB 123 β-1,3 glucanase showed good lytic action on a wide range of fungal isolates, and hence its application in fungal DNA extraction was evaluated. β-1,3 glucanase purified from the culture supernatant of P. aeruginosa MCCB 123 could be used for the extraction of fungal DNA without the addition of the other reagents generally used. The optimum pH and temperature of the enzyme for fungal DNA extraction were found to be 7 and 65 °C, respectively. This is the first report of a β-1,3 glucanase employed in fungal DNA extraction.
Abstract:
Newspapers cover a large amount of information every day on topics of varied interest. To a university, newspapers are essential components of communication, as they cover various happenings in the university. These items of information are neither stored properly nor put in retrieval systems for future use. The news and views that appear in newspapers can effectively be organized in a digital library using open source software. The CUSAT digital library (http://dspace.cusat.ac.in/dspace/) has organized news items that appeared in local newspapers about the university under a special community named “CUSAT-News”. This article describes the methods of collecting, selecting, organizing, providing access to and preserving the news items required by a university using the DSpace open source software.
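For readers who want to pull the “CUSAT-News” metadata programmatically, the hedged sketch below harvests record titles over OAI-PMH, a standard protocol that DSpace exposes alongside its web interface; the endpoint URL and the set specifier are assumptions that would need to be checked against the repository's actual configuration.

```python
# Minimal OAI-PMH harvest sketch (endpoint and set spec are assumptions).
import requests
import xml.etree.ElementTree as ET

OAI_ENDPOINT = "http://dspace.cusat.ac.in/dspace-oai/request"   # assumed location
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest_titles(set_spec=None):
    """Return the Dublin Core titles of harvested records."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    if set_spec:
        params["set"] = set_spec        # e.g. the CUSAT-News community set (assumed)
    response = requests.get(OAI_ENDPOINT, params=params, timeout=30)
    root = ET.fromstring(response.content)
    return [t.text for t in root.iter(DC + "title")]
```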
Abstract:
Efficient optic disc segmentation is an important task in automated retinal screening, and optic disc detection is fundamental for medical reference and for retinal image analysis applications. The most difficult problem in optic disc extraction is locating the region of interest, which is also a time-consuming task. This paper addresses this barrier by presenting an automated method for optic disc boundary extraction using Fuzzy C-Means clustering combined with thresholding. The discs determined by the new method agree relatively well with those determined by experts. The method has been validated on a data set of 110 colour fundus images from the DRION database and has obtained promising results. The performance of the system is evaluated using the difference between the horizontal and vertical diameters of the obtained disc boundary and those of the ground truth obtained from two expert ophthalmologists. For the 25 test images selected from the 110 colour fundus images, the Pearson correlations of the ground truth diameters with the diameters detected by the new method are 0.946 and 0.958, and 0.94 and 0.974, respectively. The scatter plot shows that the ground truth and detected diameters have a high positive correlation. This computerized analysis of the optic disc is very useful for the diagnosis of retinal diseases.
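A minimal sketch of the clustering-plus-thresholding idea: pixel intensities are clustered with a small hand-rolled fuzzy c-means, and the membership map of the brightest cluster is thresholded to obtain a rough disc mask. The channel choice, number of clusters and threshold are assumptions, not the paper's parameters.

```python
# Hand-rolled fuzzy c-means on pixel intensities + membership thresholding.
import numpy as np

def fuzzy_cmeans_1d(x, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Fuzzy c-means on a 1-D array x; returns cluster centres and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                                   # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centres = um @ x / um.sum(axis=1)                # weighted cluster centres
        d = np.abs(x[None, :] - centres[:, None]) + 1e-12
        new_u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centres, u

def optic_disc_mask(green_channel, membership_threshold=0.8):
    """green_channel: 2-D float array; the disc is usually the brightest structure."""
    x = green_channel.ravel().astype(float)
    centres, u = fuzzy_cmeans_1d(x)
    brightest = int(np.argmax(centres))
    return u[brightest].reshape(green_channel.shape) > membership_threshold
```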
Abstract:
To study the complex formation of group 5 elements (Nb, Ta, Ha, and the pseudoanalog Pa) in aqueous HCl solutions of medium and high concentrations, the electronic structures of the anionic complexes of these elements [MCl_6]^-, [MOCl_4]^-, [M(OH)_2Cl_4]^-, and [MOCl_5]^2- have been calculated using the relativistic Dirac-Slater Discrete-Variational Method. The charge density distribution analysis has shown that tantalum occupies a specific position in the group and has the highest tendency to form the pure halide complex, [TaCl_6]^-. This fact, along with the high covalency of this complex, explains its good extractability into aliphatic amines. Niobium has equal tendencies to form the pure halide [NbCl_6]^- and oxyhalide [NbOCl_5]^2- species at medium and high acid concentrations. Protactinium has a slight preference for the [PaOCl_5]^2- form or for pure halide complexes with coordination number higher than 6 under these conditions. At high HCl concentrations, element 105 will prefer to form the oxyhalide anionic complex [HaOCl_5]^2- rather than [HaCl_6]^-. For the same sort of anionic oxychloride complexes, an estimate has been made of their partitioning between the organic and aqueous phases in extraction by aliphatic amines, which shows the following order of the partition coefficients: P_Nb < P_Ha < P_Pa.
Abstract:
The R package “compositions” is a tool for advanced compositional analysis. Its basic functionality has seen some conceptual improvement, now containing facilities to work with and represent ilr bases built from balances, and an elaborated subsystem for dealing with several kinds of irregular data: (rounded or structural) zeroes, incomplete observations and outliers. The general approach to these irregularities is based on subcompositions: for an irregular datum, one can distinguish a “regular” subcomposition (where all parts are actually observed and the datum behaves typically) and a “problematic” subcomposition (with the unobserved, zero or rounded parts, or else where the datum shows an erratic or atypical behaviour). Systematic classification schemes are proposed for both outliers and missing values (including zeros), focusing on the nature of irregularities in the datum subcomposition(s). To compute statistics with values missing at random and structural zeros, a projection approach is implemented: a given datum contributes to the estimation of the desired parameters only on the subcomposition where it was observed. For data sets with values below the detection limit, two different approaches are provided: the well-known imputation technique, and also the projection approach. To compute statistics in the presence of outliers, robust statistics are adapted to the characteristics of compositional data, based on the minimum covariance determinant approach. The outlier classification is based on four different models of outlier occurrence and Monte-Carlo-based tests for their characterization. Furthermore, the package provides special plots helping to understand the nature of outliers in the dataset. Keywords: coda-dendrogram, lost values, MAR, missing data, MCD estimator, robustness, rounded zeros
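The package itself is written in R; purely to illustrate the projection idea in a runnable form, the Python sketch below estimates a compositional centre while letting each datum contribute only through the parts it actually reports (NaN marks a missing part). The real “compositions” implementation is considerably more elaborate.

```python
# Simplified illustration of the projection approach for missing parts.
import numpy as np

def closure(x):
    """Rescale a composition (or rows of compositions) to sum to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum(axis=-1, keepdims=True)

def projected_centre(X):
    """X: (n, D) compositions with np.nan for unobserved parts.

    The per-part geometric mean uses only the samples in which that part was
    observed; the result is then closed back onto the simplex.
    """
    logX = np.log(X)                       # NaNs propagate through the log
    log_means = np.nanmean(logX, axis=0)   # each part averages only its observed values
    return closure(np.exp(log_means))

# Example: the third sample misses part B (NaN) but still informs parts A and C.
X = np.array([[0.2, 0.3, 0.5],
              [0.1, 0.4, 0.5],
              [0.3, np.nan, 0.7]])
print(projected_centre(X))
```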
Abstract:
This study analyses the role played by the Fox News Channel in the design of United States foreign policy with respect to the initiative for the recognition of Palestine before the United Nations in September 2011. To that end, a theoretical framework was articulated that explains the importance of the mass media in the design of foreign policy and how they can come to influence this process.
Abstract:
Real-time geoparsing of social media streams (e.g. Twitter, YouTube, Instagram, Flickr, FourSquare) is providing a new 'virtual sensor' capability to end users such as emergency response agencies (e.g. tsunami early warning centres, civil protection authorities) and news agencies (e.g. Deutsche Welle, BBC News). Challenges in this area include scaling up natural language processing (NLP) and information retrieval (IR) approaches to handle real-time traffic volumes, reducing false positives, creating real-time infographic displays useful for effective decision support, and providing support for trust and credibility analysis using geosemantics. In this seminar I will present on-going work by the IT Innovation Centre over the last 4 years (the TRIDEC and REVEAL FP7 projects) in building such systems, and highlight our research towards improving the trustworthiness and credibility of crisis map displays and real-time analytics for trending topics and influential social networks during major newsworthy events.
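A minimal, hedged sketch of the geoparsing step such a pipeline needs: place-name entities are extracted from a post with spaCy and resolved against a gazetteer. Both the NLP model and the toy gazetteer are illustrative stand-ins, not components of the TRIDEC/REVEAL systems.

```python
# Toy geoparsing sketch: NER for place names + gazetteer lookup.
import spacy

nlp = spacy.load("en_core_web_sm")          # small English model, assumed installed

GAZETTEER = {                               # toy gazetteer: name -> (lat, lon)
    "Jakarta": (-6.2088, 106.8456),
    "Lisbon": (38.7223, -9.1393),
}

def geoparse(post_text):
    """Return (place name, coordinates) pairs found in a social-media post."""
    doc = nlp(post_text)
    hits = []
    for ent in doc.ents:
        if ent.label_ in ("GPE", "LOC") and ent.text in GAZETTEER:
            hits.append((ent.text, GAZETTEER[ent.text]))
    return hits

print(geoparse("Strong shaking felt in Jakarta a few minutes ago"))
```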
Abstract:
Bay 9 are hoping to pioneer a way to encourage postgrads and staff in the lab to get over the fear of presenting their work to the group. The members of the bay will each give a 6m40s Pecha Kucha explaining their current research work through pictures. The topics of the Pecha Kuchas are:
- Citizen Participation in News: An analysis of the landscape of online journalism (Jonny)
- Argumentation on the Social Web (Tom)
- From Narrative Systems to Ubiquitous Computing for Psychology - and everything in between (Charlie)
- Is it worth sharing user model data? (Rikki)
Abstract:
Abstract taken from the publication. Includes computer screenshots of the applications in question.
Abstract:
The aim of this paper is to introduce teachers in the area of language and literature, and the members of the Sociedad Española de Didáctica de la Lengua y la Literatura, to some basic notions about the possibilities offered by the Internet: electronic mail, newsgroups, file transfer and the World Wide Web (WWW).