100 results for Automatic Visual Word Dictionary Calculation
Abstract:
Language Resources are a critical component for Natural Language Processing applications. Over the years, many resources have been manually created for the same task, but with different granularity and coverage of information. To create richer resources for a broad range of potential reuses, the information from all the resources has to be joined into one. The high cost of comparing and merging different resources by hand has been a bottleneck for merging existing resources. With the objective of reducing human intervention, we present a new method for automating the merging of resources. We have addressed the merging of two verb subcategorization frame (SCF) lexica for Spanish. The results achieved, a new lexicon with enriched information and conflicting information signalled, reinforce our idea that this approach can be applied to other NLP tasks.
Abstract:
This article reports on the results of research done towards the fully automatic merging of lexical resources. Our main goal is to show the generality of the proposed approach, which has previously been applied to merge Spanish subcategorization frame lexica. In this work we extend and apply the same technique to perform the merging of morphosyntactic lexica encoded in LMF. The experiments showed that the technique is general enough to obtain good results in these two different tasks, which is an important step towards performing the merging of lexical resources fully automatically.
Abstract:
The work we present here addresses cue-based noun classification in English and Spanish. Its main objective is to automatically acquire lexical semantic information by classifying nouns into previously known lexical classes. This is achieved by using particular aspects of linguistic contexts as cues that identify a specific lexical class. Here we concentrate on the task of identifying such cues and on the theoretical background that allows for an assessment of the complexity of the task. The results show that, despite the a priori complexity of the task, cue-based classification is a useful tool for the automatic acquisition of lexical semantic classes.
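A minimal Python sketch of the cue-based idea, counting matches of context patterns per class; the cue regexes and class names here are illustrative assumptions, not the cues actually used in the paper:

```python
import re
from collections import Counter

# Hypothetical context cues: each regex is assumed to signal one lexical
# class when it matches around the target noun (illustrative only).
CUES = {
    "event": [r"\b(?:during|after|before) the {noun}\b",
              r"\bthe {noun} (?:took place|lasted)\b"],
    "human": [r"\bthe {noun} (?:who|said|thinks)\b"],
}

def classify_noun(noun: str, corpus: str) -> str:
    """Count cue matches per candidate class and return the best-scoring one."""
    scores = Counter()
    for cls, patterns in CUES.items():
        for pat in patterns:
            scores[cls] += len(re.findall(pat.format(noun=re.escape(noun)), corpus, re.I))
    if sum(scores.values()) == 0:
        return "unknown"
    return scores.most_common(1)[0][0]

print(classify_noun("meeting", "After the meeting ended... The meeting took place at noon."))
# -> event
```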
Abstract:
The automatic creation of polarity lexicons is a crucial issue to be solved in order to reduce the time and effort spent in the first steps of Sentiment Analysis. In this paper we present a methodology based on linguistic cues that allows us to automatically discover, extract and label subjective adjectives that should be collected in a domain-based polarity lexicon. For this purpose, we designed a bootstrapping algorithm that, from a small set of seed polar adjectives, is capable of iteratively identifying, extracting and annotating positive and negative adjectives. Additionally, the method automatically creates lists of highly subjective elements that change their prior polarity even within the same domain. The proposed algorithm reached a precision of 97.5% for positive adjectives and 71.4% for negative ones in the semantic orientation identification task.
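A minimal sketch of such a bootstrapping loop, assuming hypothetical coordination cues ("X and Y" suggests the same polarity, "X but Y" the opposite one); the seed words, cue handling and stopping criterion are illustrative only, not the paper's actual linguistic cues:

```python
import re

def bootstrap_polarity(corpus: str, pos_seeds: set, neg_seeds: set, iterations: int = 5):
    """Grow polarity lexicons from seed adjectives using coordination cues:
    'X and Y' suggests the same polarity, 'X but Y' the opposite one."""
    pos, neg = set(pos_seeds), set(neg_seeds)
    for _ in range(iterations):
        grew = False
        # The lookahead lets consecutive, overlapping coordinations match.
        for a, conj, b in re.findall(r"(\w+) (and|but) (?=(\w+))", corpus.lower()):
            for x, y in ((a, b), (b, a)):
                if x in pos:
                    target = pos if conj == "and" else neg
                elif x in neg:
                    target = neg if conj == "and" else pos
                else:
                    continue
                if y not in pos and y not in neg:
                    target.add(y)
                    grew = True
        if not grew:
            break  # stop when an iteration adds nothing new
    return pos, neg

pos, neg = bootstrap_polarity("clean and comfortable but noisy rooms", {"clean"}, set())
print(pos, neg)  # -> {'clean', 'comfortable'} {'noisy'}
```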
Abstract:
Lexical Resources are a critical component for Natural Language Processing applications. However, the high cost of comparing and merging different resources has been a bottleneck to having richer resources with a broad range of potential uses for a significant number of languages. With the objective of reducing cost by eliminating human intervention, we present a new method for automating the merging of resources, with special emphasis on what we call the mapping step. This step, which converts the resources into a common format that later allows the merging, is usually performed with a huge manual effort and thus makes the whole process very costly. We therefore propose a method to perform this mapping fully automatically. To test our method, we have addressed the merging of two verb subcategorization frame lexica for Spanish. The results achieved, which almost replicate human work, demonstrate the feasibility of the approach.
Abstract:
In this work we present the results of experimental work on the development of lexical-class-based lexica by automatic means. Our purpose is to assess the use of linguistic lexical-class-based information as a feature selection methodology for classifiers in rapid lexical development. The results show that the approach can significantly reduce the human effort required in the development of language resources.
Abstract:
Lexical Resources are a critical component for Natural Language Processing applications. However, the high cost of comparing and merging different resources has been a bottleneck to obtaining richer resources with a broader range of potential uses for a significant number of languages. With the objective of reducing cost by eliminating human intervention, we present a new method towards the automatic merging of resources. This method includes both the automatic mapping of the resources involved to a common format and their merging once in this format. This paper presents how we have addressed the merging of two verb subcategorization frame lexica for Spanish, but our method will be extended to cover other types of Lexical Resources. The results achieved, which almost replicate human work, demonstrate the feasibility of the approach.
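A minimal sketch of the merging step, under the assumption that both lexica have already been mapped to a common format (verb -> set of SCF labels); the label names and the conflict-signalling convention are illustrative, not the paper's actual schema:

```python
def merge_lexica(lex_a: dict, lex_b: dict) -> dict:
    """Merge two verb -> set-of-SCFs lexica already mapped to a common format.
    Frames found in both sources are marked as agreed; frames found in only
    one source are kept but signalled as potential conflicts."""
    merged = {}
    for verb in lex_a.keys() | lex_b.keys():
        a, b = lex_a.get(verb, set()), lex_b.get(verb, set())
        merged[verb] = {
            "agreed": a & b,   # attested in both resources
            "only_a": a - b,   # signalled: needs validation
            "only_b": b - a,
        }
    return merged

lex_a = {"comer": {"NP", "NP_NP"}}
lex_b = {"comer": {"NP"}, "dar": {"NP_PP"}}
print(merge_lexica(lex_a, lex_b))
```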
Abstract:
Research project carried out by secondary school students and awarded a CIRIT Prize for fostering the scientific spirit of young people in 2010. The objective of the work was to develop a computer program to control the phases of an industrial washing process.
Abstract:
The pathophysiological mechanisms underlying the appearance of visual hallucinations/hallucinosis in stroke patients, as well as their incidence, characteristics, and topographic or prognostic predictive value, remain unknown. In this work we prospectively studied 78 patients with acute ischemic/hemorrhagic stroke and no baseline neurodegenerative/psychiatric disease or previous hallucinatory symptoms, administering a standardized questionnaire on visual hallucinations/hallucinosis and performing neuroimaging. A subgroup of patients also underwent EEG and neuropsychological evaluation. The incidence of hallucinations/hallucinosis was 16.7%; most were complex images, with early onset and a self-limited course. They were associated with occipital lesions, an initial visual field defect, and sleep disturbances, among other variables.
Abstract:
Several features that can be extracted from digital images of the sky and that can be useful for cloud-type classification of such images are presented. Some features are statistical measurements of image texture, some are based on the Fourier transform of the image and, finally, others are computed from the image where cloudy pixels are distinguished from clear-sky pixels. The use of the most suitable features in an automatic classification algorithm is also shown and discussed. Both the features and the classifier are developed over images taken by two different camera devices, namely, a total sky imager (TSI) and a whole sky camera (WSC), which are placed in two different areas of the world (Toowoomba, Australia; and Girona, Spain, respectively). The performance of the classifier is assessed by comparing its image classification with an a priori classification carried out by visual inspection of more than 200 images from each camera. The index of agreement is 76% when five different sky conditions are considered: clear, low cumuliform clouds, stratiform clouds (overcast), cirriform clouds, and mottled clouds (altocumulus, cirrocumulus). Discussion of the future directions of this research, regarding both the use of other features and the use of other classification techniques, is also presented.
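A minimal Python sketch of the three kinds of features described (texture statistics, Fourier-based measures, cloud-pixel fraction), assuming a grayscale sky image as a NumPy array; the threshold and the exact feature definitions are illustrative assumptions, not the paper's:

```python
import numpy as np

def sky_features(img: np.ndarray, cloud_thresh: float = 0.6) -> dict:
    """Compute simple image features for cloud-type classification.
    img: 2-D grayscale array with values in [0, 1]."""
    spectrum = np.abs(np.fft.fft2(img)) ** 2
    return {
        # statistical texture measurements
        "mean": float(img.mean()),
        "std": float(img.std()),
        "smoothness": float(1 - 1 / (1 + img.var())),
        # energy in the Fourier spectrum outside the DC component
        "fourier_energy": float(spectrum.sum() - spectrum[0, 0]),
        # fraction of pixels labelled cloudy by a simple threshold
        "cloud_fraction": float((img > cloud_thresh).mean()),
    }

print(sky_features(np.random.default_rng(0).random((64, 64))))
```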
Abstract:
The aim of this project is to develop a Matlab-based software application to calculate the radioelectric coverage by surface wave of broadcast radio stations in the Medium Wave (MW) band anywhere in the world. In addition, given the locations of a transmitting and a receiving station, the software should be able to calculate the electric field strength that the receiver should measure at that specific site. In the case of several transmitters, the program should check for the existence of inter-symbol interference and calculate the field strength accordingly. The application should request the configuration parameters of the transmitting station through a Graphical User Interface (GUI) and display the resulting coverage over a map of the area under study. For the development of this project, several conductivity databases of different countries and a high-resolution elevation database (GLOBE) have been used. In addition, to calculate the field strength due to ground-wave propagation, the ITU GRWAVE program has been used, which had to be integrated into a Matlab interface so it could be used by the developed application.
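A minimal Python sketch of how precomputed ground-wave curves (such as those a GRWAVE run might produce) could be consulted for a given path; the curve table, its numbers, and the 1 kW reference convention are illustrative assumptions, not values or interfaces from the project:

```python
import numpy as np

# Hypothetical ground-wave curves, one per ground conductivity, as they
# might be precomputed with the ITU GRWAVE program (values are made up):
# conductivity (S/m) -> (distances in km, field strength in dBuV/m at 1 kW)
CURVES = {
    0.005: (np.array([1, 10, 100, 500]), np.array([95.0, 70.0, 40.0, 10.0])),
    5.0:   (np.array([1, 10, 100, 500]), np.array([98.0, 78.0, 58.0, 35.0])),
}

def field_strength(distance_km: float, conductivity: float, power_kw: float) -> float:
    """Interpolate field strength (dBuV/m) from the curve of the nearest
    tabulated conductivity, then correct for the transmitter power
    (field scales with the square root of power: +10*log10(P/1 kW) dB)."""
    sigma = min(CURVES, key=lambda s: abs(s - conductivity))
    d, e = CURVES[sigma]
    return float(np.interp(distance_km, d, e)) + 10 * np.log10(power_kw)

print(field_strength(50.0, 4.0, 10.0))  # sea-water-like path, 10 kW transmitter
```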
Abstract:
A density-functional self-consistent calculation of the ground-state electronic density of quantum dots under an arbitrary magnetic field is performed. We consider a parabolic lateral confining potential. The addition energy, E(N+1)-E(N), where N is the number of electrons, is compared with experimental data and the different contributions to the energy are analyzed. The Hamiltonian is modeled by a density functional, which includes the exchange and correlation interactions and the local formation of Landau levels for different equilibrium spin populations. We obtain an analytical expression for the critical density under which spontaneous polarization, induced by the exchange interaction, takes place.
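In worked-equation form, the two model ingredients named above have the standard definitions below (with the confinement strength $\omega_0$ as the model parameter):

```latex
% Addition energy compared with the experimental data:
\Delta(N) = E(N+1) - E(N)
% Parabolic lateral confining potential of the dot:
V_{\mathrm{conf}}(r) = \tfrac{1}{2}\, m^{*}\, \omega_0^{2}\, r^{2}
```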
Abstract:
The relativistic distorted-wave Born approximation is used to calculate differential and total cross sections for inner-shell ionization of neutral atoms by electron and positron impact. The target atom is described within the independent-electron approximation using the self-consistent Dirac-Fock-Slater potential. The distorting potential for the projectile is also set equal to the Dirac-Fock-Slater potential. For electrons, this guarantees orthogonality of all the orbitals involved and simplifies the calculation of exchange T-matrix elements. The interaction between the projectile and the target electrons is assumed to reduce to the instantaneous Coulomb interaction. The adopted numerical algorithm allows the calculation of differential and total cross sections for projectiles with kinetic energies ranging from the ionization threshold up to about ten times this value. Algorithm accuracy and stability are demonstrated by comparing differential cross sections calculated by our code, with the distorting potential set to zero, against equivalent results generated by a more robust code that uses the conventional plane-wave Born approximation. Sample calculation results are presented for ionization of K- and L-shells of various elements and compared with the available experimental data.
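As a schematic sketch only (written in nonrelativistic notation; the paper's actual matrix elements are relativistic and built from Dirac-Fock-Slater orbitals), the direct DWBA amplitude for ionization has the generic form:

```latex
% Direct DWBA amplitude: distorted waves chi for the incoming/outgoing
% projectile, target orbitals psi, instantaneous Coulomb interaction.
T_{\mathrm{dir}} = \Big\langle \chi_f^{(-)}(\mathbf{r}_0)\, \psi_f(\mathbf{r}_1)
  \,\Big|\, \frac{e^2}{|\mathbf{r}_0 - \mathbf{r}_1|} \,\Big|\,
  \chi_i^{(+)}(\mathbf{r}_0)\, \psi_i(\mathbf{r}_1) \Big\rangle
% For electron impact, an exchange amplitude T_exc (projectile and ejected
% electron interchanged in the final state) also enters the cross section.
```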
Abstract:
We derive analytical expressions for the excitation energy of the isoscalar giant monopole and quadrupole resonances in finite nuclei by using the scaling method and the extended Thomas-Fermi approach to relativistic mean-field theory. We study the ability of several nonlinear σ-ω parameter sets of common use in reproducing the experimental data. For monopole oscillations the calculations agree better with experiment when the nuclear matter incompressibility of the relativistic interaction lies in the range 220-260 MeV. The breathing-mode energies of the scaling method compare satisfactorily with those obtained in relativistic RPA and time-dependent mean-field calculations. For quadrupole oscillations, all the analyzed nonlinear parameter sets reproduce the empirical trends reasonably well.
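For reference, the standard scaling-model estimate of the breathing-mode (giant monopole) energy, which makes the quoted link to the incompressibility explicit, is:

```latex
% Scaling estimate of the giant monopole resonance energy:
% K_A is the finite-nucleus incompressibility and <r^2> the
% ground-state mean square radius.
E_{\mathrm{GMR}} = \sqrt{\frac{\hbar^{2} K_A}{m \langle r^{2} \rangle}}
```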
Abstract:
Background: Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to localize the caudate structure suitably. However, the atlas prior information may not represent the structure of interest correctly, so it may be useful to introduce a more flexible technique for accurate segmentation. Method: We present CaudateCut: a new fully automatic method for segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new energy-function data and boundary potentials. In particular, we exploit information concerning intensity and geometry, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure. Results: We apply the novel CaudateCut method to the segmentation of the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as in a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis. Conclusion: CaudateCut generates segmentation results that are comparable to gold-standard segmentations and which are reliable in the analysis of differentiating neuroanatomical abnormalities between healthy controls and pediatric ADHD.
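The Graph Cut framework referred to minimizes an energy of the standard Boykov-Jolly form shown below; CaudateCut's contribution lies in how the data term $D_p$ and boundary term $B_{pq}$ are defined (intensity, geometry, contextual brain structures, multi-scale edgeness):

```latex
% Generic Graph Cut segmentation energy over pixel labels L_p,
% with pixel set P, neighborhood system N and smoothness weight lambda:
E(\mathbf{L}) = \sum_{p \in \mathcal{P}} D_p(L_p)
  + \lambda \sum_{(p,q) \in \mathcal{N}} B_{pq}\, \delta(L_p \neq L_q)
```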