88 results for automatic particle picking
Abstract:
The objective of PANACEA is to build a factory of LRs that automates the stages involved in the acquisition, production, updating and maintenance of LRs required by MT systems and by other applications based on language technologies, and simplifies potential issues regarding intellectual property rights. This automation will significantly cut down the cost, time and human effort involved. These reductions in cost and time are the only way to guarantee the continuous supply of LRs that MT and other language technologies will demand in a multilingual Europe.
Abstract:
Language Resources are a critical component for Natural Language Processing applications. Over the years, many resources have been created manually for the same task, but with different granularity and coverage. To create richer resources for a broad range of potential reuses, the information from all resources has to be joined into one. The high cost of comparing and merging different resources by hand has been a bottleneck for merging existing resources. With the objective of reducing human intervention, we present a new method for automating the merging of resources. We have addressed the merging of two verb subcategorization frame (SCF) lexica for Spanish. The results achieved, a new lexicon with enriched information and with conflicting information signalled, reinforce our idea that this approach can be applied to other NLP tasks.
Abstract:
This article reports on the results of research towards the fully automatic merging of lexical resources. Our main goal is to show the generality of the proposed approach, which has previously been applied to merge Spanish subcategorization frame lexica. In this work we extend and apply the same technique to the merging of morphosyntactic lexica encoded in LMF. The experiments showed that the technique is general enough to obtain good results in these two different tasks, which is an important step towards performing the merging of lexical resources fully automatically.
Abstract:
The work we present here addresses cue-based noun classification in English and Spanish. Its main objective is to automatically acquire lexical semantic information by classifying nouns into previously known lexical classes. This is achieved by using particular aspects of linguistic contexts as cues that identify a specific lexical class. Here we concentrate on the task of identifying such cues and on the theoretical background that allows for an assessment of the complexity of the task. The results show that, despite the a priori complexity of the task, cue-based classification is a useful tool for the automatic acquisition of lexical semantic classes.
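As a rough illustration of what cue-based classification looks like in practice, here is a minimal Python sketch for a single hypothetical class (event nouns); the cue patterns, the threshold and the corpus are invented for the example and are not the cues used in the paper.

import re

# Hypothetical context cues for one target class (event nouns); the cue
# inventory in the paper is linguistically motivated and class-specific.
EVENT_CUES = [r"during the {n}\b", r"\b{n} took place\b"]

def cue_score(noun, sentences):
    """Fraction of the noun's occurrences that appear in a cue context."""
    occurrences = hits = 0
    for sent in sentences:
        s = sent.lower()
        occurrences += len(re.findall(rf"\b{noun}\b", s))
        hits += sum(len(re.findall(cue.format(n=noun), s)) for cue in EVENT_CUES)
    return hits / occurrences if occurrences else 0.0

def classify(noun, sentences, threshold=0.2):
    # The threshold is illustrative; a real system would tune it on labelled data.
    return "event" if cue_score(noun, sentences) >= threshold else "non-event"

corpus = ["The meeting took place yesterday.", "During the meeting we talked."]
print(classify("meeting", corpus))   # -> event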
Abstract:
Automatic creation of polarity lexicons is a crucial issue to be solved in order to reduce the time and effort spent in the first steps of Sentiment Analysis. In this paper we present a methodology based on linguistic cues that allows us to automatically discover, extract and label subjective adjectives that should be collected in a domain-based polarity lexicon. For this purpose, we designed a bootstrapping algorithm that, from a small set of seed polar adjectives, is capable of iteratively identifying, extracting and annotating positive and negative adjectives. Additionally, the method automatically creates lists of highly subjective elements that change their prior polarity even within the same domain. The proposed algorithm reached a precision of 97.5% for positive adjectives and 71.4% for negative ones in the semantic orientation identification task.
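A minimal sketch of a bootstrapping loop of this kind, assuming coordination patterns ("and" preserving polarity, "but" flipping it) as the linguistic cues; the seed lists, cue patterns and corpus are illustrative and not the paper's actual design.

import re

# Hypothetical seed sets; the paper starts from a small set of polar adjectives.
POS_SEEDS = {"good", "great", "excellent"}
NEG_SEEDS = {"bad", "poor", "terrible"}

# Illustrative cues: adjectives coordinated with "and" tend to share polarity,
# while "but" tends to flip it (a classic heuristic, not necessarily the
# authors' cue set).
AND_PAT = re.compile(r"\b(\w+) and (\w+)\b")
BUT_PAT = re.compile(r"\b(\w+) but (\w+)\b")

def bootstrap(sentences, max_iters=10):
    pos, neg = set(POS_SEEDS), set(NEG_SEEDS)
    for _ in range(max_iters):
        added = False
        for sent in sentences:
            s = sent.lower()
            for a, b in AND_PAT.findall(s):
                for x, y in ((a, b), (b, a)):
                    if x in pos and y not in pos | neg:
                        pos.add(y); added = True
                    elif x in neg and y not in pos | neg:
                        neg.add(y); added = True
            for a, b in BUT_PAT.findall(s):
                if a in pos and b not in pos | neg:
                    neg.add(b); added = True
                elif a in neg and b not in pos | neg:
                    pos.add(b); added = True
        if not added:   # fixed point: no new adjectives labelled in this pass
            break
    return pos, neg

pos, neg = bootstrap(["The food was great and tasty.",
                      "The view was great but noisy."])
print(sorted(pos), sorted(neg))   # tasty joins pos, noisy joins neg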
Abstract:
Lexical Resources are a critical component for Natural Language Processing applications. However, the high cost of comparing and merging different resources has been a bottleneck to obtaining richer resources with a broad range of potential uses for a significant number of languages. With the objective of reducing cost by eliminating human intervention, we present a new method for automating the merging of resources, with special emphasis on what we call the mapping step. This mapping step, which converts the resources into a common format that later allows the merging, is usually performed with huge manual effort and thus makes the whole process very costly. We therefore propose a method to perform this mapping fully automatically. To test our method, we have addressed the merging of two verb subcategorization frame lexica for Spanish. The results achieved, which almost replicate human work, demonstrate the feasibility of the approach.
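To make the merging step concrete, here is a minimal sketch assuming both lexica have already been mapped to a common format (a dict from verb lemma to a set of SCF labels); the data and the conflict-flagging policy are invented for illustration, not the paper's specification.

# Assumes both lexica are already in the common format produced by the mapping
# step: a dict from verb lemma to a set of SCF labels. Data are illustrative.
def merge_lexica(lex_a, lex_b):
    merged, conflicts = {}, {}
    for lemma in set(lex_a) | set(lex_b):
        frames_a = lex_a.get(lemma, set())
        frames_b = lex_b.get(lemma, set())
        merged[lemma] = frames_a | frames_b     # enriched entry
        disagreement = frames_a ^ frames_b      # frames attested by only one source
        if frames_a and frames_b and disagreement:
            conflicts[lemma] = disagreement     # signalled for later review
    return merged, conflicts

lex_a = {"comer": {"NP", "NP_NP"}, "dormir": {"NP"}}
lex_b = {"comer": {"NP"}, "vivir": {"NP_PP"}}
merged, conflicts = merge_lexica(lex_a, lex_b)
print(merged["comer"])   # union of both sources' frames for "comer"
print(conflicts)         # {'comer': {'NP_NP'}} -- attested by only one lexicon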
Abstract:
In this work we present the results of experimental work on the development of lexical class-based lexica by automatic means. Our purpose is to assess the use of linguistic lexical-class-based information as a feature selection methodology for the use of classifiers in quick lexical development. The results show that the approach can significantly reduce the human effort required in the development of language resources.
Abstract:
Lexical Resources are a critical component for Natural Language Processing applications. However, the high cost of comparing and merging different resources has been a bottleneck to obtaining richer resources and a broader range of potential uses for a significant number of languages. With the objective of reducing cost by eliminating human intervention, we present a new method towards the automatic merging of resources. This method includes both the automatic mapping of the resources involved to a common format and their merging once in this format. This paper presents how we have addressed the merging of two verb subcategorization frame lexica for Spanish, but our method will be extended to cover other types of Lexical Resources. The results achieved, which almost replicate human work, demonstrate the feasibility of the approach.
Abstract:
This paper studies the rate of convergence of an appropriate discretization scheme for the solution of the McKean-Vlasov equation introduced by Bossy and Talay. More specifically, we consider approximations of the distribution and of the density of the solution of the stochastic differential equation associated with the McKean-Vlasov equation. The scheme adopted here is a mixed one: Euler/weakly interacting particle system. If $n$ is the number of weakly interacting particles and $h$ is the uniform step in the time discretization, we prove that the rate of convergence of the distribution functions of the approximating sequence, in the $L^1(\Omega\times\mathbb{R})$ norm and in the sup norm, is of the order of $\frac{1}{\sqrt{n}} + h$, while for the densities it is of the order $h + \frac{1}{\sqrt{nh}}$. This result is obtained by carefully employing techniques of Malliavin Calculus.
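A minimal numerical sketch of the mixed Euler/weakly-interacting-particle scheme the abstract refers to, for a toy McKean-Vlasov drift of interaction-kernel form $b(x,\mu) = \int K(x,y)\,\mu(dy)$; the kernel, diffusion coefficient and initial condition are arbitrary choices for illustration, not those analysed in the paper.

import numpy as np

def K(x, y):
    return np.sin(x - y)   # toy pairwise interaction kernel (assumption)

def simulate(n=1000, T=1.0, h=0.01, sigma=1.0, seed=0):
    """Euler scheme over a system of n weakly interacting particles."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)        # all particles start at 0 (assumption)
    for _ in range(int(T / h)):
        # Drift against the empirical measure: average interaction with all particles.
        drift = K(x[:, None], x[None, :]).mean(axis=1)
        x = x + drift * h + sigma * np.sqrt(h) * rng.standard_normal(n)
    return x

particles = simulate()
# The empirical distribution of `particles` approximates the law of X_T; the
# abstract's result bounds the error on distribution functions by O(1/sqrt(n) + h).
print(particles.mean(), particles.std())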
Abstract:
The aim of this work was to determine whether the filters used in microirrigation systems can remove potentially emitter-clogging particles. The particle size and volume distributions of different effluents and their filtrates were established, and the efficiency of the removal of these particles and of total suspended solids by screen, disc and sand filters was determined. In most of the effluents and filtrates, the number of particles with a diameter > 20 μm was minimal. By analysing the particle volume distribution it was found that particles larger than the disc and screen filter pores appeared in the filtrates. However, the sand filter was able to retain particles larger than its pore size. The filtration efficiency depended more on the type of effluent than on the filter. It was also found that the particle size distribution followed a power law. Analysis of the β exponents showed that the filters did not significantly modify the particle size distribution of the effluents.
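As an illustration of the power-law analysis mentioned at the end of the abstract, the following sketch fits the exponent β of N(d) ∝ d^(−β) by linear regression in log-log space; the diameter bins and particle counts are made up for the example.

import numpy as np

diameters = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # bin centres, micrometres
counts    = np.array([5000, 900, 210, 48, 11])        # particles per bin (invented)

# Fit log N = log a - beta * log d; np.polyfit returns (slope, intercept).
slope, log_a = np.polyfit(np.log(diameters), np.log(counts), 1)
beta = -slope
print(f"beta = {beta:.2f}")
# Comparing beta between an effluent and its filtrate indicates whether the
# filter changed the shape of the particle size distribution.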
Abstract:
Bulk and single-particle properties of hot hyperonic matter are studied within the Brueckner-Hartree-Fock approximation extended to finite temperature. The bare interaction in the nucleon sector is the Argonne V18 potential supplemented with an effective three-body force to reproduce the saturation properties of nuclear matter. The modern Nijmegen NSC97e potential is employed for the hyperon-nucleon and hyperon-hyperon interactions. The effect of temperature on the in-medium effective interaction is found to be, in general, very small, and the single-particle potentials differ by at most 25% for temperatures in the range from 0 to 60 MeV. The bulk properties of infinite baryonic matter, either isospin-symmetric nuclear matter or a beta-stable composition that includes a nonzero fraction of hyperons, are obtained. It is found that the presence of hyperons can modify the thermodynamical properties of the system in a non-negligible way.
Abstract:
The freeze-out of particles across a three-dimensional space-time hypersurface in fluid dynamical models is discussed. The calculation of the final momentum distribution of emitted particles is described for freeze-out surfaces with both spacelike and timelike normals, taking into account conservation laws across the freeze-out discontinuity.
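The standard starting point for such a calculation is the Cooper-Frye formula; the abstract does not name it, so taking it as the relevant expression is an assumption, written here as a LaTeX sketch.

% Invariant momentum spectrum of particles emitted across a freeze-out
% hypersurface \sigma with surface-normal element d\sigma_\mu, where f(x,p)
% is the phase-space distribution on the surface:
E \, \frac{dN}{d^3p} = \int_\sigma f(x,p) \, p^\mu \, d\sigma_\mu

For hypersurface elements with spacelike normals, $p^\mu \, d\sigma_\mu$ can be negative over part of momentum space, which is precisely the subtlety a consistent treatment of conservation laws across the discontinuity must handle.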
Abstract:
The distribution of single-particle strength in nuclear matter is calculated for a realistic nucleon-nucleon interaction. The influence of the short-range repulsion and the tensor component of the nuclear force on the spectral functions is to move approximately 13% of the total strength for all single-particle states beyond 100 MeV into the particle domain. This result is related to the abundantly observed quenching phenomena in nuclei, which include the reduction of spectroscopic factors observed in (e,e′p) reactions and the missing strength in low-energy response functions.
Abstract:
An extension of the self-consistent field approach formulated by Cohen in the preceding paper is proposed in order to include the most general kind of two-body interactions, i.e., interactions depending on position, momenta, spin, isotopic spin, etc. The dielectric function is replaced by a dielectric matrix. The evaluation of the energies involves the computation of a matrix inversion and a trace.