Abstract:
The objective of PANACEA is to gear different advanced tools together to build a Language Resource (LR) factory, a production line that automates the steps involved in the acquisition, production, updating and maintenance of the LRs that Machine Translation and other language technologies need.
Abstract:
Automatic classification of makams from symbolic data is a rarely studied topic. In this paper, we first review an n-gram based approach using various representations of the symbolic data. While a high degree of precision can be obtained, confusion arises mainly for makams that use (almost) the same scale and pitch hierarchy but differ in overall melodic progression, the seyir. To improve the system, n-gram based classification is first tested on various sections of a piece, exploiting the seyir property that the melodic progression starts in a certain region of the scale. In a second test, a hierarchical classification structure is designed that uses n-grams and seyir features at different levels to further improve the system.
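A minimal sketch of the first, n-gram based stage (our illustration only: the note names, smoothing floor and toy corpus below are invented, and the paper's hierarchical seyir stage is only hinted at in a comment):

```python
import math
from collections import Counter

def ngrams(seq, n=3):
    """All contiguous n-grams of a symbol sequence."""
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def train(corpus, n=3):
    """corpus: dict mapping makam name -> list of note sequences.
    Returns a relative-frequency n-gram model per makam."""
    models = {}
    for makam, pieces in corpus.items():
        counts = Counter(g for piece in pieces for g in ngrams(piece, n))
        total = sum(counts.values())
        models[makam] = {g: c / total for g, c in counts.items()}
    return models

def classify(piece, models, n=3, floor=1e-6):
    """Pick the makam whose model gives the piece the highest log-likelihood.
    Scoring only an opening slice, e.g. piece[:len(piece)//3], would be one
    crude way to fold in the seyir idea that melodies begin in a
    characteristic region of the scale."""
    grams = ngrams(piece, n)
    scores = {m: sum(math.log(model.get(g, floor)) for g in grams)
              for m, model in models.items()}
    return max(scores, key=scores.get)

# Toy usage with invented note names:
corpus = {"rast": [["G4", "A4", "B4", "C5", "D5"]],
          "hicaz": [["A4", "Bb4", "C#5", "D5"]]}
models = train(corpus, n=2)
print(classify(["G4", "A4", "B4"], models, n=2))  # -> 'rast'
```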
Abstract:
The objective of PANACEA is to build a factory of LRs that automates the stages involved in the acquisition, production, updating and maintenance of the LRs required by MT systems and by other applications based on language technologies, and that simplifies potential issues regarding intellectual property rights. This automation will significantly cut down the cost, time and human effort involved. These reductions of cost and time are the only way to guarantee the continuous supply of LRs that MT and other language technologies will be demanding in a multilingual Europe.
Abstract:
Language Resources are a critical component for Natural Language Processing applications. Over the years, many resources have been created manually for the same task, but with different granularity and coverage. To create richer resources for a broad range of potential reuses, the information from all of them has to be joined into one. The high cost of comparing and merging different resources by hand has been a bottleneck for merging existing resources. With the objective of reducing human intervention, we present a new method for automating the merging of resources. We have addressed the merging of two verb subcategorization frame (SCF) lexica for Spanish. The results achieved, a new lexicon with enriched information and with conflicting information signalled, reinforce our idea that this approach can be applied to other NLP tasks.
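As one way to picture the merging step, a toy sketch in Python with invented verbs and SCF labels; the paper's actual merging operates on richer representations:

```python
def merge_lexica(lex_a, lex_b):
    """Merge two subcategorization lexica (verb -> set of SCF labels).
    Returns the union, plus a report of entries where the sources disagree."""
    merged, conflicts = {}, {}
    for verb in sorted(set(lex_a) | set(lex_b)):
        frames_a = lex_a.get(verb, set())
        frames_b = lex_b.get(verb, set())
        merged[verb] = frames_a | frames_b
        # A verb described by both sources but with non-identical frame sets
        # is signalled so a lexicographer can inspect it later.
        if verb in lex_a and verb in lex_b and frames_a != frames_b:
            conflicts[verb] = {"only_a": frames_a - frames_b,
                               "only_b": frames_b - frames_a}
    return merged, conflicts

# Invented example entries:
lex_a = {"comer": {"NP", "NP_NP"}, "dormir": {"NP"}}
lex_b = {"comer": {"NP"}, "vivir": {"NP_PP"}}
merged, conflicts = merge_lexica(lex_a, lex_b)
print(merged["comer"])     # {'NP', 'NP_NP'}
print(conflicts["comer"])  # {'only_a': {'NP_NP'}, 'only_b': set()}
```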
Abstract:
This article reports on the results of research towards the fully automatic merging of lexical resources. Our main goal is to show the generality of the proposed approach, which has previously been applied to merge Spanish subcategorization frame lexica. In this work we extend and apply the same technique to the merging of morphosyntactic lexica encoded in LMF. The experiments show that the technique is general enough to obtain good results in these two different tasks, which is an important step towards merging lexical resources fully automatically.
Abstract:
The work we present here addresses cue-based noun classification in English and Spanish. Its main objective is to automatically acquire lexical semantic information by classifying nouns into previously known lexical classes. This is achieved by using particular aspects of linguistic contexts as cues that identify a specific lexical class. Here we concentrate on the task of identifying such cues and on the theoretical background that allows an assessment of the complexity of the task. The results show that, despite the a priori complexity of the task, cue-based classification is a useful tool for the automatic acquisition of lexical semantic classes.
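A minimal illustration of the cue idea, with invented English cue patterns rather than the paper's cue inventory:

```python
import re
from collections import Counter

# Illustrative cue patterns (not the paper's actual cues): a noun after
# "took place" suggests an EVENT reading, one after "member of the"
# suggests an ORGANIZATION-like reading.
CUES = {
    "event": [r"\bduring the (\w+)", r"\b(\w+) took place\b"],
    "organization": [r"\bmember of the (\w+)", r"\bthe (\w+) announced\b"],
}

def classify_nouns(corpus):
    """Tally cue hits per noun and assign the majority class."""
    votes = {}
    for label, patterns in CUES.items():
        for pattern in patterns:
            for match in re.finditer(pattern, corpus, re.IGNORECASE):
                noun = match.group(1).lower()
                votes.setdefault(noun, Counter())[label] += 1
    return {noun: counts.most_common(1)[0][0] for noun, counts in votes.items()}

text = "The ceremony took place at noon. He was a member of the committee."
print(classify_nouns(text))  # {'ceremony': 'event', 'committee': 'organization'}
```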
Abstract:
The automatic creation of polarity lexicons is a crucial issue to be solved in order to reduce the time and effort spent on the first steps of Sentiment Analysis. In this paper we present a methodology based on linguistic cues that allows us to automatically discover, extract and label subjective adjectives that should be collected in a domain-based polarity lexicon. For this purpose, we designed a bootstrapping algorithm that, from a small set of seed polar adjectives, is capable of iteratively identifying, extracting and annotating positive and negative adjectives. Additionally, the method automatically creates lists of highly subjective elements that change their prior polarity even within the same domain. The proposed algorithm reached a precision of 97.5% for positive adjectives and 71.4% for negative ones in the semantic orientation identification task.
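A toy sketch of a bootstrapping loop of this kind, using coordination cues ('X and Y' same polarity, 'X but Y' opposite polarity) as stand-ins for the paper's actual linguistic cues:

```python
import re

def bootstrap_polarity(corpus, pos_seeds, neg_seeds, iterations=5):
    """Iteratively grow polarity sets from seed adjectives. 'X and Y'
    propagates the same polarity; 'X but Y' propagates the opposite.
    A minimal illustration; the paper's cues and filtering differ."""
    pos, neg = set(pos_seeds), set(neg_seeds)
    pairs_same = re.findall(r"\b(\w+) and (\w+)\b", corpus.lower())
    pairs_flip = re.findall(r"\b(\w+) but (\w+)\b", corpus.lower())
    for _ in range(iterations):
        grew = False
        for a, b in pairs_same:
            for x, y in ((a, b), (b, a)):
                if x in pos and y not in pos | neg:
                    pos.add(y)
                    grew = True
                if x in neg and y not in pos | neg:
                    neg.add(y)
                    grew = True
        for a, b in pairs_flip:
            for x, y in ((a, b), (b, a)):
                if x in pos and y not in pos | neg:
                    neg.add(y)
                    grew = True
                if x in neg and y not in pos | neg:
                    pos.add(y)
                    grew = True
        if not grew:  # stop once an iteration adds nothing new
            break
    return pos, neg

text = "The room was clean and spacious but noisy. The food was awful and cold."
print(bootstrap_polarity(text, {"clean"}, {"awful"}))
# ({'clean', 'spacious'}, {'awful', 'cold', 'noisy'})
```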
Abstract:
Lexical Resources are a critical component for Natural Language Processing applications. However, the high cost of comparing and merging different resources has been a bottleneck to obtaining richer resources with a broad range of potential uses for a significant number of languages. With the objective of reducing cost by eliminating human intervention, we present a new method for automating the merging of resources, with special emphasis on what we call the mapping step. This step, which converts the resources into a common format that later allows the merging, is usually performed with a huge manual effort and thus makes the whole process very costly. We therefore propose a method to perform this mapping fully automatically. To test our method, we have addressed the merging of two verb subcategorization frame lexica for Spanish. The results achieved, which almost replicate human work, demonstrate the feasibility of the approach.
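To make the mapping step concrete, a toy sketch assuming two invented source formats; note that the paper derives such correspondences automatically, whereas the mapping tables here are hand-written for illustration:

```python
# Hypothetical raw entries in two source-specific formats; real lexica
# use much richer SCF encodings.
entries_a = [("abrir", "SUJ,OD")]          # lexicon A: comma-separated functions
entries_b = [{"lemma": "abrir", "frame": ["subj", "dobj"]}]  # lexicon B: lists

# Hand-written mapping tables from each source's labels to a shared tag set.
MAP_A = {"SUJ": "subj", "OD": "obj"}
MAP_B = {"subj": "subj", "dobj": "obj"}

def to_common(entries_a, entries_b):
    """Convert both lexica to a common (lemma -> set of frame tuples) format,
    after which a generic merge can operate on them uniformly."""
    common = {}
    for lemma, frame in entries_a:
        tags = tuple(MAP_A[f] for f in frame.split(","))
        common.setdefault(lemma, set()).add(tags)
    for entry in entries_b:
        tags = tuple(MAP_B[f] for f in entry["frame"])
        common.setdefault(entry["lemma"], set()).add(tags)
    return common

print(to_common(entries_a, entries_b))  # {'abrir': {('subj', 'obj')}}
```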
Abstract:
In this work we present the results of experimental work on the development of lexical class-based lexica by automatic means. Our purpose is to assess the use of linguistic, lexical-class-based information as a feature selection methodology for classifiers in quick lexical development. The results show that the approach can significantly reduce the human effort required in the development of language resources.
Abstract:
Lexical Resources are a critical component for Natural Language Processing applications. However, the high cost of comparing and merging different resources has been a bottleneck to obtaining richer resources and a broader range of potential uses for a significant number of languages. With the objective of reducing cost by eliminating human intervention, we present a new method towards the automatic merging of resources. This method includes both the automatic mapping of the resources involved to a common format and their merging once in this format. This paper presents how we have addressed the merging of two verb subcategorization frame lexica for Spanish, but our method will be extended to cover other types of Lexical Resources. The results achieved, which almost replicate human work, demonstrate the feasibility of the approach.
Abstract:
The Treatise on Quadrature of Fermat (c. 1659), besides containing the first known proof of the computation of the area under a higher parabola, $\int x^{m/n}\,dx$, or under a higher hyperbola, $\int x^{-m/n}\,dx$, with the appropriate limits of integration in each case, has a second part which was not understood by Fermat's contemporaries. This second part of the Treatise is obscure and difficult to read, and even the great Huygens described it as 'published with many mistakes and it is so obscure (with proofs redolent of error) that I have been unable to make any sense of it'. Far from the confusion that Huygens attributes to it, in this paper we try to prove that Fermat, in writing the Treatise, had a very clear goal in mind and managed to attain it by means of a simple and original method. Fermat reduced the quadrature of a great number of algebraic curves to the quadrature of known curves: the higher parabolas and hyperbolas of the first part of the treatise. Others he reduced to the quadrature of the circle. We shall see how the clever use of two procedures, quite novel at the time, the change of variables and a particular case of the formula of integration by parts, provided Fermat with the necessary tools to square, very easily, curves as well known as the folium of Descartes, the cissoid of Diocles or the witch of Agnesi.
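In modern notation (our gloss, not Fermat's own formulation), the two families of quadratures from the first part of the Treatise evaluate to:

```latex
% Area under the higher parabola (m, n positive integers):
\int_{0}^{a} x^{m/n}\,dx = \frac{n}{m+n}\,a^{(m+n)/n}
% Area under the higher hyperbola (convergent only for m > n):
\int_{a}^{\infty} x^{-m/n}\,dx = \frac{n}{m-n}\,a^{(n-m)/n}
```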
Abstract:
Having sophisticated management systems or ERP (Enterprise Resource Planning) programs is not enough for organizations. For these resources to yield adequate, up-to-date results, the input information must be read automatically, in order to save resources, eliminate errors and ensure quality compliance. It is therefore important to have automatic identification and data collection tools and services. The main objectives (after introducing the reader to the importance of logistic identification systems in a highly competitive global environment) are to know and understand how the three main technologies on the market work (linear barcodes, two-dimensional barcodes and RFID systems), and to see the state of deployment and the main applications of each. Once this first study is complete, the aim is to compare the three technologies in order to draw future prospects in the field of auto-identification. From the current situation and the needs of companies, together with the wonderful world that RFID (Radio Frequency Identification) technology seems to open up, the main conclusion reached is that, despite the technical limitations of linear barcodes, they are fully integrated throughout the logistics chain thanks to standardization and the use of a common language, the GTIN (Global Trade Item Number) symbologies, along the whole supply chain, which guarantees full traceability of products thanks in part to the management of databases and of the information flow. RFID technology with the EPC (Electronic Product Code) overcomes these limitations, making it the leading candidate to replace the limited barcodes. Even so, RFID with the EPC will not be an adequate logistic identifier until important barriers are overcome, such as the lack of standardization and the high cost of deployment.
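For reference, the check digit shared by the GTIN symbologies mentioned above (EAN-13, UPC-A, GTIN-14) is computable in a few lines; a minimal sketch:

```python
def gtin_check_digit(digits):
    """Check digit for a GTIN body (all digits except the final one): weight
    the digits 3,1,3,1,... starting from the rightmost, sum the products,
    and return the amount needed to reach the next multiple of 10."""
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(digits)))
    return (10 - total % 10) % 10

# EAN-13 example: first 12 digits of 4006381333931
print(gtin_check_digit("400638133393"))  # -> 1
```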
Abstract:
Advances in the semiconductor industry enable microelectromechanical systems (MEMS) sensors, signal conditioning logic and network access to be integrated into a smart sensor node. In this framework, a mixed-mode interface circuit for monolithically integrated gas sensor arrays was developed with high-level design techniques. This interface system includes analog electronics for the inspection of up to four sensor arrays and digital logic for smart control and data communication. Although different design methodologies were used in the conception of the complete circuit, high-level synthesis tools and methodologies were crucial in speeding up the whole design cycle, enhancing reusability for future applications and producing a flexible and robust component.
Abstract:
The aim of this brief is to present an original design methodology that permits implementing latch-up-free smart power circuits on a very simple, cost-effective technology. The basic concept used for this purpose is to let the wells of those MOS transistors most susceptible to initiating latch-up float.
Abstract:
The aim of this article is to reflect on Michel Foucault's reading of Plutarch's Eroticus in his Histoire de la sexualité, placing emphasis on the fact that, contrary to what is affirmed by the French thinker, the real debate is not, in the author's opinion, about true pleasure, whether that obtained by the erastés from his erómenos or that obtained by husbands from their wives, but about the need to assign love and friendship (éros kaì philia) to conjugal love as well.