923 results for: Information retrieval, dysorthography, dyslexia, finite state machines, readability
Abstract:
This is a monthly column prepared by the Iowa Public Information Board to update Iowans on the IPIB's activities and provide information on some of the issues routinely addressed by the board.
Abstract:
In this paper we propose an endpoint detection system based on several features extracted from each speech frame, followed by a robust classifier (AdaBoost, bagging of decision trees, or a multilayer perceptron) and a finite-state automaton (FSA). The FSA module consisted of a 4-state decision logic that filtered false alarms and false positives. We present and compare results for four different classifiers on this task. The proposed method uses a look-ahead of 7 frames, the number of frames that maximized the accuracy of the system. The system was tested with real signals recorded inside a car, with signal-to-noise ratios ranging from 6 dB to 30 dB. Finally, we present experimental results demonstrating that the system yields robust endpoint detection.
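The abstract only sketches the 4-state decision logic at a high level; the snippet below is a minimal illustration of how such a look-ahead smoother can be written, assuming state names (SILENCE, MAYBE_SPEECH, SPEECH, MAYBE_SILENCE) and a 7-frame confirmation window that are our own choices, not necessarily the authors' design.

```python
from enum import Enum, auto

class State(Enum):
    SILENCE = auto()
    MAYBE_SPEECH = auto()   # speech suspected, awaiting confirmation
    SPEECH = auto()
    MAYBE_SILENCE = auto()  # silence suspected, awaiting confirmation

def smooth_decisions(frame_labels, lookahead=7):
    """Filter raw per-frame classifier outputs (1 = speech, 0 = non-speech)
    with a 4-state automaton: a transition into SPEECH or back into SILENCE
    is only confirmed after `lookahead` consecutive agreeing frames."""
    state = State.SILENCE
    run = 0                     # length of the current run of agreeing frames
    smoothed = []
    for label in frame_labels:
        if state is State.SILENCE:
            if label == 1:
                state, run = State.MAYBE_SPEECH, 1
        elif state is State.MAYBE_SPEECH:
            if label == 1:
                run += 1
                if run >= lookahead:        # enough evidence: speech onset
                    state = State.SPEECH
            else:                           # isolated spike filtered out
                state, run = State.SILENCE, 0
        elif state is State.SPEECH:
            if label == 0:
                state, run = State.MAYBE_SILENCE, 1
        elif state is State.MAYBE_SILENCE:
            if label == 0:
                run += 1
                if run >= lookahead:        # enough evidence: speech offset
                    state = State.SILENCE
            else:                           # short pause inside speech bridged
                state, run = State.SPEECH, 0
        smoothed.append(1 if state in (State.SPEECH, State.MAYBE_SILENCE) else 0)
    return smoothed

# Example: a 2-frame spike is suppressed, a long run is kept.
print(smooth_decisions([0, 1, 1, 0, 0] + [1] * 10 + [0] * 10))
```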
Abstract:
Using WordNet in information retrieval
Abstract:
Textual autocorrelation is a broad and pervasive concept, referring to the similarity between nearby textual units: lexical repetitions across consecutive sentences, semantic associations between neighbouring lexemes, persistence of discourse types (narrative, descriptive, dialogical, and so on). Textual autocorrelation can also be negative, as illustrated by alternating phonological or morpho-syntactic categories, or by the succession of word lengths. This contribution proposes a general Markov formalism for textual navigation, inspired by spatial statistics. The formalism can express well-known constructs in textual data analysis, such as term-document matrices, references and hyperlink navigation, (web) information retrieval, and in particular textual autocorrelation, as measured by Moran's I relative to the exchange matrix associated with neighbourhoods of various possible types. Four case studies (word-length alternation, lexical repulsion, part-of-speech autocorrelation, and semantic autocorrelation) illustrate the theory. In particular, one observes a short-range repulsion between nouns together with a short-range attraction between verbs, both at the lexical and semantic levels.
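As a concrete illustration of the central quantity, the sketch below computes Moran's I for a numeric feature attached to consecutive textual units (here, word lengths) under a simple first-neighbour exchange matrix; both the feature and the uniform neighbourhood weights are assumptions made for the example, not the exchange matrices studied in the paper.

```python
import numpy as np

def morans_i(x, w):
    """Moran's I of feature vector x under weight matrix w:
    I = (n / sum(w)) * (z @ w @ z) / (z @ z), with z = x - mean(x)."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    z = x - x.mean()
    return len(x) / w.sum() * (z @ w @ z) / (z @ z)

def neighbour_weights(n):
    """Exchange matrix linking each unit to its immediate predecessor
    and successor in the text (a simple choice for this example)."""
    w = np.zeros((n, n))
    idx = np.arange(n - 1)
    w[idx, idx + 1] = w[idx + 1, idx] = 1.0
    return w

# Word lengths of a short alternating sequence: negative autocorrelation.
lengths = [2, 9, 3, 8, 2, 10, 1, 9]
print(morans_i(lengths, neighbour_weights(len(lengths))))
```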
Abstract:
The goal of this project is to become familiar with Semantic Web technologies, to understand what an ontology is, and to learn how to model one in a domain of our own choosing; and to build a parser that connects to Wikipedia and/or DBpedia to populate that ontology, allowing the user to browse its concepts and study their relationships.
Abstract:
In the course of this work I will delve into the concept of the Semantic Web, an increasingly tangible reality which, under the label Web 3.0, will succeed the current web model. Since this is a very broad field of application, we focus on the design and semi-automatic population of ontologies, the latter being a key piece in the development and potential success of semantic technologies.
Abstract:
Software for reading and populating an ontology with information from DBpedia and Wikipedia.
Abstract:
In this final-year project (TFC) we want to study the evolution of the current Web towards the Semantic Web.
Abstract:
Purpose: This paper aims to analyse various aspects of an academic social network: the profile of its users, their reasons for using it, its perceived benefits, and the use of other social media for scholarly purposes.
Design/methodology/approach: The authors examined the profiles of users of an academic social network who were affiliated with 12 universities. For each user the following were recorded: sex, the number of documents uploaded, the number of followers, and the number of people being followed. In addition, a survey was sent to the individuals who had an email address in their profile.
Findings: Half of the users of the social network were academics and a third were PhD students. Social sciences scholars accounted for nearly half of all users. Academics used the service to get in touch with other scholars, disseminate research results, and follow other scholars. Other widely employed social media included citation indexes, document creation, editing and sharing tools, and communication tools. Users complained about the lack of support for the use of these tools.
Research limitations/implications: The results are based on a single case study.
Originality/value: This study provides new insights into the impact of social media in academic contexts by analysing the user profiles and benefits of a social network service that is specifically targeted at the academic community.
Abstract:
This work, Identification of a Research Portfolio for the Development of Filtration Equipment, presents a novel approach to identifying promising research topics in the design and development of filtration equipment and processes. The proposed approach consists of identifying technological problems frequently encountered in filtration processes. The sources of information for problem retrieval were patent documents and scientific papers discussing filtration equipment and processes. The problem identification method adopted in this work focuses on the semantic structure of each sentence in order to generate a series of subject-action-object structures; this was achieved with the Knowledgist software. A list of the problems frequently encountered in filtration processes, as mentioned in patent documents and scientific papers, was generated. These problems were carefully studied and categorized, and suggestions were made on the classes of problems that need further investigation in order to propose a research portfolio. The uses and importance of other information retrieval methods are also highlighted in this work.
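The subject-action-object extraction in this work was done with the commercial Knowledgist tool; as a rough sketch of the underlying idea only, the snippet below pulls naive subject-verb-object triples from a sentence using spaCy's dependency parse. The model name and the example sentence are assumptions for illustration, and real patent text would need far more robust handling.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed small English model

def sao_triples(text):
    """Extract naive subject-action-object triples from dependency parses."""
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ != "VERB":
                continue
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "obj")]
            for s in subjects:
                for o in objects:
                    triples.append((s.text, token.lemma_, o.text))
    return triples

# Hypothetical example sentence; output depends on the parse,
# e.g. [('cake', 'clog', 'membrane'), ...]
print(sao_triples("The filter cake clogs the membrane and reduces throughput."))
```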
Abstract:
The topical classification of web portals can be used to identify a user's interests by collecting statistics on their browsing habits across different categories. This master's thesis examines the areas of web applications in which the collected statistics can be exploited for personalisation. The general principles of content personalisation, Internet advertising and information retrieval are explained using mathematical models. In addition, the thesis describes the general characteristics of web portals and the issues involved in collecting the statistical data.