959 results for VHDL (Computer hardware description language)
Abstract:
Atrial fibrillation, the most common arrhythmia in clinical practice, affects 2.3 million patients in North America. To study its mechanisms and potential therapies, animal models of atrial fibrillation have been developed. High-density epicardial electrical mapping is a well-established experimental technique for monitoring atrial activity in vivo in response to electrical stimulation, remodeling, arrhythmias, or modulation of the autonomic nervous system. In regions that are not accessible to epicardial mapping, non-contact endocardial mapping performed with a balloon-shaped catheter could provide a more complete description of atrial activity. In this study, a canine experiment was designed and analyzed. Electro-anatomical reconstruction, epicardial mapping (103 electrodes), non-contact endocardial mapping (2048 virtual electrodes computed from a 64-channel balloon-shaped catheter), and direct-contact endocardial recordings were performed simultaneously. The recording systems were also simulated in a mathematical model of a canine right atrium. In both the simulations and the experiments (after suppression of the atrioventricular node), activation maps were computed during sinus rhythm. Repolarization was assessed by measuring the area under the atrial T wave (ATa), a marker of repolarization gradients. The results show an epicardial-endocardial correlation coefficient of 0.8 (experiment) and 0.96 (simulation) between activation maps, and a correlation coefficient of 0.57 (experiment) and 0.92 (simulation) between ATa values. Non-contact endocardial mapping thus appears to be a useful experimental tool for extracting information beyond the regions covered by epicardial recording plaques.
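As a rough illustration of the quantities being compared above, the sketch below computes an ATa-style area under a fixed repolarization window for paired epicardial and endocardial signals and then their Pearson correlation. This is a minimal sketch: the synthetic signals, sampling rate, and window bounds are invented stand-ins, not the study's data or processing chain.

```python
import numpy as np

def ata_area(egm, fs, t_start, t_end):
    """Area under the atrial T wave (ATa) of one electrogram,
    integrated with the trapezoidal rule over a repolarization window."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    w = egm[i0:i1]
    return float(np.sum((w[1:] + w[:-1]) * 0.5) / fs)

# Synthetic stand-ins: paired epicardial/endocardial electrograms at 20
# matched sites (correlated by construction, purely illustrative).
rng = np.random.default_rng(0)
fs = 1000.0  # Hz
epi_ata, endo_ata = [], []
for _ in range(20):
    epi = rng.normal(0.0, 0.1, 500)
    endo = epi + rng.normal(0.0, 0.05, 500)
    epi_ata.append(ata_area(epi, fs, 0.2, 0.4))
    endo_ata.append(ata_area(endo, fs, 0.2, 0.4))

# Epicardial-endocardial agreement, analogous to the reported 0.57 / 0.92
r = np.corrcoef(epi_ata, endo_ata)[0, 1]
print(f"Pearson r between ATa values: {r:.2f}")
```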
Abstract:
The LVF dictionary (Les Verbes Français) by J. Dubois and F. Dubois-Charlier is one of the most important lexical resources for the French language, characterized by a highly pertinent semantic and syntactic description. The LVF has been made available in an XML format to make its information more convenient to access for computer applications, such as natural language processing applications for French. With the emergence of the Semantic Web and the rapid spread of its technologies and standards, such as XML, RDF/RDFS and OWL, it is worthwhile to represent the LVF in a more formalized language so that it can be better exploited by natural language processing and Semantic Web applications. In this thesis we present an OWL ontological version of the LVF, detailing the transformation process from the XML version to OWL, and we demonstrate its use in natural language processing with a semantic annotation application developed in GATE.
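To make the XML-to-OWL step concrete, here is a minimal sketch of such a transformation using rdflib; the element and attribute names (verbe, forme, classe) are hypothetical placeholders, since the actual LVF schema is not shown here.

```python
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

LVF = Namespace("http://example.org/lvf#")  # assumed namespace

xml_entry = """<verbes>
  <verbe forme="aimer" classe="P1a" construction="T1100"/>
</verbes>"""

g = Graph()
g.bind("lvf", LVF)
g.add((LVF.Verbe, RDF.type, OWL.Class))
g.add((LVF.classe, RDF.type, OWL.DatatypeProperty))

# Each XML verb entry becomes an individual of the OWL class lvf:Verbe
for v in ET.fromstring(xml_entry).iter("verbe"):
    node = LVF[v.get("forme")]
    g.add((node, RDF.type, LVF.Verbe))
    g.add((node, LVF.classe, Literal(v.get("classe"))))
    g.add((node, RDFS.label, Literal(v.get("forme"), lang="fr")))

print(g.serialize(format="turtle"))
```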
Abstract:
This work describes a methodology for converting a specialized dictionary into a learner's dictionary. The dictionary to which we apply our conversion method is the DiCoInfo, Dictionnaire fondamental de l'informatique et de l'Internet. We focus on changes affecting the presentation of data categories. What is meant by a specialized dictionary for learners, in our case, is a dictionary covering the field of computer science and the Internet and meeting our users' needs in communicative and cognitive situations. Our dictionary is aimed at learners of the language of computing. We start by presenting a detailed description of four dictionaries for learners. We explain how the observations made on these resources have helped us in developing our methodology. In order to develop our methodology, first, based on Bergenholtz and Tarp's works (Bergenholtz 2003; Tarp 2008; Fuertes Olivera and Tarp 2011), we defined the type of users who may use our dictionary. Translators are our first intended users. Other users working in fields related to translation are also targeted: proofreaders, technical writers, interpreters. We also determined the use situations of our dictionary. It aims to assist learners in solving text reception and text production problems (communicative situations) and in studying the terminology of computing (cognitive situations). Thus, we could establish its lexicographical functions: communicative and cognitive functions. Then, we extracted 50 articles from the DiCoInfo to which we applied a number of changes in different aspects: the layout, the presentation of data, the navigation, and the use of multimedia. The changes were made according to two fundamental parameters: 1) simplification of the presentation; 2) lexicographic functions (which include the intended users and use situations). In this way, we exploited the widgets offered by the technology to update the interface and layout. Strategies have been developed to organize a large number of lexical links in a simpler way. We associated these links with examples showing their use in specific contexts. Multimedia, such as audio pronunciation and illustrations, has been used.
Abstract:
This study belongs to a line of research in translation studies carried out within a cognitive semantics framework and aimed at identifying the modes of metaphorical conceptualization in specialized domains, more precisely in the biomedical sciences. Our study focuses on the modes of metaphorical conceptualization used in neuroanatomy in French, English, and German, with a view to application in translation. We look more specifically at the anatomical description of two structures of the central nervous system: the spinal cord and the cerebellum. Our objective is to identify and characterize metaphorical conceptualization indices (MCIs). Our method relies on a trilingual corpus of reference texts dealing with these structures and uses semantic annotation in XML, which allows the annotated corpora to be queried using the XQuery language. We show that MCIs play a predominant role in the phraseology and the naming conventions specific to the anatomical description of the nervous system, as is the case in cell biology and in the anatomy of muscles, peripheral nerves, and blood vessels. From a lexical standpoint, predicative MCIs, non-predicative MCIs, and quasi-predicative MCIs must be distinguished. Most of the modes of metaphorical conceptualization previously identified in cell biology and anatomy are also present in the more specific domain of neuroanatomy. Some MCIs and modes of conceptualization are, however, specific to elements of the regions studied. Furthermore, the modes of metaphorical conceptualization in French, English, and German are similar, but are expressed through lexical networks of MCIs of varying richness. In addition, since nominal composition is one of the characteristic features of German, the linguistic form of MCIs shows specific characteristics in that language. Our results highlight the metaphorical richness of neuroanatomy. While consistent with the results of earlier studies, they enrich the typology of MCIs and underline the lexical and cognitive complexity of conceptual metaphor.
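As a toy analogue of the annotate-then-query workflow just described, the sketch below marks MCIs with XML tags and retrieves them with an XPath query via Python's ElementTree (standing in for XQuery); the tag names, attributes, and sentence are invented, not the study's actual annotation scheme.

```python
import xml.etree.ElementTree as ET

# Invented example of a semantically annotated corpus fragment
corpus = """<text lang="en">
  <s>The dorsal <icm type="non-predicative" motif="animal">horn</icm>
  of the spinal cord <icm type="predicative" motif="container">houses</icm>
  sensory neurons.</s>
</text>"""

root = ET.fromstring(corpus)
# Retrieve all predicative metaphorical conceptualization indices
for icm in root.findall(".//icm[@type='predicative']"):
    print(icm.get("motif"), "->", icm.text)  # container -> houses
```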
Abstract:
The objective of this thesis is to demonstrate the important role of language in the play Death and the King's Horseman by the Nigerian author Wole Soyinka. The first chapter deals with the implications of writing a postcolonial text in the English language and revisits the linguistic debates of the 1950s and 1960s. In addition to English, this thesis examines the use of other forms of communication such as Nigerian Pidgin, local dialects, and Yoruba metaphors. Consequently, the intersection between language and culture becomes evident through the description of rituals. The last part of this thesis explores Soyinka's main objective of creating a "threnodic essence". Through the use of ritual masks, dance, and music, he develops a type of dialogue that goes beyond the limits of the written form and is accessible only to those equipped with Yoruba cultural sensibilities.
Abstract:
This work is aimed at building an adaptable frame-based system for processing Dravidian languages. There are about 17 languages in this family, spoken by the people of South India. Karaka relations are one of the most important features of Indian languages. They are the semantico-syntactic relations between verbs and other related constituents in a sentence. The karaka relations and surface case endings are analyzed for meaning extraction. This approach is comparable with the broad class of case-based grammars. The efficiency of this approach is put to the test in two applications: one is machine translation and the other is a natural language interface (NLI) for information retrieval from databases. The system mainly consists of a morphological analyzer, a local word grouper, a parser for the source language, and a sentence generator for the target language. Among its contributions, this work gives an elegant and compact account of the relation between vibhakthi and karaka roles in Dravidian languages. The same basic mapping also explains both simple and complex sentences in these languages, which suggests that the solution is not just ad hoc but has a deeper underlying unity. The methodology could be extended to other free word order languages. Since the frames designed for meaning representation are general, they are adaptable to other languages in this group and to other applications.
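A minimal sketch of the vibhakthi-to-karaka mapping idea is given below; the suffix inventory, role names, and example words are illustrative assumptions, not the thesis's actual rule set, which would be derived from a full morphological analysis.

```python
# Hypothetical mapping from surface case endings (vibhakthi) to karaka roles
VIBHAKTHI_TO_KARAKA = {
    "":     "karta",       # nominative -> agent
    "e":    "karma",       # accusative -> object
    "inde": "karana",      # instrumental -> instrument
    "ku":   "sampradana",  # dative -> recipient
}

def karaka_role(noun: str, suffix: str) -> tuple[str, str]:
    """Return (stem, karaka role) for a case-marked noun."""
    stem = noun[: len(noun) - len(suffix)] if suffix else noun
    return stem, VIBHAKTHI_TO_KARAKA.get(suffix, "unknown")

# Fill the verb frame slots for a (pre-grouped) sentence's constituents
frame = dict(karaka_role(word, suffix) for word, suffix in
             [("raaman", ""), ("pustakame", "e")])
print(frame)  # {'raaman': 'karta', 'pustakam': 'karma'}
```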
Abstract:
A new procedure for the classification of lower-case English language characters is presented in this work. The character image is binarised and the binary image is further divided into sixteen smaller areas, called cells. Each cell is assigned a name depending upon the contour present in the cell and the occupancy of the image contour in the cell. A data reduction procedure called filtering is adopted to eliminate undesirable redundant information, reducing complexity during further processing steps. The filtered data is fed into a primitive extractor where extraction of primitives is done. Syntactic methods are employed for the classification of the character. A decision tree is used for the interaction of the various components in the scheme, like primitive extraction and character recognition. A character is recognized by the primitive-by-primitive construction of its description. Open-ended inventories are used for including variants of the characters and for adding new members to the general class. Computer implementation of the proposal is discussed at the end using handwritten character samples. Results are analyzed and suggestions for future studies are made. The advantages of the proposal are discussed in detail.
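The binarise-and-split step lends itself to a short sketch; the threshold value and the 4x4 cell layout below are assumptions for illustration, and the random image stands in for a scanned character.

```python
import numpy as np

def to_cells(gray: np.ndarray, threshold: int = 128) -> list[np.ndarray]:
    """Binarise a grayscale character image and cut it into 16 cells."""
    binary = (gray < threshold).astype(np.uint8)  # ink = 1, paper = 0
    rows = np.array_split(binary, 4, axis=0)
    return [cell for row in rows for cell in np.array_split(row, 4, axis=1)]

def cell_occupancy(cell: np.ndarray) -> float:
    """Fraction of a cell covered by the image contour."""
    return float(cell.mean())

img = np.random.default_rng(1).integers(0, 256, (32, 32))
cells = to_cells(img)
print(len(cells), [round(cell_occupancy(c), 2) for c in cells[:4]])
```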
Abstract:
This is a Named Entity based Question Answering System for the Malayalam language. Although a vast amount of information is available today in digital form, no effective information access mechanism exists to provide humans with convenient access to it. Information Retrieval and Question Answering systems are the two mechanisms currently available for information access. Information Retrieval systems typically return a long list of documents in response to a user's query, which the user must skim to determine whether they contain an answer. A Question Answering system, by contrast, allows the user to state an information need as a natural language question and returns the most appropriate answer in a word, a sentence, or a paragraph. This system is based on Named Entity tagging and question classification. Document tagging extracts useful information from the documents, which is used in finding the answer to the question. Question classification extracts useful information from the question to determine its type and the way in which it is to be answered. Various machine learning methods are used to tag the documents, and a rule-based approach is used for question classification. Malayalam belongs to the Dravidian family of languages and is one of the four major languages of this family. It is one of the 22 Scheduled Languages of India, with official language status in the state of Kerala, and is spoken by 40 million people. Malayalam is a morphologically rich, agglutinative language with a relatively free word order. It also has a productive morphology that allows the creation of complex words which are often highly ambiguous. Document tagging tools such as a part-of-speech tagger, phrase chunker, Named Entity tagger, and compound word splitter were developed as part of this research work; no such tools were previously available for Malayalam. Finite State Transducers, high-order Conditional Random Fields, Artificial Immune System principles, and Support Vector Machines are the techniques used in the design of these document preprocessing tools. This research work describes how Named Entities are used to represent the documents. Single-sentence questions are used to test the system. The overall precision and recall obtained are 88.5% and 85.9% respectively. This work can be extended in several directions: the coverage of non-factoid questions can be increased, and the system can be extended to open-domain applications. Reference resolution and word sense disambiguation techniques are suggested as future enhancements.
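The question classification step maps a question to the Named Entity type expected as its answer; the toy sketch below shows the rule-based idea with invented English patterns for readability, whereas the real system works on Malayalam question words.

```python
import re

# Hypothetical rules: question pattern -> expected answer entity type
RULES = [
    (r"\bwho\b",             "PERSON"),
    (r"\bwhere\b",           "LOCATION"),
    (r"\bwhen\b",            "DATE"),
    (r"\bhow (many|much)\b", "NUMBER"),
]

def classify_question(question: str) -> str:
    """Return the named-entity type expected as the answer."""
    q = question.lower()
    for pattern, answer_type in RULES:
        if re.search(pattern, q):
            return answer_type
    return "OTHER"

print(classify_question("Who wrote this grammar of Malayalam?"))  # PERSON
```

Answer extraction can then be reduced to finding, in the tagged documents, entities whose type matches the classifier's output.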
Abstract:
Malayalam is one of the 22 scheduled languages of India, with more than 130 million speakers. This paper presents a report on the development of a speaker-independent continuous speech transcription system for Malayalam. The system employs Hidden Markov Models (HMM) for acoustic modeling and Mel Frequency Cepstral Coefficients (MFCC) for feature extraction. It was trained with 21 male and female speakers in the age group of 20 to 40 years. The system obtained a word recognition accuracy of 87.4% and a sentence recognition accuracy of 84% when tested with a set of continuous speech data.
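A minimal sketch of the MFCC + HMM pipeline is shown below, using librosa for features and hmmlearn for the acoustic model; the model sizes, the synthetic audio, and the single-model setup are placeholders, not the paper's configuration (a real recognizer trains one HMM per acoustic unit and scores utterances against each).

```python
import numpy as np
import librosa
from hmmlearn import hmm

sr = 16000
# 1 second of synthetic audio standing in for a recorded utterance
y = np.random.default_rng(2).normal(0, 0.1, sr).astype(np.float32)

# 13 Mel Frequency Cepstral Coefficients per frame, frames as rows
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

# Gaussian HMM acoustic model; recognition compares per-model likelihoods
model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=10)
model.fit(mfcc)
print("log-likelihood:", model.score(mfcc))
```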
Abstract:
Software Defined Radio (SDR) hardware platforms use parallel architectures. Current approaches to developing applications (such as WLAN) for these platforms are complex, because developers must describe an application together with the hardware specifics relevant to parallelism, such as mapping and scheduling. To reduce this complexity, we have developed a new programming approach for SDR applications, called Virtual Radio Engine (VRE). VRE defines a language for describing applications, and a tool chain consisting of a compiler kernel and other tools (such as a code generator) to generate executables. The thesis presents this concept and describes the language and the compiler kernel developed by the author. The language is hardware-independent, i.e., developers describe tasks and the dependencies between them. The compiler kernel performs automatic parallelization, i.e., it is capable of transforming a hardware-independent program into a hardware-specific program by resolving the hardware specifics, in particular mapping, scheduling, and synchronization. Thus, VRE simplifies programming, as developers no longer resolve hardware specifics manually.
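To illustrate the kind of mapping and scheduling such a compiler kernel automates, here is a toy sketch: tasks and dependencies are the hardware-independent input, and a greedy list scheduler maps each task to the earliest available core. The task graph, costs, and heuristic are invented assumptions, not VRE's actual algorithm.

```python
from graphlib import TopologicalSorter

tasks = {"fft": 4, "demod": 2, "decode": 3}      # task -> cost (invented)
deps  = {"demod": {"fft"}, "decode": {"demod"}}  # task -> prerequisites

def schedule(tasks, deps, n_cores=2):
    """Greedily map each task (in dependency order) to the earliest core."""
    finish, core_free = {}, [0.0] * n_cores
    for t in TopologicalSorter(deps).static_order():
        ready = max((finish[d] for d in deps.get(t, ())), default=0.0)
        core = min(range(n_cores), key=lambda c: max(core_free[c], ready))
        start = max(core_free[core], ready)
        finish[t] = core_free[core] = start + tasks[t]
        print(f"{t}: core {core}, start {start}, end {finish[t]}")
    return finish

schedule(tasks, deps)
```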
Abstract:
Conceptual Information Systems provide a multi-dimensional, conceptually structured view of data stored in relational databases. By restricting the expressiveness of the retrieval language, they allow sets of related queries to be visualized in conceptual hierarchies, hence supporting the search for something of which one has no precise description, but only a vague idea. Information Retrieval is considered as the process of finding specific objects (documents etc.) out of a large set of objects which fit some description. In some data analysis and knowledge discovery applications, the dual task is of interest: the analyst needs to determine, for a subset of objects, a description of that subset. In this paper we discuss how Conceptual Information Systems can be extended to support this second task as well.
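In Formal Concept Analysis terms, the "dual task" amounts to computing the intent of an object set: the attributes its objects all share. The sketch below shows this on an invented toy context; the object and attribute names are placeholders.

```python
context = {  # object -> set of attributes (invented toy context)
    "doc1": {"xml", "retrieval"},
    "doc2": {"xml", "retrieval", "lattice"},
    "doc3": {"xml", "lattice"},
}

def intent(objects, context):
    """Attributes common to all given objects: their description."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def extent(attributes, context):
    """All objects having every given attribute."""
    return {o for o, attrs in context.items() if attributes <= attrs}

subset = {"doc1", "doc2"}
desc = intent(subset, context)
print(desc)                   # {'xml', 'retrieval'} -- the description
print(extent(desc, context))  # objects matching it (the closure)
```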
Abstract:
Cooperative behaviour of agents within highly dynamic and nondeterministic domains is an active field of research. In particular, establishing highly responsive teamwork, where agents are able to react to dynamic changes in the environment while facing unreliable communication and sensory noise, is an open problem. Moreover, modelling such responsive, cooperative behaviour is difficult. In this work, we specify a novel model for cooperative behaviour geared towards highly dynamic domains. In our approach, agents estimate each other's decisions and correct these estimations once they receive contradictory information. We aim at a comprehensive approach to agent teamwork featuring intuitive modelling capabilities for multi-agent activities, abstractions over activities and agents, and clear operational semantics for the new model. This work encompasses a complete specification of the new language, ALICA.
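A toy sketch of the estimate-and-correct idea follows: each agent assumes a default decision for its teammates and overwrites that estimate when a (possibly late) message contradicts it. The task names and the default rule are invented illustrations, not ALICA's actual semantics.

```python
class Agent:
    def __init__(self, name, team):
        self.name = name
        # Default estimate of every teammate's decision (assumption)
        self.estimates = {mate: "defend" for mate in team if mate != name}

    def local_decision(self):
        """Decide own task given the current teammate estimates."""
        return "attack" if "attack" not in self.estimates.values() else "defend"

    def on_message(self, mate, actual_task):
        """Correct an estimate once contradictory information arrives."""
        if self.estimates.get(mate) != actual_task:
            self.estimates[mate] = actual_task

a = Agent("a1", ["a1", "a2"])
print(a.local_decision())     # 'attack' (estimates that a2 defends)
a.on_message("a2", "attack")  # contradictory report from a2
print(a.local_decision())     # 'defend' (corrected estimate)
```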
Abstract:
Ontologies have been established for knowledge sharing and are widely used as a means of conceptually structuring domains of interest. With the growing use of ontologies, the problem of overlapping knowledge in a common domain becomes critical. In this short paper, we address two methods for merging ontologies based on Formal Concept Analysis: FCA-Merge and ONTEX. --- FCA-Merge is a method for merging ontologies following a bottom-up approach which offers a structural description of the merging process. The method is guided by application-specific instances of the given source ontologies. We apply techniques from natural language processing and Formal Concept Analysis to derive a lattice of concepts as the structural result of FCA-Merge. The generated result is then explored and transformed into the merged ontology with human interaction. --- ONTEX is a method for systematically structuring the top-down level of ontologies. It is based on an interactive, top-down knowledge acquisition process, which ensures that the knowledge engineer considers all possible cases while avoiding redundant acquisition. The method is especially suited to creating or merging the top part(s) of ontologies, where high accuracy is required, and to supporting the merging of two (or more) ontologies at that level.
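The bottom-up intuition behind FCA-Merge can be sketched in a few lines: concepts from both source ontologies are treated as attributes of shared instance documents, and attributes with identical extents in the merged context become merge candidates. The documents and concept names below are invented, and the real method derives a full concept lattice rather than this pairwise check.

```python
from itertools import combinations

# Instance documents annotated with concepts of two source ontologies
ctx1 = {"d1": {"Car"}, "d2": {"Car", "Truck"}, "d3": {"Truck"}}
ctx2 = {"d1": {"Vehicle"}, "d2": {"Vehicle", "Lorry"}, "d3": {"Lorry"}}

# Merged context: each document carries attributes from both ontologies
merged = {d: ctx1[d] | ctx2[d] for d in ctx1}

def extent(attrs):
    """Documents exhibiting all the given concept attributes."""
    return {d for d, a in merged.items() if attrs <= a}

# Concepts with identical extents describe the same thing and are
# candidates for merging (here: Car ~ Vehicle and Truck ~ Lorry).
attrs = sorted(set().union(*merged.values()))
for a, b in combinations(attrs, 2):
    if extent({a}) == extent({b}):
        print(f"merge candidate: {a} ~ {b}")
```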
Abstract:
Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. In recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous interconnection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process in which the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. In this way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations designed by us, called Rule-based Genetic Programming (RBGP, eRBGP). We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and, in most cases, was superior to the other representations.
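The select-mutate-crossover loop at the core of the approach fits in a bare-bones sketch: the "programs" below are placeholder bit-strings scored by a dummy fitness function, standing in for the distributed programs that the thesis rates in randomized network simulations.

```python
import random

random.seed(3)
TARGET = [1] * 16  # stands in for the desired global behavior

def fitness(program):  # higher = closer to the goal behavior
    return sum(g == t for g, t in zip(program, TARGET))

def mutate(program):
    i = random.randrange(len(program))
    return program[:i] + [1 - program[i]] + program[i + 1:]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)  # select the most promising
    parents = pop[:10]
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
print(fitness(max(pop, key=fitness)))  # best score after evolution
```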