959 results for VHDL (Computer hardware description language)
Abstract:
Long-range non-covalent interactions play a key role in the chemistry of natural polyphenols. We have previously proposed a description of supramolecular polyphenol complexes by the B3P86 density functional coupled with corrections for dispersion. Here we couple the B3P86 functional with the D3 dispersion correction and systematically assess the accuracy of the resulting B3P86-D3 model on the well-known S66, HB23, NCCE31, and S12L datasets for non-covalent interactions. Furthermore, the association energies of these complexes were carefully compared to those obtained with other dispersion-corrected functionals, such as B(3)LYP-D3, BP86-D3 or B3P86-NL. Finally, this set of models was also applied to a database of seven non-covalent polyphenol complexes of particular interest.
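As a rough illustration of what a pairwise dispersion correction such as D3 adds on top of a functional like B3P86, the sketch below evaluates the leading -s6*C6/r^6 term with a zero-damping function. It is a minimal sketch under stated assumptions: the C6 coefficients and cutoff radii are placeholders, and the real D3 scheme also derives C6 from coordination numbers and includes r^-8 and three-body terms.

```python
import numpy as np

def zero_damping(r, r0, s_r=1.0, alpha=14):
    """D3-style zero-damping: suppresses the correction at short range."""
    return 1.0 / (1.0 + 6.0 * (r / (s_r * r0)) ** (-alpha))

def pairwise_dispersion(coords, c6, r0, s6=1.0):
    """Leading -s6 * C6_AB / r_AB^6 pairwise sum (placeholder C6, R0)."""
    e_disp = 0.0
    n = len(coords)
    for a in range(n):
        for b in range(a + 1, n):
            r = np.linalg.norm(coords[a] - coords[b])
            e_disp -= s6 * c6[a, b] / r**6 * zero_damping(r, r0[a, b])
    return e_disp

# Two-"atom" toy system with hypothetical coefficients (atomic units).
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 6.0]])
c6 = np.full((2, 2), 40.0)   # hypothetical C6 dispersion coefficient
r0 = np.full((2, 2), 5.0)    # hypothetical cutoff radius
print(pairwise_dispersion(coords, c6, r0))
```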
Abstract:
Dissertation (master's)—Universidade de Brasília, Instituto de Artes, Programa de Pós-Graduação em Arte, 2016.
Abstract:
In 1526, Hassan El Wazzan / Jean-Léon l'Africain completed in Rome the Italian manuscript of the Libro della Cosmographia Dell'Africa, a major work regarded during the Renaissance as one of the principal sources of European knowledge about the African continent. In 1550, a Venetian scholar named Jean-Baptiste Ramusio published Jean-Léon's Italian text in a collection of travel accounts. The edition, entitled Descrizione dell'Africa (Description of Africa), differs significantly from the original manuscript. Ramusio made numerous modifications, aiming to deliver a work that met European expectations and matched the image the Christian West held of the Muslim world. This version served as the source text for the many translations that followed. The first French translation, dating from 1556, was produced by Jean Temporal, a Lyon publisher and printer. The second, published in 1956 and reissued in 1980, is the work of Alexis Épaulard; it draws partly on the original manuscript, but also on Ramusio's printed version. Our work compares the two French translations against Ramusio's edition. We attempt to show that the two French translators intervened heavily in the translated text in order to serve expansionist and colonialist aims. Our research highlights the translators' stances and the ideologies that affect the reception of the book. To this end, we analyze the translations at the textual and paratextual levels while situating them in the historical and politico-ideological context surrounding the publication of these two French translations. We pay particular attention to word choice, allusions, and the strategies used by the translators and publishers. Maria Tymoczko's work on translation and political engagement provides the theoretical framework for this research, along with Edward Said's writings on Orientalism and postcolonialism. The research shows that these French translations are steeped in a Eurocentric ideology aimed at bolstering hegemonic ambitions on African soil.
MINING AND VERIFICATION OF TEMPORAL EVENTS WITH APPLICATIONS IN COMPUTER MICRO-ARCHITECTURE RESEARCH
Abstract:
Computer simulation programs are essential tools for scientists and engineers to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. In addition to the exigent demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and ridden with unknowns that will be discovered by developers in an iterative process. To manage such complexity, advanced verification techniques for continually matching the intended model to the implemented model are necessary. Therefore, the main goal of this research work is to design a useful verification and validation framework that is able to identify model representation errors and is applicable to generic simulators. The framework that was developed and implemented consists of two parts. The first part is First-Order Logic Constraint Specification Language (FOLCSL) that enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part consists of mining the temporal flow of events using a newly developed representation called State Flow Temporal Analysis Graph (SFTAG). While the first part seeks an assurance of implementation correctness by checking that the model invariants hold, the second part derives an extended model of the implementation and hence enables a deeper understanding of what was implemented. The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms. This work improves the computer architecture research and verification processes as shown by the case studies and experiments that have been conducted.
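The abstract does not spell out FOLCSL syntax, but the core idea of compiling an invariant into a checker that scans the simulator's event trace can be sketched in a few lines. This is a minimal sketch under assumptions: the event format and the invariant ("every cache miss is eventually followed by a memory fill for the same address") are both hypothetical, not the actual FOLCSL tool chain.

```python
# Minimal sketch of trace-based invariant checking (hypothetical event
# format; this is not the FOLCSL translator itself).

def check_eventually_followed(trace, trigger, response, key="addr"):
    pending = set()  # trigger keys still awaiting their response
    for event in trace:  # event: dict with "type" plus payload fields
        if event["type"] == trigger:
            pending.add(event[key])
        elif event["type"] == response:
            pending.discard(event[key])
    return pending  # non-empty set => invariant violated for those keys

trace = [
    {"type": "cache_miss", "addr": 0x1000},
    {"type": "mem_fill",  "addr": 0x1000},
    {"type": "cache_miss", "addr": 0x2000},
]
violations = check_eventually_followed(trace, "cache_miss", "mem_fill")
assert violations == {0x2000}  # the second miss was never filled
```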
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if the hardware acceleration method is applied to the element that incurs the performance overhead. The concepts presented in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on the Integrated Circuit (IC) workflow. Hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the Co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
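The first step of the workflow, profiling the software to find the hotspot, can be reproduced with stock tooling. Below is a minimal sketch in Python: `dct_8x8` is a hypothetical stand-in for an H.264 kernel (the study profiles a real CODEC), and `cProfile` plays the role of the profiling tools mentioned above.

```python
import cProfile
import pstats

def dct_8x8(block):
    # Toy stand-in for a compute-heavy CODEC kernel: a deliberately
    # slow O(n^4) accumulation, purely illustrative.
    return [[sum(block[i][j] * (u + v + 1)
                 for i in range(8) for j in range(8))
             for v in range(8)] for u in range(8)]

def encode(frames):
    for frame in frames:
        dct_8x8(frame)

frames = [[[1] * 8 for _ in range(8)] for _ in range(1000)]
profiler = cProfile.Profile()
profiler.enable()
encode(frames)
profiler.disable()

# Rank functions by cumulative time; the top entries are the hotspot
# candidates to move into an FPGA accelerator.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```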
Abstract:
Support Vector Machines (SVMs) are widely used classifiers for detecting physiological patterns in Human-Computer Interaction (HCI). Their success is due to their versatility, robustness, and the wide availability of free dedicated toolboxes. Frequently in the literature, insufficient details about the SVM implementation and/or parameter selection are reported, making it impossible to reproduce the analyses and results. In order to perform an optimized classification and report a proper description of the results, a comprehensive critical overview of the application of SVMs is necessary. The aim of this paper is to provide a review of the usage of SVMs in the determination of brain and muscle patterns for HCI, focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, an overview of the basic principles of SVM theory is outlined, together with a description of several relevant implementations from the literature. Furthermore, details of the reviewed papers are listed in tables, and statistics on SVM use in the literature are presented. The suitability of SVMs for HCI is discussed, and critical comparisons with other classifiers are reported.
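As a concrete counterpoint to the under-reporting criticized here, a classification pipeline can state every choice needed for reproduction: scaling, kernel, hyperparameter grid, and cross-validation scheme. The sketch below uses scikit-learn on synthetic data standing in for EEG/EMG feature vectors; it illustrates the reporting style, not any specific reviewed study.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))   # stand-in for EEG/EMG feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Report everything needed for reproducibility: scaling, kernel,
# the C/gamma grid, and the cross-validation scheme.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(
    pipe,
    {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]},
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```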
Abstract:
This work is a description of Tajio, a Western Malayo-Polynesian language spoken in Central Sulawesi, Indonesia. It covers the essential aspects of Tajio grammar without being exhaustive. Tajio has a medium-sized phoneme inventory consisting of twenty consonants and five vowels. The language does not have lexical (word) stress; rather, it has a phrasal accent. This phrasal accent regularly occurs on the penultimate syllable of an intonational phrase, rendering this syllable auditorily prominent through a pitch rise. Possible syllable structures in Tajio are (C)V(C). CVN structures are allowed as closed syllables, but CVN syllables in word-medial position are not frequent. As in other languages of the area, the only consonant sequences allowed in native Tajio words are sequences of a nasal followed by a homorganic obstruent. The homorganic nasal-obstruent sequences found in Tajio can occur word-initially and word-medially but never in word-final position. As in many Austronesian languages, word class classification in Tajio is not straightforward. The classification of words in Tajio must be carried out on two levels: the morphosyntactic level and the lexical level. The open word classes in Tajio consist of nouns and verbs. Verbs are further divided into intransitive verbs (dynamic intransitive verbs and statives) and dynamic transitive verbs. Based on their morphological potential, lexical roots in Tajio fall into three classes: single-class roots, dual-class roots and multi-class roots. There are two basic transitive constructions in Tajio: Actor Voice and Undergoer Voice, in which the actor or the undergoer argument, respectively, serves as subject. Tajio shares many characteristics with symmetrical voice languages, yet it is not fully symmetrical, as arguments in AV and UV are not equally marked. Neither subjects nor objects are marked in AV constructions. In UV constructions, however, subjects are unmarked while objects are marked either by prefixation or cliticization. Evidence from relativization, control and raising constructions supports the analysis that AV and UV are in fact transitive, with subject arguments and object arguments behaving alike in both voices. Only the subject can be relativized, controlled, raised or function as the implicit subject of subjectless adverbial clauses. In contrast, the objects of AV and UV constructions do not exhibit these features. Tajio is a predominantly head-marking language with basic A-V-O constituent order. V and O form a constituent, and the subject can either precede or follow this complex. Thus, basic word order is S-V-O or V-O-S. Subject as well as non-subject arguments may be omitted when contextually specified. Verbs are marked for voice and mood, the latter of which is obligatory. The two mood values distinguished are realis and non-realis. Depending on the type of predicate involved in clause formation, three clause types can be distinguished: verbal clauses, existential clauses and non-verbal clauses. Tajio has a small number of multi-verbal structures that appear to qualify as serial verb constructions. SVCs in Tajio always include a motion verb or a directional.
Abstract:
A large percentage of Vanier College's technology students do not attain their College degrees within the scheduled three years of their program. A closer investigation of the problem revealed that in many of these cases the students had completed all of their program's professional courses but not all of the required English and/or Humanities courses. Fortunately, most of these students do extend their stay at the College for the one or more semesters required for graduation, although some choose to go on into the workforce without returning to complete the missing English and/or Humanities courses and without their College degrees. The purpose of this research was to discover whether there was any significant measure of association between a student's family linguistic background, family cultural background, high school average, and/or College English Placement Test results and his or her likelihood of succeeding in his or her English and/or Humanities courses within the scheduled three years of the program. To obtain a more uniform sample, the research was limited to the 'hard' technologies, where students work hands-on with hardware and/or computers and tend to have low overall research and writing requirements; this restriction controlled for demographic differences between 'hard' and 'soft' technologies (notably gender ratios and average student ages in specific programs) as well as for program differences (such as writing requirements and the types of practical skill activities required). Based on a review of the current literature and observations made in one of the hard technology programs at Vanier College, eight research questions were developed. These questions were designed to examine different aspects of success in the English and Humanities courses, such as failure and completion rates and the number of courses remaining after the end of the fifth semester, and to examine how the students assessed their own ability to communicate in English. The eight research questions were broken down into a total of 54 hypotheses. The high number of hypotheses was required to address a total of seven independent variables: primary home language, high school language of instruction, student's place of birth (Canada, not Canada), student's parents' place of birth (both born in Canada, not both born in Canada), high school average and English placement level (as determined by the College English Entry Test); and eleven dependent variables: number of English courses completed, number of English courses failed, whether all English courses were completed by the end of the 5th semester (yes, no), number of Humanities courses completed, number of Humanities courses failed, whether all Humanities courses were completed by the end of the 5th semester (yes, no), the total number of English and Humanities courses left, and the students' assessments of their ability to speak, read and write in English. The data required to address the hypotheses were collected from two sources: the students themselves and the College. Fifth- and sixth-semester students from the Building Engineering Systems, Computer and Digital Systems, Computer Science and Industrial Electronics Technology programs were surveyed to collect personal information, including family cultural and linguistic history, current language usage, high school language of instruction, perceived fluency in speaking, reading and writing English, and perceived difficulty in completing English and Humanities courses.
The College was able to provide current academic information on each of the students, including copies of college program planners and transcripts, and high school transcripts for students who had attended a high school in Quebec. Quantitative analyses were performed on the data using the SPSS statistical analysis program. Of the fifty-four hypotheses analysed, the results supported the research hypotheses in fourteen cases; in the other forty cases the null hypotheses had to be accepted. One finding was a strong significant association between a student's primary home language and place of birth and his or her perception of his or her ability to communicate in English (speak, read, and write): both students whose primary home language was not English and students who were not born in Canada considered themselves, on average, to be weaker in these skills than did students whose primary home language was English. Although this finding was noteworthy, the two most significant findings were the association between a student's English entry placement level and the number of English courses failed, and the association between the parents' place of birth and the student's likelihood of succeeding in both his or her English and Humanities courses. According to the results, the mean number of English courses failed by students placed in the lowest entry level of College English differed significantly from that of students placed in any of the other entry-level English courses. In this sample, students placed in the lowest entry level of College English failed, on average, at least three times as many English courses as those placed in any of the other English entry-level courses. These results are significant enough that they will be brought to the attention of the appropriate College administration. The results also appeared to indicate that the most significant determining factor in a student's likelihood of completing his or her English and Humanities courses is his or her parents' place of birth (both born in Canada or not both born in Canada). Students who had at least one parent who was not born in Canada would, on average, fail a significantly higher number of English courses, be significantly more likely to still have at least one English course left to complete by the end of the 5th semester, fail a significantly higher number of Humanities courses, be significantly more likely to still have at least one Humanities course left to complete by the end of the 5th semester, and have significantly more combined English and Humanities courses left to complete at the end of their 5th semester than students with both parents born in Canada. This strong association between students' parents' place of birth and their likelihood of succeeding in their English and Humanities courses within the three years of their program appears to indicate that acculturation may be a more significant factor than either language or high school average, for which no significant association was found with any of the English- and Humanities-related dependent variables. Although the sample size for this research was only 60 students and more research needs to be conducted in this area to see whether these results hold for other groups within the College, the results are still significant.
If the College can identify, at admission, the students who are more likely to have difficulty completing their English and Humanities courses, it will have the opportunity to intercede during or before the first semester and offer these students the support they require to increase their chances of success, whether that means classes or courses designed to meet their specific needs, special mentoring, tutoring or other forms of support. With the necessary support, the identified students will have a greater opportunity to complete their programs successfully within the scheduled three years, while the College will have improved its capacity to meet the needs of its students.
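The thesis ran its association tests in SPSS; the same kind of test is straightforward to reproduce elsewhere. Below is a minimal sketch with scipy, using a hypothetical 2x2 contingency table mirroring one of the hypotheses (the cell counts are invented, not the study's data):

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table:
# rows = parents' place of birth (both born in Canada / not both),
# cols = all English courses done by end of 5th semester (yes / no).
table = [[22, 6],
         [14, 18]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
# p < 0.05 would indicate a significant association between the two
# variables, as reported for parents' place of birth in the study.
```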
Abstract:
This paper presents a study in a field poorly explored for the Portuguese language: modality and its automatic tagging. Our main goal was to find a set of attributes for the creation of automatic taggers with improved performance over the bag-of-words (bow) approach. Performance was measured using precision, recall and F1. Because this is a relatively unexplored field, the study covers the creation of the corpus (composed of eleven verbs), the use of a parser to extract syntactic and semantic information from the sentences, and a machine learning approach to identify modality values. Based on three different sets of attributes (from the trigger itself, from the trigger's path in the parse tree, and from the context), the system creates a tagger for each verb, achieving (for almost every verb) an improvement in F1 when compared to the traditional bow approach.
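As a sketch of the experimental setup, the baseline the paper compares against, a bag-of-words tagger scored with precision, recall, and F1, can be put together with scikit-learn. Everything here is hypothetical stand-in data (the paper's corpus, parser features, and learner are not reproduced); the richer taggers would append trigger-path and context features to the same pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

# Toy sentences containing a modal trigger, labelled with a modality
# value; the real corpus covers eleven verbs and parsed features.
sentences = ["ele pode chegar tarde", "ele pode entrar agora",
             "pode ser verdade", "ela deve sair ja"] * 10
labels = ["epistemic", "deontic", "epistemic", "deontic"] * 10

X_train, X_test, y_train, y_test = train_test_split(
    sentences, labels, test_size=0.25, random_state=0)

bow = CountVectorizer()
clf = LogisticRegression(max_iter=1000)  # stand-in learner
clf.fit(bow.fit_transform(X_train), y_train)
pred = clf.predict(bow.transform(X_test))

p, r, f1, _ = precision_recall_fscore_support(
    y_test, pred, average="macro", zero_division=0)
print(f"bow baseline: P={p:.2f} R={r:.2f} F1={f1:.2f}")
```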
Abstract:
In the last decades, Artificial Intelligence has witnessed multiple breakthroughs in deep learning. In particular, purely data-driven approaches have opened the way to a wide variety of successful applications due to the large availability of data. Nonetheless, the integration of prior knowledge is still required to compensate for specific issues like lack of generalization from limited data, fairness, robustness, and biases. In this thesis, we analyze the methodology of integrating knowledge into deep learning models in the field of Natural Language Processing (NLP). We start by remarking on the importance of knowledge integration. We highlight the possible shortcomings of these approaches and investigate the implications of integrating unstructured textual knowledge. We introduce Unstructured Knowledge Integration (UKI) as the process of integrating unstructured knowledge into machine learning models. We discuss UKI in the field of NLP, where knowledge is represented in a natural language format. We identify UKI as a complex process comprising multiple sub-processes, different knowledge types, and knowledge integration properties to guarantee. We remark on the challenges of integrating unstructured textual knowledge and draw connections with well-known research areas in NLP. We provide a unified vision of structured knowledge extraction (KE) and UKI by identifying KE as a sub-process of UKI. We investigate some challenging scenarios where structured knowledge is not a feasible prior assumption and formulate each task from the point of view of UKI. We adopt simple yet effective neural architectures and discuss the challenges of such an approach. Finally, we identify KE as a form of symbolic representation. From this perspective, we remark on the need to define sophisticated UKI processes to verify the validity of knowledge integration. To this end, we foresee frameworks capable of combining symbolic and sub-symbolic representations for learning as a solution.
Abstract:
One of the most visionary goals of Artificial Intelligence is to create a system able to mimic and eventually surpass the intelligence observed in biological systems, including, ambitiously, the one observed in humans. The main distinctive strength of humans is their ability to build a deep understanding of the world by learning continuously and drawing from their experiences. This ability, which is found in various degrees in all intelligent biological beings, allows them to adapt and properly react to changes by incrementally expanding and refining their knowledge. Arguably, achieving this ability is one of the main goals of Artificial Intelligence and a cornerstone towards the creation of intelligent artificial agents. Modern Deep Learning approaches have allowed researchers and industries to achieve great advances towards the resolution of many long-standing problems in areas like Computer Vision and Natural Language Processing. However, while this current age of renewed interest in AI has enabled the creation of extremely useful applications, a concerningly limited effort is being directed towards the design of systems able to learn continuously. The biggest problem hindering an AI system from learning incrementally is the catastrophic forgetting phenomenon. This phenomenon, discovered in the 1990s, naturally occurs in Deep Learning architectures when classic learning paradigms are applied while learning incrementally from a stream of experiences. This dissertation revolves around Continual Learning, a sub-field of Machine Learning research that has recently made a comeback following the renewed interest in Deep Learning approaches. The work focuses on a comprehensive view of continual learning, considering algorithmic, benchmarking, and applicative aspects of the field. The dissertation also touches on community aspects such as the design and creation of research tools aimed at supporting Continual Learning research, and the theoretical and practical aspects of public competitions in this field.
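One standard mitigation for catastrophic forgetting, and a common baseline in the Continual Learning literature, is experience replay: keep a small memory of past examples and mix them into each new training batch. Below is a minimal sketch using reservoir sampling; the class name and training loop are illustrative assumptions, not taken from the dissertation.

```python
import random

class ReservoirReplayBuffer:
    """Fixed-size memory of past examples kept via reservoir sampling,
    so every example seen so far has equal probability of being stored."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))

# Training-loop sketch: mix current-experience data with replayed
# memories so updates on new tasks do not erase old knowledge.
buffer = ReservoirReplayBuffer(capacity=200)
for experience in range(5):           # stream of tasks/experiences
    for step in range(100):
        current = (experience, step)  # stand-in for an (x, y) pair
        batch = [current] + buffer.sample(3)
        # model.train_on(batch)       # hypothetical update call
        buffer.add(current)
```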
Abstract:
This thesis provides a corpus-assisted pragmatic investigation of three Japanese expressions commonly signalled as apologetic, namely gomen, su(m)imasen and mōshiwake arimasen, which can be roughly translated into English as '(I'm) sorry'. The analysis is based on a web corpus of 306,670 tokens collected from the Q&A website Yahoo! Chiebukuro, which is examined by combining quantitative (statistical) and qualitative (traditional close reading) methods. By adopting a form-to-function approach, the study aims to shed light on three main topics of interest: the pragmatic functions of apology-like expressions, the discursive strategies they co-occur with, and the behaviours that warrant them. The overall findings reveal that apology-like expressions are multifunctional devices whose meanings extend well beyond 'apology' alone. These meanings are affected by a number of discursive strategies that can either increase or decrease the perceived (im)politeness level of the speech act to serve interactants' face needs and communicative goals. The study also identifies a variety of behaviours that people frame as violations, not necessarily because they are actually face-threatening to the receiver, but because doing so is functional to the projection of the apologiser as a moral persona. An additional finding that emerged from the analysis is the pervasiveness of reflexive usages of apology-like expressions, which are often employed metadiscursively to convey, negotiate and challenge opinions on how language should be used. To conclude, the study provides a unique insight into the use of three expressions whose pragmatic meanings are more varied than anticipated. The findings reflect the use of (im)politeness in an online and non-Western context and, hopefully, represent a step towards a more inclusive notion of 'apologies' and related speech acts.
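The form-to-function retrieval step of such a corpus study can be approximated with plain regular expressions. A minimal sketch over a romanized stand-in corpus (the thesis works on Japanese-script Yahoo! Chiebukuro text, so real patterns would target kana/kanji variants):

```python
import re
from collections import Counter

# Romanized stand-in corpus; purely illustrative.
corpus = ("gomen ne. suimasen, osoku natte. sumimasen ga... "
          "moushiwake arimasen deshita. gomen gomen.")

patterns = {
    "gomen": r"\bgomen\b",
    "su(m)imasen": r"\bsu(?:m)?imasen\b",  # matches sumimasen/suimasen
    "moushiwake arimasen": r"\bmoushiwake arimasen\b",
}
counts = Counter({name: len(re.findall(pat, corpus))
                  for name, pat in patterns.items()})
print(counts)  # token counts feed the quantitative side of the study
```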
Abstract:
Artificial Intelligence is reshaping the fashion industry in different ways. E-commerce retailers exploit their data through AI to enhance their search engines, make outfit suggestions and forecast the success of a specific fashion product. This is a challenging endeavour, however, as the data they possess is huge, complex and multi-modal. The most common way to search for fashion products online is by matching keywords with phrases in the product descriptions, which are often cluttered, inadequate and inconsistent across collections and sellers. A customer may also browse an online store's taxonomy, although this is time-consuming and does not guarantee finding relevant items. With the advent of Deep Learning architectures, particularly Vision-Language models, ad-hoc solutions have been proposed that model both the product image and the description to solve these problems. However, the suggested solutions do not effectively exploit the semantic and syntactic information of these modalities, nor the unique qualities and relations of clothing items. In this thesis, a novel approach is proposed to address these issues: images and text descriptions are modelled and processed as graphs in order to exploit the relations within and between the modalities, and specific techniques are employed to extract syntactic and semantic information. The results obtained show promising performance on different tasks when compared to current state-of-the-art deep learning architectures.
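As a sketch of the graph view of a product description, the snippet below builds a small attribute graph by hand with networkx; the thesis derives nodes and edges from syntactic and semantic analysis, so both the structure and the relation labels here are illustrative assumptions.

```python
import networkx as nx

# Hypothetical attribute graph for one product description: nodes are
# the head noun and its modifiers, edges carry a relation label.
description = "red cotton midi dress with floral print"
g = nx.Graph()
g.add_node("dress", role="head")
for attr in ["red", "cotton", "midi", "floral print"]:
    g.add_node(attr, role="modifier")
    g.add_edge("dress", attr, relation="attribute-of")

# A graph query replaces brittle keyword matching: list the attributes
# attached to a given head noun.
print(list(g.neighbors("dress")))
```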
Abstract:
This article shows that the term functionalism, very often understood as a single or uniform approach in linguistics, has to be understood from different perspectives. I start by presenting an opposition similar to the I-language vs. E-language distinction in Chomsky (1986). As in the former conception, language can be understood as an abstract model of a mind-internal mechanism responsible for language production and perception; as in the latter, it can be the description of the external use of language. Also, as with formalists, there are functionalists who look for cross-linguistic variation (and universals of language use) and functionalists who look for language-internal variation. It is also shown that functionalists can differ in the extent to which social variables are considered in the explanation of linguistic form.