29 results for Natural language techniques, Semantic spaces, Random projection, Documents
Abstract:
The use of transgenic mice is increasing in all fields of research, particularly in neuroscience, due to the widespread need for animal models to solve neurological and psychiatric medical conditions. Different methodologies have been tested in the last decades in order to produce such transgenic animals. The ultimate goal of this thesis is to compare different methods of random integration of a transgene into the genome of mice in terms of efficiency, stability of the transgene integration, number of animals required, and the labour intensity of each technique. We compared the most widely used method – pronuclear microinjection (PNMI) – with two other promising techniques – Testis Mediated Gene Transfer (TMGT) by electroporation and in vivo lentiviral transfection. The three techniques were performed using a reporter gene – green fluorescent protein (GFP) – whose transcription was driven by the constitutive cytomegalovirus (CMV) promoter. These three techniques were later reproduced using the tyrosine hydroxylase (TH) promoter and the neuronal manipulator, channelrhodopsin-2 fused to the enhanced yellow fluorescent reporter protein (ChR2-EYFP). The transgenic animal we sought to produce would express the light-driven channel only in dopaminergic cells, making it possible to specifically activate this group of neurons while simultaneously observing the behaviour of a freely moving animal. This is a very important tool in basic neuroscience research, since it helps to clarify the role of specific groups of neurons, map circuits in the brain, and consequently understand neurological diseases such as Parkinson’s disease or schizophrenia, where the function of certain types of neurons is affected. When comparing the three methods, using the reporter gene, PNMI resulted in 31.3% transgenic mice obtained, testis electroporation in 0%, and lentiviral injection in 0%. When using the gene of interest, the results obtained were, respectively, 18.8%, 63.9% and 0%.
Abstract:
Dissertation submitted to obtain the Degree of Doctor in Informatics Engineering
Abstract:
Dissertation submitted to obtain the Degree of Master in Electrical and Computer Engineering
Abstract:
Construction and Building Materials 51 (2014) 287–294
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Mesoamerican cultures had a strong tradition of written and pictorial manuscripts, called codices. Previous studies identified the use of Maya Blue, made from a mixture of indigo and a clay called palygorskite, forming a remarkably stable material in which, after heating, the dye is trapped inside the nanotubes of the clay. A bigger challenge, however, lies in the study of the yellows, since these civilizations might have used the same clay-dye approach to produce their yellow colorants. As a first step, two colorants (a flavonoid and a carotenoid) were identified by non-invasive methods: the flavonoid absorbed between 368 and 379 nm, while the carotenoid absorbed around 455 nm. A temperature study also established 140 °C as the desirable temperature for heating the samples without degrading them. FT-IR, conventional Raman and SERS allowed us to confirm the existence of a reaction between the dyes and the clays (palygorskite and kaolinite), although it remains difficult to understand it from a molecular point of view. As a second step, five species of Mexican dyes were selected on the basis of historical sources. The Maya yellow samples were produced by adapting the recipe proposed by Reyes-Valerio: supporting the yellow dyes extracted from the dried plants on the clays, with addition of water, and then heating at 140 °C. It was found that the addition of water to palygorskite increased the pH, deprotonating the molecules and having a clear negative effect on the color. A second recipe was therefore developed without the addition of water; however, water-based binders would still alter the color of the samples with palygorskite. In this case, kaolinite without heating yielded better results as a Maya yellow hybrid. Maya chemistry might therefore not have been the same for all colors: the Mesoamericans might have found that different dyes worked better for their purposes when matched with different clays. It was noticeable that reflectance and emission studies suffice for a clear distinction between flavonoids and carotenoids, but once clay is added, Raman techniques perform better. For this reason, conventional Raman and SERS were employed to create a database of the Mesoamerican dyestuffs for future identification.
Abstract:
This thesis introduces a novel conceptual framework to support the creation of knowledge representations based on enriched Semantic Vectors, using the classical vector space model extended with ontological support. One of the primary research challenges addressed here relates to the formalization and representation of document contents, where most existing approaches are limited to the explicit, word-based information in the document. This research explores how traditional knowledge representations can be enriched with implicit information derived from the complex relationships (semantic associations) modelled by domain ontologies, in addition to the information presented in the documents themselves. The relevant achievements pursued by this thesis are the following: (i) conceptualization of a model that enables the semantic enrichment of knowledge sources supported by domain experts; (ii) development of a method for extending the traditional vector space using domain ontologies; (iii) development of a method to support ontology learning, based on the discovery of new ontological relations expressed in non-structured information sources; (iv) development of a process to evaluate the semantic enrichment; (v) implementation of a proof of concept, named SENSE (Semantic Enrichment kNowledge SourcEs), which enabled validation of the ideas established under the scope of this thesis; (vi) publication of several scientific articles and support for 4 master’s dissertations carried out at the Department of Electrical and Computer Engineering of FCT/UNL. It is worth mentioning that the work developed under the semantic referential covered by this thesis reused relevant achievements from European research projects, in order to build on approaches which are considered scientifically sound and coherent and to avoid “reinventing the wheel”.
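As an illustration of the extended vector space idea described above, the following is a minimal Python sketch in which a classical term-frequency vector is enriched with implicit, ontology-derived concepts. The tiny ontology fragment, the propagation weight, and all names are hypothetical illustrations, not the thesis's actual SENSE implementation.

```python
# Minimal sketch: enrich a term-frequency vector with ontology-derived concepts.
from collections import Counter

# Hypothetical fragment of a domain ontology: concept -> semantically
# associated concepts (e.g. broader or related terms).
ONTOLOGY = {
    "dopamine": ["neurotransmitter", "reward"],
    "neuron": ["cell", "nervous_system"],
}

def semantic_vector(tokens, propagation=0.5):
    """Build an enriched semantic vector: explicit term frequencies plus
    implicit weight propagated to ontologically associated concepts."""
    vector = Counter(tokens)                       # explicit, word-based part
    for term, freq in list(vector.items()):
        for related in ONTOLOGY.get(term, []):
            vector[related] += propagation * freq  # implicit, ontology-derived part
    return dict(vector)

print(semantic_vector(["neuron", "dopamine", "neuron"]))
# {'neuron': 2, 'dopamine': 1, 'cell': 1.0, 'nervous_system': 1.0,
#  'neurotransmitter': 0.5, 'reward': 0.5}
```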
Abstract:
The extraction of relevant terms from texts is an extensively researched task in Text Mining. Relevant terms have been applied in areas such as Information Retrieval or document clustering and classification. However, relevance has a rather fuzzy nature, since the classification of some terms as relevant or not relevant is not consensual. For instance, while words such as "president" and "republic" are generally considered relevant by human evaluators, and words like "the" and "or" are not, terms such as "read" and "finish" gather no consensus about their semantics and informativeness. Concepts, on the other hand, have a less fuzzy nature. Therefore, instead of deciding on the relevance of a term during the extraction phase, as most extractors do, I propose to first extract from texts what I have called generic concepts (all concepts) and postpone the decision about relevance to downstream applications, according to their needs. For instance, a keyword extractor may assume that the most relevant keywords are the most frequent concepts in the documents. Moreover, most statistical extractors are incapable of extracting single-word and multi-word expressions using the same methodology. These factors led to the development of the ConceptExtractor, a statistical and language-independent methodology which is explained in Part I of this thesis. In Part II, I show that the automatic extraction of concepts has great applicability. For instance, for the extraction of keywords from documents, using the Tf-Idf metric only on concepts yields better results than using Tf-Idf without concepts, especially for multi-word expressions. In addition, since concepts can be semantically related to other concepts, this allows us to build implicit document descriptors. These applications led to published work. Finally, I present some work that, although not published yet, is briefly discussed in this document.
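As an illustration of the keyword-extraction application described above, here is a minimal Python sketch that applies Tf-Idf only to the concepts (single- and multi-word) previously extracted from each document. The hard-coded concept lists stand in for the output of a concept extractor; this is an assumption-laden sketch, not the thesis's actual ConceptExtractor.

```python
# Minimal sketch: keyword extraction via Tf-Idf over extracted concepts.
import math
from collections import Counter

# Each document is represented by the concepts (single- and multi-word)
# extracted from it; hypothetical example data.
docs = [
    ["president", "republic", "election campaign", "president"],
    ["republic", "supreme court", "election campaign"],
    ["football", "supreme court"],
]

def tfidf_keywords(doc_concepts, all_docs, top_k=3):
    """Rank a document's concepts by Tf-Idf and return the top_k as keywords."""
    n_docs = len(all_docs)
    tf = Counter(doc_concepts)
    scores = {}
    for concept, freq in tf.items():
        df = sum(1 for d in all_docs if concept in d)   # document frequency
        scores[concept] = (freq / len(doc_concepts)) * math.log(n_docs / df)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(tfidf_keywords(docs[0], docs))   # -> ['president', 'republic', 'election campaign']
```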
Abstract:
Sign language is the form of communication used by Deaf people, which, in most cases, has been learned since childhood. The problem arises when a non-Deaf person tries to communicate with a Deaf person, for example, when non-Deaf parents try to communicate with their Deaf child. In most cases, this situation happens because the parents did not have time to properly learn sign language. This dissertation proposes the teaching of sign language through the use of serious games. Currently, solutions similar to this proposal do exist; however, they are scarce and limited. For this reason, the proposed solution is built around a natural user interface that is intended to create a new concept in this field. The validation of this work consisted of the implementation of a serious game prototype, which can be used as a source for learning (Portuguese) sign language. In this validation, a module responsible for recognizing sign language was implemented first. This first stage increased interaction and led to an algorithm capable of accurately recognizing sign language. In a second stage, the proposal was studied so that its pros and cons could be determined and considered in future work.
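As one plausible way to recognize a static sign from sensor data, the following minimal Python sketch classifies normalized hand-joint coordinates with a nearest-neighbour classifier. The feature layout, training data, and use of scikit-learn are assumptions for illustration; the dissertation's actual recognition algorithm is not reproduced here.

```python
# Minimal sketch: static-sign recognition as nearest-neighbour classification
# over normalized hand-joint coordinates.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training set: each row is a flattened vector of (x, y)
# hand-joint positions from a depth sensor, already normalized to [0, 1].
X_train = np.array([
    [0.10, 0.20, 0.30, 0.40],   # poses for the sign "A"
    [0.12, 0.22, 0.31, 0.41],
    [0.80, 0.70, 0.60, 0.50],   # poses for the sign "B"
    [0.79, 0.69, 0.62, 0.52],
])
y_train = ["A", "A", "B", "B"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# Classify a newly captured pose.
pose = np.array([[0.11, 0.21, 0.30, 0.42]])
print(model.predict(pose))   # -> ['A']
```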
Abstract:
Currently, it is widely perceived among English as a Foreign Language (EFL) teaching professionals that motivation is a central factor for success in language learning. This work aims to examine and raise teachers’ awareness of the role of assessment and feedback in the process of language teaching and learning at the polytechnic school in Benguela, in order to develop and/or enhance their students’ motivation for learning. Hence, the paper defines and discusses the key terms, as well as the techniques and strategies for effective feedback provision, in the context under study. It also collects data through interview and questionnaire methods, and suggests the assessment and feedback types to be implemented at the polytechnic school in Benguela.
Abstract:
This research addresses the problem of creating interactive experiences to encourage people to explore spaces. Besides the obvious spaces to visit, such as museums or art galleries, the spaces that people visit can be, for example, a supermarket or a restaurant. As technology evolves, people become more demanding in the way they use it and expect better forms of interaction with the space that surrounds them. Interaction with the space allows information to be transmitted to visitors in a friendly way, leading them to explore it and gain knowledge. Systems that provide better experiences while exploring spaces demand hardware and software that is not within the reach of every space owner, either because of the cost or because of the inconvenience of the installation, which can damage artefacts or the space environment. We propose a system adaptable to spaces, one that uses a video camera network and a Wi-Fi network present at the space (or that can be installed) to support interactive experiences using the visitor’s mobile device. The system is composed of an infrastructure (called vuSpot), a language grammar used to describe interactions at a space (called XploreDescription), a visual tool used to design interactive experiences (called XploreBuilder), and a tool used to create interactive experiences (called urSpace). By using XploreBuilder, a tool built on top of vuSpot, a user with little or no experience in programming can define a space and design interactive experiences. This tool generates a description of the space and of the interactions at that space (complying with the XploreDescription grammar). These descriptions can be given to urSpace, another tool built on top of vuSpot, which creates the interactive experience application. With this system we explore new forms of interaction and use mobile devices and pico projectors to deliver additional information to users, leading to the creation of interactive experiences. The several components are presented, as well as the results of the respective user tests, which were positive. The design and implementation become cheaper, faster, more flexible and, since they do not depend on knowledge of a programming language, accessible to the general public.
Abstract:
Currently the world is swiftly adapting to visual communication. Online services like YouTube and Vine show that video is no longer the domain of broadcast television only. Video is used for different purposes such as entertainment, information, education, or communication. The rapid growth of today’s video archives, with sparsely available editorial data, creates a big retrieval problem. Humans see a video as a complex interplay of cognitive concepts; as a result, there is a need to build a bridge between numeric values and semantic concepts, establishing a connection that will facilitate video retrieval by humans. The critical aspect of this bridge is video annotation. The process can be done manually or automatically. Manual annotation is very tedious, subjective, and expensive; therefore automatic annotation is being actively studied. In this thesis we focus on the automatic annotation of multimedia content, namely the use of analysis techniques for information retrieval that allow metadata to be automatically extracted from video in a videomail system, including the identification of text, people, actions, spaces, and objects (including animals and plants). It will thus be possible to align multimedia content with the text presented in the email message and to create applications for semantic video database indexing and retrieval.
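As an illustration of one elementary step in automatic extraction of structural metadata from video, the following minimal Python sketch detects likely shot boundaries by frame differencing with OpenCV. The threshold and file name are hypothetical, and a full annotation pipeline (text, people, actions, objects) would require detectors well beyond this sketch.

```python
# Minimal sketch: shot-boundary detection by frame differencing with OpenCV.
import cv2

def shot_boundaries(path, threshold=30.0):
    """Return the frame indices where the mean absolute difference between
    consecutive grayscale frames exceeds the threshold (likely shot cuts)."""
    cap = cv2.VideoCapture(path)
    boundaries, prev, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None and cv2.absdiff(gray, prev).mean() > threshold:
            boundaries.append(index)
        prev, index = gray, index + 1
    cap.release()
    return boundaries

print(shot_boundaries("videomail.avi"))   # hypothetical input file
```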
Abstract:
With the growth of the internet and the semantic web, together with improvements in communication speed and the rapid growth of storage capacity, the volume of data and information rises considerably every day. Because of this, in the last few years there has been growing interest in structures for formal representation with suitable characteristics, such as the ability to organize data and information, as well as to reuse their contents for the generation of new knowledge. Controlled vocabularies, specifically ontologies, stand out as one such representation structure with high potential: they not only allow for data representation, but also for the reuse of such data for knowledge extraction, coupled with subsequent storage through relatively simple formalisms. However, to ensure that ontology knowledge is always up to date, ontologies need maintenance. Ontology Learning is the area which studies the update and maintenance of ontologies. It is worth noting that the relevant literature already presents first results on automatic maintenance of ontologies, but these are still at a very early stage; human-based processes are still the current way to update and maintain an ontology, which makes this a cumbersome task. The generation of new knowledge aimed at ontology growth can be based on Data Mining techniques, an area that studies techniques for data processing, pattern discovery, and knowledge extraction in IT systems. This work proposes a novel semi-automatic method for knowledge extraction from unstructured data sources, using Data Mining techniques, namely pattern discovery, focused on improving the precision of the concepts and semantic relations present in an ontology. In order to verify the applicability of the proposed method, a proof of concept was developed, and its results, applied to the building and construction sector, are presented.
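As an illustration of pattern discovery for semi-automatic ontology enrichment, the following minimal Python sketch mines sentence-level co-occurrences of known concept labels and proposes frequently co-occurring pairs as candidate relations for expert review. The corpus, concept set, and threshold are hypothetical assumptions, not the thesis's actual method.

```python
# Minimal sketch: propose candidate ontology relations from co-occurrence patterns.
from itertools import combinations
from collections import Counter

CONCEPTS = {"cement", "concrete", "aggregate", "mortar"}  # labels already in the ontology

corpus = [
    "concrete is produced by mixing cement with aggregate and water",
    "mortar combines cement with fine aggregate",
    "concrete strength depends on the cement content",
]

pair_counts = Counter()
for sentence in corpus:
    found = sorted(CONCEPTS & set(sentence.split()))      # concepts in this sentence
    pair_counts.update(combinations(found, 2))            # every concept pair

# Pairs co-occurring at least twice become relation candidates for an expert.
candidates = [pair for pair, n in pair_counts.items() if n >= 2]
print(candidates)   # -> [('aggregate', 'cement'), ('cement', 'concrete')]
```

The semi-automatic character lies in the last step: the mined candidates are only suggestions, and a domain expert decides which ones actually enter the ontology.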