895 results for Simplified text.
Abstract:
ACR is supported by a research grant from CNPq.
Abstract:
Background: The integration of sequencing and gene-interaction data, and the subsequent generation of the pathways and networks contained in databases such as KEGG Pathway, is essential for understanding complex biological processes. We noticed the absence of a chart or pathway describing the well-studied preimplantation development stages; furthermore, not all genes involved in the process have entries in KEGG Orthology, information that is important for applying this knowledge to other organisms. Results: In this work we sought to develop the regulatory pathway for the preimplantation development stage, using text-mining tools such as Medline Ranker and PESCADOR to reveal biointeractions among the genes involved in this process. The genes present in the resulting pathway were also used as seeds for software developed by our group, called SeedServer, to create clusters of homologous genes. These homologues allowed the determination of the last common ancestor for each gene and revealed that the preimplantation development pathway consists of a conserved ancient core of genes with the addition of modern elements. Conclusions: The generation of regulatory pathways through text-mining tools allows the integration of data generated by several studies for a more complete visualization of complex biological processes. Using the genes in this pathway as “seeds” for the generation of clusters of homologues, the pathway can be visualized for other organisms. The clustering of homologous genes, together with the determination of their ancestry, leads to a better understanding of the evolution of this process.
Abstract:
The new Brazilian ABNT NBR 15575 Standard (the “Standard”) recommends two methods for analyzing housing thermal performance: a simplified method and a computational simulation method. The aim of this paper is to evaluate both methods and the coherence between them. To this end, the thermal performance of a low-cost single-family house was evaluated by applying the procedures prescribed by the Standard; the EnergyPlus software was selected for this study. Comparative analyses of the house with varying envelope U-values and solar absorptance of the external walls were performed in order to evaluate the influence of these parameters on the results. The results revealed limitations in the Standard's current computational simulation method, arising from several aspects: the weather files, the lack of consideration of passive strategies, and inconsistency with the simplified method. This research therefore indicates that some aspects of the Standard should be improved so that it can better represent the real thermal performance of social housing in Brazil.
Abstract:
Two types of mesoscale wind-speed jet, and their effects on boundary-layer structure, were studied. The first is a coastal jet off the northern California coast, and the second is a katabatic jet over Vatnajökull, Iceland. Coastal regions are highly populated, and studies of coastal meteorology are of general interest for environmental protection, the fishing industry, and air and sea transportation. Far fewer people live in direct contact with glaciers, but the properties of katabatic flows are important for understanding glacier response to climatic change. Hence, the two jets can potentially influence a vast number of people. Flow response to terrain forcing, transient behavior in time and space, and adherence to simplified theoretical models were examined. The turbulence structure of these stably stratified boundary layers was also investigated. Numerical modeling is the main tool in this thesis; observations are used primarily to ensure realistic model behavior. Simple shallow-water theory provides a useful framework for analyzing high-velocity flows along mountainous coastlines, but for an unexpected reason: waves are trapped in the inversion by the curvature of the wind-speed profile, rather than by an infinite stability in the inversion separating two neutral layers, as the theory assumes. In the absence of blocking terrain, observations of steady-state supercritical flows are unlikely, owing to the diurnal variation of flow criticality. Many simplified models neglect non-local processes; in the flows studied here, we show that this is not always a valid approximation. Discrepancies between the simulated katabatic flow and that predicted by an analytical model are hypothesized to be due to non-local effects, such as surface inhomogeneity and slope geometry, that the theory neglects.
On a different scale, variations between studies in the shape of local similarity scaling functions are suggested to arise from differences in the non-local contributions to the velocity-variance budgets.
Abstract:
Visual signals, used for communication both within and between species, vary immensely in the forms that they take. How has all this splendour evolved in nature? Since it is the receiver's preferences that exert selective pressure on signals, elucidating the mechanism behind the signal receiver's response is vital for a closer understanding of the evolutionary process. In my thesis I have therefore investigated how receivers, represented by chickens, Gallus gallus domesticus, respond to different stimuli displayed on a peck-sensitive computer screen. According to the receiver bias hypothesis, animals and humans often express biases when responding to certain stimuli. These biases develop as by-products of how the recognition mechanism categorises and discriminates between stimuli. Since biases are generated by general stimulus-processing mechanisms, they occur irrespective of species and type of signal, and it is often possible to predict their direction and intensity. One result from the experiments in my thesis demonstrates that similar experience in different species may generate similar biases. After being given at least some of the experience of human faces that humans presumably have, chickens subsequently expressed preferences for the same faces as a group of human subjects. Another kind of experience generated a bias for symmetry. This bias developed in the context of training chickens to recognise two mirror images of an asymmetrical stimulus. Untrained chickens, and chickens trained on only one of the mirror images, expressed no symmetry preferences. The bias produced by the training regime was for a specific symmetrical stimulus that strongly resembled the familiar asymmetrical exemplar, rather than a general preference for symmetry.
A further kind of experience, training chickens to respond to some stimuli but not to others, generated a receiver bias for exaggerated stimuli, whereas chickens trained on reversed stimuli developed a bias for less exaggerated stimuli. To investigate the potential of this bias to drive the evolution of signals towards exaggerated forms, a simplified evolutionary process was mimicked. The stimulus variants rejected by the chickens were eliminated, whereas the selected forms were kept and evolved prior to the subsequent display. As a result, signals evolved into exaggerated forms in all tested stimulus dimensions (length, intensity, and area), despite the inclusion of a cost to the sender for using increasingly exaggerated signals. The bias was especially strong and persistent for stimuli varying along the intensity dimension, where it remained despite extensive training. All the results in my thesis can be predicted by the receiver bias hypothesis. This implies that biases, developed through experience with stimuli, may be significant mechanisms driving the evolution of signal form.
Abstract:
The need for a convergence between semi-structured data management and Information Retrieval techniques is manifest to the scientific community. To meet this growing need, the W3C has recently proposed XQuery Full Text, an IR-oriented extension of XQuery. However, query optimization requires the study of important properties such as query equivalence and containment; to this aim, a formal representation of documents and queries is needed. The goal of this thesis is to establish such a formal background. We define a data model for XML documents and propose an algebra able to represent most XQuery Full Text expressions. We show how an XQuery Full Text expression can be translated into an algebraic expression, and how an algebraic expression can be optimized.
Abstract:
[ES] In this paper we propose a narratological analysis of the short story Funes el Memorioso, by Jorge Luis Borges. Based on a study of the configuration of the literary subcodes, at the level of both story and discourse, a semantics of the text is proposed within the framework of the author's narrative interests. For the normal man, living consists of forging a simplified world for himself; the man who does not settle for this “normality”, like the story's protagonist, whose memory of details prevents him from abstracting concepts, commits hybris and is doomed to self-destruction.
Abstract:
Ontology design and population, core aspects of semantic technologies, have recently become fields of great interest, owing to the increasing need for domain-specific knowledge bases that can boost the use of the Semantic Web. For building such knowledge resources, the state-of-the-art tools for ontology design require a great deal of human work. Producing meaningful schemas and populating them with domain-specific data is in fact a very difficult and time-consuming task, even more so if the task consists of modelling knowledge at web scale. The primary aim of this work is to investigate a novel and flexible methodology for automatically learning an ontology from textual data, lightening the human workload required for conceptualizing domain-specific knowledge and populating an extracted schema with real data, thereby speeding up the whole ontology production process. Computational linguistics plays a fundamental role here, from automatically identifying facts in natural language and extracting frames of relations among recognized entities, to producing linked data with which to extend existing knowledge bases or create new ones. In the state of the art, automatic ontology learning systems are mainly based on plain pipelined linguistic classifiers performing tasks such as named entity recognition, entity resolution, and taxonomy and relation extraction [11]. These approaches present some weaknesses, especially in capturing the structures through which the meaning of complex concepts is expressed [24]. Humans, in fact, tend to organize knowledge in well-defined patterns, which include participant entities and meaningful relations linking the entities with each other. In the literature, these structures have been called Semantic Frames by Fillmore [20] or, more recently, Knowledge Patterns [23]. Some NLP studies have recently shown the possibility of performing more accurate deep parsing, with the ability to logically understand the structure of discourse [7].
In this work, some of these technologies have been investigated and employed to produce accurate ontology schemas. The long-term goal is to collect large amounts of semantically structured information from the web of crowds through an automated process, in order to identify and investigate the cognitive patterns used by humans to organize their knowledge.