798 results for Resources texts
Abstract:
This article presents the first results of a research project on the interplay between imitation and invention in young writers' composition. The study observes how pupils draw on the textual resources supplied by the production conditions. The experiment was designed to observe the appropriation of a genre. Didactic set-ups were proposed to several classes at the end of French primary school, based on literary texts from the robinsonnade genre made available either during the act of writing itself or during a second writing session. The study shows how pupils resort to two contrasting procedures: reusing the lexicon and reformulating. The data collected highlight the expected uptake of words characteristic of the genre and reveal the writers' ingenuity in restructuring linguistic material. Some strategies reflect the difficulties faced by pupils who had to interpret literary vocabulary and then transfer it into their own narrative. Different modes of reformulation coexist, and a first categorization of them can be offered according to how successfully the genre under study has been appropriated.
Abstract:
This research investigates the phenomenon of translationese in two monolingual comparable corpora of original and translated Catalan texts. Translationese has been defined as the dialect, sub-language or code of translated language. This study aims at providing empirical evidence of translation universals regardless of the source language. Traditionally, research on translation strategies has been mainly intuition-based. Computational Linguistics and Natural Language Processing techniques provide reliable information on lexical frequencies and on morphological and syntactic distribution in corpora, and they have therefore been applied to observe which translation strategies occur in these corpora. The results seem to support the simplification, interference and explicitation hypotheses, whereas no sign of normalization has been detected with the methodology used. The data collected and the resources created for identifying lexical, morphological and syntactic patterns of translations can be useful for Translation Studies teachers, scholars and students: teachers will have more tools to help students avoid reproducing translationese patterns, and the resources developed will help detect non-genuine or inadequate structures in the target language, which may improve the stylistic quality of translations. Translation professionals can also take advantage of these resources to improve their translation quality.
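The simplification hypothesis mentioned above is often operationalized with corpus statistics such as the type-token ratio: translated text is expected to show less lexical variety than comparable original text. A minimal sketch of that comparison, assuming sentence token lists as input (the toy samples are invented placeholders, not the Catalan corpora used in the study):

```python
def type_token_ratio(tokens):
    """Lexical variety: distinct word forms divided by total tokens."""
    return len(set(tokens)) / len(tokens)

# Hypothetical toy samples standing in for original vs. translated text.
original_tokens = "el gat dorm mentre la lluna brilla sobre la ciutat antiga".split()
translated_tokens = "el gat dorm i el gat somia i el gat dorm".split()

ttr_original = type_token_ratio(original_tokens)
ttr_translated = type_token_ratio(translated_tokens)

# Under the simplification hypothesis, the translated sample is expected
# to be lexically poorer, i.e. to have a lower type-token ratio.
print(ttr_original > ttr_translated)  # True for these toy samples
```

Real studies use many such indicators (lexical density, mean sentence length, frequency profiles) over large corpora; a single ratio on toy data only shows the shape of the measurement.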
Abstract:
Tactile cartography is an area of Cartography that aims at developing methodologies and didactic materials for working on cartographic concepts with blind and low-vision people. The main aim of this article is to present the experience of the Tactile Cartography Research Group at São Paulo State University (UNESP), including didactic materials and courses for teachers using the MAPAVOX system. MAPAVOX is software developed by our research group in partnership with the Federal University of Rio de Janeiro (UFRJ) that integrates maps and models with a voice synthesizer, sound emission, and the display of texts, images and video on computers. Our research methodology follows authors who place the student at the centre of the didactic activity, such as Ochaita and Espinosa [1], who developed studies on blind children's literacy. According to Almeida, a child's drawing is thus a system of representation: not a copy of objects, but an interpretation of what is real, carried out by the child in graphic language [2]. In the proposed activities, blind and low-vision students are prepared to interpret reality and represent it by applying the concepts of graphic language they have learned. Cartographic initiation has to start from personal, everyday references, for example a tactile model or map of the classroom, in order to introduce concepts of generalization and scale related to the students' own living space. Over the years, many case studies were carried out with blind and low-vision students from the Special School for the Hearing Impaired and Visually Impaired in Araras and Rio Claro, São Paulo, Brazil. Most of these experiences, together with others from Brazil and Chile, are presented in [3]. The tactile materials and MAPAVOX features are analysed by students and teachers, who contribute suggestions for reformulating and adapting them to their sensibilities and needs.
Since 2005 we have offered courses in Tactile Cartography that prepare elementary-school teachers to handle the didactic materials and to attend to students with special educational needs in regular classrooms. Six classroom-based and blended courses were offered to 184 teachers from public schools in this region of São Paulo state. In conclusion, we observe that methodological procedures centred on blind and low-vision students succeed in supporting their spatial orientation when the didactic materials are based on places or objects with which the students have significant experience. While running the teacher-training courses, we could see that interdisciplinary groups find creative cartographic alternatives more easily. We also observed that the best methodological results were those that gave concreteness to abstract concepts through daily experiences.
Abstract:
Nanotechnology is a recently developed research area that deals with the manipulation and control of matter with dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than at the conventional biological levels (from populations down to the cell), so any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the framework needed to deal with these information challenges, nor adapted its methods and tools to the new research field. In this context, the novel area of nanoinformatics aims to build bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology.
These observations frame this doctoral dissertation, which focuses on analyzing the nanomedical domain in depth and on developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, with the final goal of leveraging the available nanomedical data. Through real-life case studies, the author analyzes research tasks in nanomedicine that require, or could benefit from, nanoinformatics methods and tools, illustrating the present drawbacks and limitations of BMI approaches when dealing with data belonging to the nanomedical domain. Three scenarios, comparing the biomedical and nanomedical contexts, are discussed as examples of activities researchers perform while conducting their research: i) searching the Web for data sources and computational resources supporting their research; ii) searching the scientific literature for experimental results and publications related to their research; and iii) searching clinical trial registries for clinical results related to their research. These activities depend on informatics tools and services such as web browsers, citation databases indexing the biomedical literature, and web-based clinical trial registries, respectively.
For each scenario, this document provides a detailed analysis of the information barriers that can hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the challenges in nanomedical research, where the major barriers have been found. The author illustrates how BMI methodologies applied to these scenarios prove successful in the biomedical domain, whereas they present severe limitations when applied to the nanomedical context. To address these limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. The approach consists of an in-depth analysis of the scientific literature and of the available clinical trial registries to extract relevant information about experiments and results in nanomedicine (textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.), followed by the development of mechanisms to structure and analyze this information automatically. This analysis resulted in the generation of a gold standard (a manually annotated training and test set), which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the methods needed to organize, curate, filter and validate part of the existing nanomedical data on a scale suitable for decision-making. Similar analyses for other nanomedical research tasks would help detect which nanoinformatics resources are required to meet the current goals in the field, and would help generate densely populated, machine-interpretable reference datasets from the literature and other unstructured sources, on which novel algorithms can be tested to infer new valuable information for nanomedical research.
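The classification task described in this abstract (separating nano-focused trials from conventional ones) can be pictured with a deliberately crude heuristic: flag a trial summary as nano-related when it uses nano-prefixed vocabulary. The dissertation's gold-standard classifier is of course far richer; this sketch, with invented example summaries, only illustrates the shape of the task:

```python
import re

def is_nano_trial(summary: str) -> bool:
    """Crude illustration: a summary counts as nano-related if it contains
    nano-prefixed terms (nanoparticle, nanodrug, nanodevice, ...)."""
    return re.search(r"\bnano\w+", summary, flags=re.IGNORECASE) is not None

# Invented trial summaries, not taken from any real registry.
trials = [
    "Phase II study of albumin-bound nanoparticle paclitaxel in breast cancer",
    "Randomized trial of metformin versus placebo in type 2 diabetes",
]
labels = [is_nano_trial(t) for t in trials]
print(labels)  # [True, False]
```

A real system would need an annotated corpus and a trained model, since nano-related trials are not always lexically marked this plainly.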
Abstract:
The use of ontologies as representations of knowledge is widespread, but their construction, until recently, has been entirely manual. We argue in this paper for the use of text corpora and automated natural language processing methods for the construction of ontologies. We delineate the challenges and present criteria for the selection of appropriate methods. We distinguish three major steps in ontology building: associating terms, constructing hierarchies and labelling relations. A number of methods are presented for these purposes, but we conclude that data sparsity is still a major challenge. We argue for the use of resources external to the domain-specific corpus.
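Of the three steps distinguished above, the first (associating terms) is commonly driven by co-occurrence statistics over the corpus, for example pointwise mutual information. A minimal sketch under that assumption (the tiny "corpus" of term sets is invented, not from the paper):

```python
import math

def pmi(term_a, term_b, sentences):
    """Pointwise mutual information over sentence-level co-occurrence:
    log2( p(a, b) / (p(a) * p(b)) )."""
    n = len(sentences)
    count_a = sum(term_a in s for s in sentences)
    count_b = sum(term_b in s for s in sentences)
    count_ab = sum(term_a in s and term_b in s for s in sentences)
    if 0 in (count_a, count_b, count_ab):
        return float("-inf")  # never (co-)occur: no association evidence
    return math.log2((count_ab / n) / ((count_a / n) * (count_b / n)))

# Invented toy corpus: each "sentence" is a set of terms.
corpus = [
    {"ontology", "concept", "hierarchy"},
    {"ontology", "concept", "relation"},
    {"football", "match", "goal"},
    {"football", "goal", "stadium"},
]
# Related terms should score higher than unrelated ones.
print(pmi("ontology", "concept", corpus) > pmi("ontology", "goal", corpus))
```

This is exactly where the data-sparsity problem the paper raises bites: with few observations, counts of zero or one dominate and the estimates become unreliable, which motivates the external resources the authors argue for.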
Abstract:
Procedural knowledge is the knowledge required to perform certain tasks, and it forms an important part of expertise. A major source of procedural knowledge is natural language instructions. While such readable instructions have been useful learning resources for humans, they are not interpretable by machines. Automatically acquiring procedural knowledge in machine-interpretable formats from instructions has become an increasingly popular research topic because of its potential applications in process automation, yet it remains insufficiently addressed. This paper presents an approach and an implemented system that assist users in automatically acquiring procedural knowledge in structured form from instructions. We introduce a generic semantic representation of procedures for analysing instructions, on the basis of which natural language processing techniques are applied to automatically extract structured procedures from instructions. The method is evaluated in three domains to demonstrate the generality of the proposed semantic representation as well as the effectiveness of the implemented automatic system.
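The flavour of such structured extraction can be sketched with a naive rule: in an imperative instruction sentence, the leading verb is the action and the rest of the sentence is its argument. The paper's semantic representation is far more general; this toy parser (all example instructions invented) only shows the input/output shape:

```python
def parse_step(instruction: str) -> dict:
    """Naive imperative parse: first token = action, remainder = argument."""
    tokens = instruction.strip().rstrip(".").split()
    return {"action": tokens[0].lower(), "argument": " ".join(tokens[1:])}

# A hypothetical instruction text, one imperative sentence per step.
recipe = [
    "Preheat the oven to 180 degrees.",
    "Mix flour and sugar in a bowl.",
    "Bake for 25 minutes.",
]
procedure = [parse_step(step) for step in recipe]
print(procedure[0])  # {'action': 'preheat', 'argument': 'the oven to 180 degrees'}
```

A real system must also handle non-imperative phrasing, conditions, ordering constraints and arguments spanning several sentences, which is precisely what a generic semantic representation is for.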
Abstract:
Religious authority figures often use religious texts as the primary basis for censuring homosexuality. In recent years, however, non-heterosexual Christians and Muslims have begun to contest this discursively produced boundary of sexual morality. Drawing upon two research projects on non-heterosexual Christians and Muslims, this article explores the three approaches embedded in this strategy. While acknowledging that homosexuality is indeed portrayed negatively in some parts of religious texts, the participants critique traditional hermeneutics by highlighting its inaccuracy and socio-cultural specificity and by arguing for a contextualized and culturally relevant interpretation. They also critique the credibility of institutional interpretive authority by highlighting its inadequacy and ideology and by relocating authentic interpretive authority to personal experience. Finally, they recast religious texts to construct resources for their spiritual nourishment. This strategy broadly reflects the contemporary Western religious landscape, which prioritizes the authority of the self over that of religious institutions.
Abstract:
This article presents the results of a systematic critical review of interdisciplinary literature concerned with digital text (or e-text) uses in education and proposes recommendations for how e-texts can be implemented for impactful learning. A variety of e-texts can be found in the repertoire of educational resources accessible to students, and in the constantly changing terrain of educational technologies, they are rapidly evolving, presenting new opportunities and affordances for student learning. We highlight some of the ways in which academic studies have examined e-texts as part of teaching and learning practices, placing a particular emphasis on aspects of learning such as recall, comprehension, retention of information and feedback. We also review diverse practices associated with uses of e-text tools such as note-taking, annotation, bookmarking, hypertexts and highlighting. We argue that evidence-based studies into e-texts are overwhelmingly structured around reinforcing the existing dichotomy pitting print-based (‘traditional’) texts against e-texts. In this article, we query this approach and instead propose to focus on factors such as students’ level of awareness of their options in accessing learning materials and whether they are instructed and trained in how to take full advantage of the capabilities of e-texts, both of which have been found to affect learning performance.
Abstract:
In this work, the in vitro antiproliferative activity of a series of synthetic fatty acid amides was investigated in seven cancer cell lines. The study revealed that most of the compounds showed antiproliferative activity against the tested tumor cell lines, mainly against human glioma cells (U251) and human ovarian cancer cells with a multiple-drug-resistant phenotype (NCI-ADR/RES). In addition, the fatty methyl benzylamide derived from ricinoleic acid (a fatty acid obtained from castor oil, a renewable resource) showed high selectivity, with potent growth inhibition and cell death in the glioma cell line, the most aggressive CNS cancer.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
The water-wind crisscross region of the Loess Plateau in China comprises 17.8 million hectares of highly erodible soil under limited annual rainfall. This requires a sustainable water balance for the restoration of dryland ecosystems to reduce and manage soil erosion. In this region, alfalfa has been one of the main legumes grown to minimize soil erosion. However, alfalfa yields were significantly lower in years of reduced rainfall, suggesting that the high water use and deep rooting of alfalfa make it an unsustainable crop because of the long-term decline in soil water storage and productivity. Our objectives in this study were to evaluate the soil water balance of Loess Plateau soils during vegetative restoration and to evaluate practices that prevent soil desiccation and promote ecosystem restoration and sustainability. Field observations of soil moisture recovery and soil erosion were carried out for five years after alfalfa was replaced with different crops and with bare soil. Soil water content changes in cropland, rangeland and bare soil were tracked over several years using a water balance approach. The results indicate that growing forages significantly reduced runoff and sediment transport. A forage-food-crop rotation is a better choice than other cropping systems for achieving sustainable productivity and preventing soil erosion and desiccation; however, economic considerations have prevented its widespread adoption by local farmers. Alternatively, this study recommends considering grassland crops or forest ecosystems to provide a sustainable water balance in the Loess Plateau of China.
Abstract:
This article documents the addition of 229 microsatellite marker loci to the Molecular Ecology Resources Database. Loci were developed for the following species: Acacia auriculiformis x Acacia mangium hybrid, Alabama argillacea, Anoplopoma fimbria, Aplochiton zebra, Brevicoryne brassicae, Bruguiera gymnorhiza, Bucorvus leadbeateri, Delphacodes detecta, Tumidagena minuta, Dictyostelium giganteum, Echinogammarus berilloni, Epimedium sagittatum, Fraxinus excelsior, Labeo chrysophekadion, Oncorhynchus clarki lewisi, Paratrechina longicornis, Phaeocystis antarctica, Pinus roxburghii and Potamilus capax. These loci were cross-tested on the following species: Acacia peregrinalis, Acacia crassicarpa, Bruguiera cylindrica, Delphacodes detecta, Tumidagena minuta, Dictyostelium macrocephalum, Dictyostelium discoideum, Dictyostelium purpureum, Dictyostelium mucoroides, Dictyostelium rosarium, Polysphondylium pallidum, Epimedium brevicornum, Epimedium koreanum, Epimedium pubescens, Epimedium wushanese and Fraxinus angustifolia.
Abstract:
The Piracicaba, Capivari, and Jundiaí River Basins (RB-PCJ) are mainly located in the State of São Paulo, Brazil. Using a dynamic systems simulation model (WRM-PCJ) to assess water resources sustainability, five 50-year simulations were run. WRM-PCJ was developed as a tool to aid decision and policy makers on the RB-PCJ Watershed Committee. The model has 254 variables, and it was calibrated and validated using information available from the 1980s. The Falkenmark Water Stress Index went from 1,403 m³ person⁻¹ year⁻¹ in 2004 to 734 m³ person⁻¹ year⁻¹ in 2054, and the Xu Sustainability Index from 0.44 to 0.20. In 2004 the Keller River Basin Development Phase was Conservation, and by 2054 it was Augmentation. The three criteria used to evaluate water resources showed that the watershed is at a crucial water-resources management turning point. The WRM-PCJ performed well and proved to be an excellent tool for decision and policy makers at the RB-PCJ.
Abstract:
Using a dynamic systems model specifically developed for the Piracicaba, Capivari and Jundiaí River Water Basins (BH-PCJ) as a tool to help policy and decision makers analyze water resources management alternatives, five simulations over a 50-year timeframe were performed. The model estimates water supply and demand, as well as the wastewater generated by the consumers at BH-PCJ. One run was performed keeping the mean precipitation value constant and maintaining the current water supply and demand rates, the business-as-usual scenario. Under these assumptions, water demand is expected to increase by about 76%, about 39% of the available water volume will come from wastewater reuse, and the waste load will increase by about 91%. The Falkenmark Index will change from 1,403 m³ person⁻¹ year⁻¹ in 2004 to 734 m³ person⁻¹ year⁻¹ by 2054, and the Sustainability Index from 0.44 to 0.20. Another four simulations were performed by varying the annual precipitation to 90% and 110% of the mean, considering an ecological flow equal to 30% of the mean daily flow, and keeping the same rates for all other factors except ecological flow and household water consumption. All of them showed a tendency toward a water crisis in the near future at BH-PCJ.
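The Falkenmark figures quoted in the last two abstracts are per-capita annual water availability, and the widely used Falkenmark bands place a basin under stress below 1,700 m³ person⁻¹ year⁻¹ and under scarcity below 1,000. A small sketch of that classification (the thresholds are the standard Falkenmark ones, not parameters of the WRM-PCJ model):

```python
def falkenmark_index(available_m3_per_year: float, population: int) -> float:
    """Per-capita annual renewable water availability (m3 per person per year)."""
    return available_m3_per_year / population

def stress_category(index: float) -> str:
    """Standard Falkenmark bands: <500 absolute scarcity, <1000 scarcity,
    <1700 stress, otherwise no stress."""
    if index < 500:
        return "absolute scarcity"
    if index < 1000:
        return "scarcity"
    if index < 1700:
        return "stress"
    return "no stress"

# The basin's reported trajectory: 1,403 m3/person/year in 2004 -> 734 by 2054.
print(stress_category(1403), "->", stress_category(734))  # stress -> scarcity
```

Reading the simulated 2004 and 2054 values through these bands shows why the abstracts describe the basins as heading toward a water crisis: the projection crosses from the stress band into the scarcity band.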