954 results for Computational tools
Abstract:
Blogging has become one of the key ingredients of so-called social networks, and this phenomenon has invaded the world of education. Connections between people, comments on each other's posts, and assessment of innovation are usually interesting characteristics of blogs related to students and scholars. Blogs have become a new form of authority, bringing about (divergent) discussions which lead to the creation of knowledge. The use of blogs as an innovative educational tool is not at all new. However, their use in universities is not yet widespread. Blogging for personal affairs is rather commonplace, but blogging for professional affairs (teaching, research and service) is scarce, despite the availability of ready-to-use, free tools. Unfortunately, the Information Society has not yet sufficiently reached some universities: not only are (student) blogs scarcely used as an educational tool, but it is also quite rare to find a blog written by university professors. The Institute of Computational Chemistry of the University of Girona and the Department of Chemistry of the Universitat Autònoma de Barcelona have joined forces to create “InnoCiència”, a new Group on Digital Science Communication. This group, formed by ca. ten researchers, has promoted the use of blogs, Twitter, wikis and other Web 2.0 tools in activities in Catalonia concerning the dissemination of science, such as Science Week, Open Day or Researchers’ Night. Likewise, its members promote the use of social networking tools in chemistry- and communication-related courses. This communication explains the outcome of social-network experiences with teaching undergraduate students and organising research communication events. We provide live, hands-on examples and interactive ground to show how blogs and Twitter can be used to enhance the effectiveness of teaching and research. The impact of blogging and other social networking tools on the outcome of the learning process depends strongly on the target audience and the environmental conditions. A few examples are provided, and some proposals for using these techniques efficiently to help students are outlined.
Abstract:
The system described herein represents the first example of a recommender system in digital ecosystems where agents negotiate services on behalf of small companies. The small companies compete not only on price or quality, but also through a wider service-by-service composition, subcontracting with other companies. The final result of these offerings depends on negotiations at the scale of millions of small companies. This scale requires new platforms for supporting digital business ecosystems, as well as related services like open-id, trust management, monitors and recommenders. This is done in the Open Negotiation Environment (ONE), an open-source platform that allows agents, on behalf of small companies, to negotiate and use the ecosystem services, and enables the development of new agent technologies. The methods and tools of cyber engineering are necessary to build Open Negotiation Environments that are stable, a basic condition for predictable and reliable business environments. Aiming to build stable digital business ecosystems by means of improved collective intelligence, we introduce a model of negotiation style dynamics from the point of view of computational ecology. This model inspires an ecosystem monitor as well as a novel negotiation style recommender. The ecosystem monitor provides hints to the negotiation style recommender to achieve greater stability of an open negotiation environment in a digital business ecosystem. The greater stability provides the small companies with higher predictability, and therefore better business results. The negotiation style recommender is implemented with a simulated annealing algorithm at a constant temperature, and its impact is shown by applying it to a real case of an open negotiation environment populated by Italian companies.
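The abstract only states that the recommender uses simulated annealing at a constant temperature. The sketch below illustrates what such a constant-temperature search could look like; the negotiation styles, the toy stability score and all parameters are assumptions for illustration, not the paper's actual model.

```python
import math
import random

# Hypothetical negotiation styles and a toy stability score; the paper does not
# publish its utility function, so this is an assumed stand-in.
STYLES = ["conceder", "neutral", "boulware"]

def stability(mix):
    """Toy ecosystem-stability score for a style mix (fractions summing to 1).
    Assumption: stability is highest when styles are balanced."""
    return -sum((mix[s] - 1.0 / len(STYLES)) ** 2 for s in STYLES)

def recommend_style_mix(steps=10_000, temperature=0.05, seed=42):
    """Simulated annealing over style mixes with a fixed acceptance temperature,
    i.e. no cooling schedule, as described for the recommender in the abstract."""
    rng = random.Random(seed)
    mix = {s: 1.0 / len(STYLES) for s in STYLES}
    best = dict(mix)
    for _ in range(steps):
        # Propose a small transfer of "share" between two random styles.
        a, b = rng.sample(STYLES, 2)
        delta = min(mix[a], rng.uniform(0.0, 0.05))
        candidate = dict(mix)
        candidate[a] -= delta
        candidate[b] += delta
        gain = stability(candidate) - stability(mix)
        # Metropolis acceptance at constant temperature.
        if gain >= 0 or rng.random() < math.exp(gain / temperature):
            mix = candidate
        if stability(mix) > stability(best):
            best = dict(mix)
    return best

if __name__ == "__main__":
    print(recommend_style_mix())
```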
Abstract:
In the Biodiversity World (BDW) project we have created a flexible and extensible Web Services-based Grid environment for biodiversity researchers to solve problems in biodiversity and analyse biodiversity patterns. In this environment, heterogeneous and globally distributed biodiversity-related resources such as data sets and analytical tools are made available to be accessed and assembled by users into workflows to perform complex scientific experiments. One such experiment is bioclimatic modelling of the geographical distribution of individual species using climate variables in order to predict past and future climate-related changes in species distribution. Data sources and analytical tools required for such analysis of species distribution are widely dispersed, available on heterogeneous platforms, present data in different formats and lack interoperability. The BDW system brings all these disparate units together so that the user can combine tools with little thought as to their availability, data formats and interoperability. The current Web Services-based Grid environment enables execution of the BDW workflow tasks in remote nodes but with a limited scope. The next step in the evolution of the BDW architecture is to enable workflow tasks to utilise computational resources available within and outside the BDW domain. We describe the present BDW architecture and its transition to a new framework which provides a distributed computational environment for mapping and executing workflows in addition to bringing together heterogeneous resources and analytical tools.
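To make the workflow idea concrete, the following is a minimal sketch of chaining Web-Service-backed tasks for a bioclimatic modelling experiment of the kind described above. The endpoint URLs and payload shapes are hypothetical placeholders, not the project's real API.

```python
import urllib.request
import json

def call_service(url, payload):
    """POST a JSON payload to a (hypothetical) analytical web service and return its JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def bioclimatic_workflow(species_name):
    # Task 1: fetch occurrence records for the species (hypothetical endpoint).
    occurrences = call_service("https://example.org/bdw/occurrences", {"species": species_name})
    # Task 2: fetch climate layers for the matching region (hypothetical endpoint).
    climate = call_service("https://example.org/bdw/climate", {"bbox": occurrences.get("bbox")})
    # Task 3: run the distribution model on a remote compute node (hypothetical endpoint).
    return call_service(
        "https://example.org/bdw/model",
        {"occurrences": occurrences, "climate": climate},
    )
```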
Abstract:
Tuberculosis (TB) is one of the most common infectious diseases known to man and is responsible for millions of human deaths in the world. The increasing incidence of TB in developing countries, the proliferation of multidrug-resistant strains, and the absence of resources for treatment have highlighted the need to develop new drugs against TB. The shikimate pathway leads to the biosynthesis of chorismate, a precursor of aromatic amino acids. This pathway is absent from mammals and has been shown to be essential for the survival of Mycobacterium tuberculosis, the causative agent of TB. Accordingly, enzymes of the aromatic amino acid biosynthesis pathway represent promising targets for structure-based drug design. The first reaction in phenylalanine biosynthesis involves the conversion of chorismate to prephenate, catalyzed by chorismate mutase. The second reaction is catalyzed by prephenate dehydratase (PDT) and involves decarboxylation and dehydration of prephenate to form phenylpyruvate, the precursor of phenylalanine. Here, we describe the use of different techniques to infer the structure of M. tuberculosis PDT (MtbPDT) in solution. Small-angle X-ray scattering and ultracentrifugation analysis showed that the protein's oligomeric state is a tetramer and that MtbPDT is a flat-disk protein. Bioinformatics tools were used to infer the structure of MtbPDT. A molecular model for MtbPDT is presented, and molecular dynamics simulations indicate that MtbPDT is stable. Experimental and molecular modeling results were in agreement and provide evidence for a tetrameric state of MtbPDT in solution.
Abstract:
Running hydrodynamic models interactively allows both visual exploration and change of the model state during simulation. One of the main characteristics of an interactive model is that it should provide immediate feedback to the user, for example by responding to changes in model state or view settings. For this reason, such features are usually only available for models with a relatively small number of computational cells, which are used mainly for demonstration and educational purposes. It would be useful if interactive modeling also worked for the models typically used in consultancy projects involving large-scale simulations. This raises a number of technical challenges related to the combination of the model itself and the visualisation tools (scalability, implementation of an appropriate API for control and access to the internal state). While model parallelisation is increasingly addressed by the environmental modeling community, little effort has been spent on developing a high-performance interactive environment. What can we learn from other high-end visualisation domains such as 3D animation, gaming, and virtual globes (Autodesk 3ds Max, Second Life, Google Earth) that also focus on efficient interaction with 3D environments? In these domains, high efficiency is usually achieved by the use of computer graphics algorithms such as surface simplification depending on the current view and distance to objects, and efficient caching of aggregated representations of object meshes. We investigate how these algorithms can be re-used in the context of interactive hydrodynamic modeling without significant changes to the model code, while allowing model operation on both multi-core CPU personal computers and high-performance computer clusters.
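The following is a minimal sketch of view-dependent level-of-detail selection, the kind of graphics technique the abstract proposes reusing for interactive visualisation of large model meshes. The distance thresholds, doubling policy and mesh representation are illustrative assumptions, not taken from the paper.

```python
import math

def pick_level_of_detail(camera_pos, cell_center, lod_meshes):
    """Choose a pre-simplified mesh based on distance from the camera.

    lod_meshes: list ordered from finest (index 0) to coarsest; each entry is
    assumed to be pre-computed, e.g. by an offline surface-simplification pass.
    """
    distance = math.dist(camera_pos, cell_center)
    # Double the distance threshold for each coarser level (assumed policy, 10 m base).
    level = min(int(math.log2(max(distance / 10.0, 1.0))), len(lod_meshes) - 1)
    return lod_meshes[level]

# Example: a region 85 m from the camera is drawn with the coarsest of four levels.
meshes = ["lod0_fine", "lod1", "lod2", "lod3_coarse"]
print(pick_level_of_detail((0.0, 0.0, 0.0), (85.0, 0.0, 0.0), meshes))
```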
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
A methodology for analyzing solar access and its influence on both the air temperature and the thermal comfort of the urban environment was developed here by exploiting the capabilities of GIS tools. Urban canyons in a specific area of a medium-sized Brazilian city were studied. First, a computational algorithm was applied to determine the sky view factors (SVF) and sun paths in the urban canyons. Then, air temperatures were collected at 40 measurement points within the study area. Solar radiation values for these canyons were determined and subsequently stored in a GIS database. The creation of thermal maps for the whole neighbourhood was made possible by a statistical treatment of the data, through interpolation of the measured values. All data could then be spatially cross-examined. In addition, thermal comfort maps for the summer and winter periods were generated. The methodology allowed the identification of thermal tendencies within the neighbourhood, which can be useful in devising guidelines for urban planning.
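As an illustration of the interpolation step used to build the thermal maps, the sketch below spreads point air-temperature measurements onto arbitrary locations. Inverse-distance weighting is an assumed choice here; the abstract does not state which interpolator was used, and the station coordinates and readings are made up.

```python
import math

def idw_temperature(x, y, stations, power=2.0):
    """Inverse-distance-weighted temperature at (x, y).
    stations: list of (xs, ys, temperature) tuples, e.g. the 40 measurement points."""
    num, den = 0.0, 0.0
    for xs, ys, t in stations:
        d = math.hypot(x - xs, y - ys)
        if d < 1e-9:          # exactly at a station: return its reading
            return t
        w = 1.0 / d ** power
        num += w * t
        den += w
    return num / den

# Example with three hypothetical stations (coordinates in metres, temperatures in °C).
obs = [(0.0, 0.0, 29.1), (100.0, 0.0, 30.4), (0.0, 100.0, 28.7)]
print(round(idw_temperature(50.0, 50.0, obs), 2))
```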
Abstract:
This work evaluates the spatial distribution of normalised rates of droplet breakage and droplet coalescence in liquid-liquid dispersions maintained in agitated tanks at operating conditions normally used to perform suspension polymerisation reactions. In particular, simulations are performed with multiphase computational fluid dynamics (CFD) models to represent the flow field in liquid-liquid styrene suspension polymerisation reactors for the first time. CFD tools are used first to compute the spatial distribution of the turbulent energy dissipation rates (ε) inside the reaction vessel; afterwards, normalised rates of droplet breakage and particle coalescence are computed as functions of ε. Surprisingly, the multiphase simulations showed that the rates of energy dissipation can be very high near the free vortex surfaces, which had been completely neglected in previous works. The obtained results indicate the existence of extremely large energy dissipation gradients inside the vessel, so that particle breakage occurs primarily in very small regions that surround the impeller and the free vortex surface, while particle coalescence takes place in the liquid bulk. As a consequence, particle breakage should be regarded as an independent source term or a boundary phenomenon. Based on the obtained results, it can be very difficult to justify the use of isotropic assumptions to formulate particle population balances in similar systems, even when multiple compartment models are used to describe the fluid dynamic behaviour of the agitated vessel. (C) 2011 Canadian Society for Chemical Engineering
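The post-processing step described above maps a CFD field of dissipation rates ε to normalised breakage rates. The sketch below illustrates that mapping with a Coulaloglou–Tavlarides-type breakage kernel, used here only as an assumed functional form; the paper's actual breakage and coalescence models, and all parameter values, are not given in the abstract.

```python
import numpy as np

def breakage_rate(eps, d=1e-4, sigma=0.032, rho_d=905.0, c1=1.0, c2=0.05):
    """Illustrative breakage frequency for droplets of diameter d [m] at dissipation rate eps [W/kg]."""
    return c1 * eps ** (1.0 / 3.0) / d ** (2.0 / 3.0) * np.exp(
        -c2 * sigma / (rho_d * eps ** (2.0 / 3.0) * d ** (5.0 / 3.0))
    )

# eps_field would come from the CFD solution, one value per computational cell;
# the values below are only placeholders spanning bulk-to-impeller magnitudes.
eps_field = np.array([0.05, 0.2, 1.0, 15.0, 120.0])
rates = breakage_rate(eps_field)
normalised = rates / rates.mean()     # normalised spatial distribution of breakage
print(normalised.round(3))
```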
Abstract:
Several recent studies in the literature have identified brain morphological alterations associated with Borderline Personality Disorder (BPD). These findings are reported by studies based on voxel-based morphometry analysis of structural MRI data, comparing mean gray-matter concentration between groups of BPD patients and healthy controls. On the other hand, mean differences between groups are not informative about the discriminative value of neuroimaging data for predicting the group of individual subjects. In this paper, we go beyond mean-difference analyses and explore to what extent individual BPD patients can be differentiated from controls (25 subjects in each group), using a combination of automated morphometric tools for regional cortical thickness/volume estimation and a Support Vector Machine (SVM) classifier. The approach included a feature selection step in order to identify the regions containing the most discriminative information. The accuracy of this classifier was evaluated using the leave-one-subject-out procedure. The brain regions indicated as containing relevant information to discriminate the groups were the orbitofrontal, rostral anterior cingulate, posterior cingulate, and middle temporal cortices, among others. These areas, which are distinctively involved in the emotional and affect regulation of BPD patients, were the most informative regions for achieving sensitivity and specificity values of 80% in the SVM classification. The findings suggest that this new methodology can add clinical and potential diagnostic value to the neuroimaging of psychiatric disorders. (C) 2012 Elsevier Ltd. All rights reserved.
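The classification pipeline described above can be sketched as feature selection followed by an SVM, evaluated with leave-one-subject-out cross-validation. The feature-selection method, kernel and parameters below are illustrative assumptions; in the real study X would hold regional cortical thickness/volume estimates (one row per subject) and y the group labels.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 68))      # placeholder: 50 subjects x 68 cortical regions
y = np.repeat([0, 1], 25)          # 25 controls, 25 patients

pipeline = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),  # keep the 10 most discriminative regions (assumed k)
    SVC(kernel="linear"),
)

# Putting feature selection inside the pipeline keeps it within each training fold,
# avoiding the optimistic bias of selecting features on the full dataset.
accuracy = cross_val_score(pipeline, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-subject-out accuracy: {accuracy:.2f}")
```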
Abstract:
Background: The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make this feasible, we need computational methods to handle the large amount of information that arises from bench to bedside and to deal with its heterogeneity. A computational challenge that must be faced is to promote the integration of clinical, socio-demographic and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular, ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform to store biological data; however, it lacks support for representing clinical and socio-demographic information. Results: We have implemented an extension of Chado, the Clinical Module, to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: the data level, to store the data; the semantic level, to integrate and standardize the data through the use of ontologies; the application level, to manage clinical databases, ontologies and the data integration process; and the web interface level, to allow interaction between the user and the system. The Clinical Module was built based on the Entity-Attribute-Value (EAV) model. We also proposed a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system. The Clinical Module was implemented and the framework was loaded using data from a factual clinical research database. Clinical and demographic data, as well as biomaterial data, were obtained from patients with tumors of the head and neck. We implemented the IPTrans tool, a complete environment for data migration, which comprises: the construction of a model to describe the legacy clinical data, based on an ontology; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as other applications. Conclusions: Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information coming from different “omics” technologies with patients’ clinical and socio-demographic data. This framework should present some features: flexibility, compression and robustness. The experiments performed on a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
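The Entity-Attribute-Value pattern on which the Clinical Module is based stores each clinical fact as a (patient, attribute, value) row, with attributes drawn from a reference ontology. The sketch below shows the pattern in miniature; the table layout, attribute names and data are illustrative assumptions, not the module's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE attribute (id INTEGER PRIMARY KEY, ontology_term TEXT NOT NULL);
    CREATE TABLE clinical_fact (
        patient_id   INTEGER NOT NULL,
        attribute_id INTEGER NOT NULL REFERENCES attribute(id),
        value        TEXT NOT NULL
    );
""")
# Attributes are ontology terms (placeholder labels here, not real term identifiers).
conn.executemany("INSERT INTO attribute VALUES (?, ?)",
                 [(1, "tumor_primary_site"), (2, "biological_sex")])
conn.executemany("INSERT INTO clinical_fact VALUES (?, ?, ?)",
                 [(101, 1, "larynx"), (101, 2, "male")])

# New kinds of clinical data need only new attribute rows, not schema changes,
# which is the flexibility the EAV design provides.
for row in conn.execute("""SELECT f.patient_id, a.ontology_term, f.value
                           FROM clinical_fact f JOIN attribute a ON a.id = f.attribute_id"""):
    print(row)
```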
Abstract:
This work is supported by the Brazilian agencies FAPESP, CAPES and CNPq.
Abstract:
OBJECTIVE: To evaluate tools for the fusion of images generated by tomography and structural and functional magnetic resonance imaging. METHODS: Magnetic resonance and functional magnetic resonance imaging were performed while a volunteer who had previously undergone cranial tomography performed motor and somatosensory tasks in a 3-Tesla scanner. Image data were analyzed with different programs, and the results were compared. RESULTS: We constructed a flow chart of computational processes that allowed measurement of the spatial congruence between the methods. There was no single computational tool that contained the entire set of functions necessary to achieve the goal. CONCLUSION: The fusion of the images from the three methods proved to be feasible with the use of four free-access software programs (OsiriX, Register, MRIcro and FSL). Our results may serve as a basis for building software that will be useful as a virtual tool prior to neurosurgery.
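One way to quantify the spatial congruence between co-registered image volumes, as compared in the study above, is an overlap measure over binary masks. The Dice coefficient below is an assumed metric for illustration; the abstract does not name the measure actually used, and the volumes are random placeholders.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Overlap between two boolean volumes of identical shape (1.0 = perfect congruence)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / total if total else 1.0

# Illustrative 3D masks, e.g. an activation cluster resampled into CT space.
rng = np.random.default_rng(1)
volume_a = rng.random((32, 32, 16)) > 0.7
volume_b = rng.random((32, 32, 16)) > 0.7
print(round(dice_coefficient(volume_a, volume_b), 3))
```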
Abstract:
Background: The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data is available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only the access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results: We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. We have then defined a number of activities and associated guidelines prescribing how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data. Conclusions: The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and non-trivial processing requirements, thus assuring accurate data exchange and correct interpretation of the exchanged data.
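A software connector of the kind described above can be pictured as a small component that applies transformation rules to map a tool-specific record onto terms of a shared reference ontology. The sketch below is a minimal illustration under assumed field names, ontology labels and rule format; it is not the methodology's actual connector interface.

```python
from typing import Callable

# Each rule maps a source field exported by an analysis tool to a
# reference-ontology field, optionally converting the value on the way.
RULES = [
    {"source": "gene_symbol", "target": "reference:gene_identifier", "convert": str.upper},
    {"source": "log2_ratio",  "target": "reference:expression_level", "convert": float},
]

def connect(record: dict, rules=RULES) -> dict:
    """Translate one tool-specific record into the shared vocabulary."""
    integrated = {}
    for rule in rules:
        if rule["source"] in record:
            convert: Callable = rule["convert"]
            integrated[rule["target"]] = convert(record[rule["source"]])
    return integrated

# Example: a row from a hypothetical microarray analysis tool.
print(connect({"gene_symbol": "tp53", "log2_ratio": "1.8"}))
```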