875 results for visualization tools
Abstract:
Known as the "king of spices", black pepper (Piper nigrum), a perennial crop of the tropics, is economically the most important and most widely used spice crop in the world. To understand its suitable bioclimatic distribution, maximum entropy-based ecological niche modeling was used to model the bioclimatic niche of the species across its Asian range. Based on known occurrences, the bioclimatic areas with the highest probabilities are mainly located along the eastern and western coasts of the Indian Peninsula, the east of Sumatra Island, parts of the Malay Archipelago, and the southeast coastal areas of China. Some undocumented places were also predicted to be suitable areas. According to the jackknife procedure, the minimum temperature of the coldest month, the mean monthly temperature range, and the precipitation of the wettest month were identified as highly influential factors in the distribution of black pepper and may account for the crop's distribution pattern. Such climatic requirements have inhibited this species from dispersing to, and gaining, a larger geographical range.
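To make the modeling step concrete, below is a minimal sketch of the workflow the abstract describes: fit a presence/background model and jackknife the variables to rank their influence. Logistic regression stands in for the MaxEnt algorithm, and the data, variable names, and accuracy-based gain proxy are all illustrative assumptions, not details from the study.

```python
# Toy niche-modeling sketch: logistic regression as a stand-in for MaxEnt,
# plus a jackknife over variables ("alone" vs "left out") to gauge influence.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
var_names = ["min_temp_coldest_month", "mean_monthly_temp_range",
             "precip_wettest_month"]  # hypothetical bioclimatic variables

# Rows are locations; y marks presence (1) versus background (0) points.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

def gain(cols):
    """Crude model-quality proxy: training accuracy on selected variables."""
    model = LogisticRegression().fit(X[:, cols], y)
    return model.score(X[:, cols], y)

full = gain([0, 1, 2])
for i, name in enumerate(var_names):
    alone = gain([i])                                  # variable by itself
    without = gain([j for j in range(3) if j != i])    # variable left out
    print(f"{name}: alone={alone:.3f} without={without:.3f} full={full:.3f}")
```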
Abstract:
The wide variety of molecular architectures used in sensors and biosensors, and the large amount of data generated by some principles of detection, have motivated the use of computational methods, such as information visualization techniques, not only to handle the data but also to optimize sensing performance. In this study, we combine projection techniques with micro-Raman scattering and atomic force microscopy (AFM) to address critical issues related to practical applications of electronic tongues (e-tongues) based on impedance spectroscopy. Experimentally, we used sensing units made with thin films of a perylene derivative (AzoPTCD) coating Pt interdigitated electrodes to detect CuCl₂ (Cu²⁺), methylene blue (MB), and saccharose in aqueous solutions, which were selected owing to their distinct molecular sizes and ionic character in solution. The AzoPTCD films were deposited, with thicknesses from a monolayer to 120 nm, via the Langmuir-Blodgett (LB) and physical vapor deposition (PVD) techniques. Because the main aspects investigated were how the interdigitated electrodes are coated by the thin films (the architecture of the e-tongue) and the film thickness, we employed the same material for all sensing units. The capacitance data were projected onto a 2D plot using the Force Scheme method, from which we could infer that at low analyte concentrations the electrical response of the units was determined by the film thickness. Concentrations of 10 μM or higher could be distinguished with thinner films (tens of nanometers at most), which could withstand the impedance measurements without significant changes in the Raman signal of the AzoPTCD film-forming molecules. The sensitivity to the analytes appears to be related to adsorption on the film surface, as inferred from Raman spectroscopy data using MB as the analyte and from the multidimensional projections. The analysis presented here may serve as a new route to select materials and molecular architectures for novel sensors and biosensors, in addition to suggesting ways to unravel the mechanisms behind the high sensitivity obtained in various sensors.
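Since the Force Scheme projection is central to how the capacitance data were read, here is a minimal sketch of that technique, under the assumption that each sensing unit is represented by a high-dimensional capacitance spectrum; the data are synthetic placeholders, not measurements from the paper.

```python
# Force Scheme sketch: 2D points are iteratively displaced so their pairwise
# distances approximate the distances between the original spectra.
import numpy as np

def force_scheme(data, iterations=50, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    # Pairwise distances in the original high-dimensional space.
    high_d = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
    pos = rng.normal(size=(n, 2))        # random initial 2D placement
    for _ in range(iterations):
        for i in range(n):
            delta = pos - pos[i]          # vectors from point i to the others
            low_d = np.linalg.norm(delta, axis=1)
            low_d[i] = 1.0                # self-distance: avoid divide-by-zero
            # Push points apart/together until 2D distances match high-D ones.
            force = (high_d[i] - low_d) / low_d
            pos += step * force[:, None] * delta   # delta[i] is 0: i stays put
        step *= 0.95                      # cooling schedule
    return pos

spectra = np.random.default_rng(1).normal(size=(30, 64))  # 30 units, 64 freqs
print(force_scheme(spectra).shape)   # -> (30, 2)
```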
Abstract:
The installation of induction distributed generators should be preceded by a careful study to determine whether the point of common coupling is suitable for delivering the generated power while keeping acceptable power quality and system stability. To this end, this paper presents a simple analytical formulation that allows a fast and comprehensive evaluation of the maximum power deliverable by an induction generator without loss of voltage stability. Moreover, this formulation can be used to identify voltage stability issues that limit the generator's output power. The entire formulation is developed using the equivalent circuit of the squirrel-cage induction machine. Simulation results are used to validate the method, which can thus serve as a guide to reduce the simulation effort needed to assess the maximum output power and voltage stability of induction generators.
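As a rough illustration of the kind of evaluation the formulation supports, the sketch below sweeps the slip of a squirrel-cage equivalent circuit in generator mode (negative slip) and locates the maximum power delivered at the terminals; all circuit parameter values are illustrative assumptions, not the paper's.

```python
# Per-phase equivalent circuit of a squirrel-cage induction machine:
# stator (Rs + jXs) in series with the parallel combination of the
# magnetizing branch (jXm) and the rotor branch (Rr/s + jXr).
import numpy as np

V = 220.0                # per-phase terminal voltage (V), assumed
Rs, Xs = 0.10, 0.35      # stator resistance and leakage reactance (ohm)
Rr, Xr = 0.12, 0.40      # rotor parameters referred to the stator (ohm)
Xm = 15.0                # magnetizing reactance (ohm)

def terminal_power(s):
    Zr = Rr / s + 1j * Xr                      # rotor branch at slip s
    Zm = 1j * Xm
    Z = Rs + 1j * Xs + (Zm * Zr) / (Zm + Zr)   # total per-phase impedance
    I = V / Z                                  # current flowing into the machine
    return 3 * (V * np.conj(I)).real           # three-phase terminal power (W)

slips = np.linspace(-0.5, -0.005, 500)         # negative slip: generator mode
P = np.array([terminal_power(s) for s in slips])
best = np.argmin(P)                            # most negative = max delivered
print(f"max delivered power ~ {-P[best]/1e3:.1f} kW at slip {slips[best]:.3f}")
```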
Abstract:
Robust analysis of vector fields has been established as an important tool for deriving insights from the complex systems these fields model. Traditional analysis and visualization techniques rely primarily on computing streamlines through numerical integration. The inherent numerical errors of such approaches are usually ignored, leading to inconsistencies that cause unreliable visualizations and can ultimately prevent in-depth analysis. We propose a new representation for vector fields on surfaces that replaces numerical integration through triangles with maps from the triangle boundaries to themselves. This representation, called edge maps, permits a concise description of flow behaviors and is equivalent to computing all possible streamlines at a user-defined error threshold. Independent of this error, streamlines computed using edge maps are guaranteed to be consistent up to floating-point precision, enabling the stable extraction of features such as the topological skeleton. Furthermore, our representation explicitly stores spatial and temporal errors, which we use to produce more informative visualizations. This work describes the construction of edge maps, the error quantification, and a refinement procedure that adheres to a user-defined error bound. Finally, we introduce new visualizations that use the additional information provided by edge maps to indicate the uncertainty involved in computing streamlines and topological structures.
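To give a feel for the data structure, here is a minimal sketch of an edge map for a single triangle, assuming the boundary is parameterized on [0, 1]: streamline advection reduces to an interval lookup plus linear interpolation, and the width of the destination interval bounds the spatial error. The interval values are made up for illustration.

```python
# One triangle's edge map: a link (src_lo, src_hi, dst_lo, dst_hi) means that
# a streamline entering the boundary at parameter t in [src_lo, src_hi] exits
# at the linearly interpolated parameter in [dst_lo, dst_hi].
edge_map = [
    (0.00, 0.40, 0.55, 0.80),
    (0.40, 0.50, 0.80, 0.95),
]

def advect(edge_map, t):
    """Map a boundary parameter to its exit parameter and error bound."""
    for src_lo, src_hi, dst_lo, dst_hi in edge_map:
        if src_lo <= t <= src_hi:
            frac = (t - src_lo) / (src_hi - src_lo)
            exit_t = dst_lo + frac * (dst_hi - dst_lo)
            error = dst_hi - dst_lo     # interval width bounds spatial error
            return exit_t, error
    return None                         # flow does not exit through this map

print(advect(edge_map, 0.25))   # -> (0.70625, 0.25)
```

Tracing a streamline across the mesh is then a composition of such lookups from triangle to triangle, which is why the result stays consistent at any stored error level.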
Abstract:
We tested the short-term effects of a nonrigid tool, referred to as an "anchor system" (e.g., ropes attached to varying weights resting on the floor), on the postural stabilization of blindfolded adults with and without intellectual disabilities (ID). Participants held a pair of anchors, one in each hand, under three weight conditions (250 g, 500 g, and 1,000 g) while performing a restricted balance task (standing for 30 s on a balance beam placed on top of a force platform). These conditions were called anchor practice trials. Before and after the practice trials, a condition without anchors was tested. Control practice groups, which performed blocks of trials without anchors, also included individuals with and without ID. The anchor system improved participants' balance during the standing task for both groups. For the control groups, performance across successive trials without the anchor system showed no improvement in postural stability. The individuals with intellectual disabilities, like their peers without ID, used the haptic cues of the nonrigid tool (i.e., the anchor system) to stabilize their posture, and the short-term stabilizing effects appeared to result from their previous use of the anchor system.
Abstract:
Dengue fever is a mosquito-borne viral disease estimated to cause about 230 million infections worldwide every year, of which 25,000 are fatal. Global incidence has risen rapidly in recent decades, with some 3.6 billion people, over half of the world's population, now at risk, mainly in urban centres of the tropics and subtropics. Demographic and societal changes, in particular urbanization, globalization, and increased international travel, are major contributors to the rise in incidence and geographic expansion of dengue infections. Major research gaps continue to hamper the control of dengue. The European Commission launched a call under the 7th Framework Programme entitled 'Comprehensive control of Dengue fever under changing climatic conditions'. Fourteen partners from several countries in Europe, Asia, and South America formed a consortium named 'DengueTools' to respond to the call, aiming to achieve better diagnosis, surveillance, prevention, and predictive models, and to improve our understanding of the spread of dengue to previously uninfected regions (including Europe) in the context of globalization and climate change. The consortium comprises 12 work packages addressing a set of research questions in three areas. Research area 1: develop a comprehensive early warning and surveillance system that has predictive capability for epidemic dengue and benefits from novel tools for laboratory diagnosis and vector monitoring. Research area 2: develop novel strategies to prevent dengue in children. Research area 3: understand and predict the risk of global spread of dengue, in particular the risk of introduction and establishment in Europe, within the context of parameters of vectorial capacity, global mobility, and climate change. In this paper, we report on the rationale and specific study objectives of DengueTools. DengueTools is funded under the Health theme of the Seventh Framework Programme of the European Community, Grant Agreement Number 282589 (Dengue Tools).
Abstract:
Objective: The purpose of this study was to analyse the use of digital tools for image enhancement of mandibular radiolucent lesions and the effects of this manipulation on the percentage of correct radiographic diagnoses. Methods: 24 panoramic radiographs exhibiting radiolucent lesions were selected, digitized, and evaluated by non-experts (undergraduate and newly graduated practitioners) and by professional experts in oral diagnosis. The percentages of correct and incorrect diagnoses, according to the use of the brightness/contrast, sharpness, inversion, highlight, and zoom tools, were compared. All dental professionals made their evaluations without (T1) and with (T2) a list of radiographic diagnostic parameters. Results: Digital tools were used at low frequency, mainly in T2. The preferred tool was sharpness (45.2%). In the expert group, the percentage of correct diagnoses did not change when any of the digital tools were used. For the non-expert group, there was an increase in the frequency of correct diagnoses when brightness/contrast was used in T2 (p = 0.008) and when brightness/contrast and sharpness were not used in T1 (p = 0.027). The use or non-use of brightness/contrast, zoom, and sharpness showed moderate agreement in the expert group [kappa coefficient (κ) = 0.514, 0.425, and 0.335, respectively]. For the non-expert group there was slight agreement for all the tools used (κ ≤ 0.237). Conclusions: Consulting the list of radiographic parameters before image manipulation reduced the frequency of tool use in both groups of examiners. Consulting the radiographic parameters together with the use of some digital tools improved diagnostic accuracy only in the group of non-expert examiners.
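For readers unfamiliar with the agreement statistic quoted above, the following sketch computes Cohen's kappa for two binary tool-use ratings; the example data are invented, not taken from the study.

```python
# Cohen's kappa: chance-corrected agreement between two sets of ratings.
from collections import Counter

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n        # raw agreement
    pa, pb = Counter(a), Counter(b)
    expected = sum(pa[k] * pb[k] for k in set(a) | set(b)) / n**2
    return (observed - expected) / (1 - expected)

t1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]   # tool used (1) / not used (0) at T1
t2 = [1, 0, 0, 0, 1, 0, 1, 1, 1, 0]   # same examiner-item pairs at T2
print(f"kappa = {cohens_kappa(t1, t2):.3f}")   # -> kappa = 0.600
```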
Abstract:
In this paper we discuss the detection of glucose and triglycerides using information visualization methods to process impedance spectroscopy data. The sensing units contained either lipase or glucose oxidase immobilized in layer-by-layer (LbL) films deposited onto interdigitated electrodes. The optimization consisted of identifying which part of the electrical response and which combination of sensing units yielded the best distinguishing ability. We show that complete separation can be obtained for a range of concentrations of glucose and triglycerides when the interactive document map (IDMAP) technique is used to project the data onto a two-dimensional plot. Most importantly, the optimization procedure can be extended to other types of biosensors, thus increasing the versatility of analysis provided by tailored molecular architectures exploited with various detection principles.
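A small sketch of the optimization idea, under the assumption that each measurement is an impedance spectrum and that class separability can be scored with the silhouette coefficient (a stand-in for the paper's distinguishing ability); the data and frequency windows are synthetic.

```python
# Score candidate slices of the spectrum by how well they separate analyte
# classes; a higher silhouette means better-separated clusters.
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 20)         # e.g. glucose / triglyceride / blank
spectra = rng.normal(size=(60, 100))      # 60 measurements, 100 frequencies
spectra[labels == 1, 40:60] += 2.0        # classes differ only mid-spectrum
spectra[labels == 2, 40:60] -= 2.0

windows = {"low": slice(0, 33), "mid": slice(33, 66), "high": slice(66, 100)}
for name, win in windows.items():
    score = silhouette_score(spectra[:, win], labels)
    print(f"{name:>4} frequency window: silhouette = {score:.2f}")
```

The "mid" window wins on this toy data, mirroring how one part of the electrical response can carry most of the distinguishing power.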
Abstract:
Visual analysis of social networks is usually based on graph drawing algorithms and tools. However, social networks are a special kind of graph in the sense that the interpretation of displayed relationships is heavily dependent on context. Context, in turn, is given by attributes associated with graph elements, such as individual nodes, edges, and groups of edges, as well as by the nature of the connections between individuals. In most systems, attributes of individuals and communities are not taken into consideration during graph layout, except to derive weights for force-based placement strategies. This paper proposes a set of novel tools for displaying and exploring social networks based on attribute and connectivity mappings. These properties are employed to lay out nodes on the plane via multidimensional projection techniques. For the attribute mapping, we show that node proximity in the layout corresponds to similarity in attributes, making it easy to locate similar groups of nodes. The connectivity-based projection yields an initial placement that forgoes force-based or graph-analysis algorithms, reaching a meaningful layout in a single pass. When a force algorithm is then applied to this initial mapping, the final layout has better properties than those of conventional force-based approaches. Numerical evaluations show a number of advantages of pre-mapping points via projections. User evaluation demonstrates that these tools promote easy manipulation as well as fast identification of concepts and associations that cannot be easily expressed by conventional graph visualization alone. To allow better space usage for complex networks, a graph mapping on the surface of a sphere is also implemented.
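The sketch below illustrates the two-stage idea, with PCA standing in for the paper's multidimensional projection: project node attributes to an initial 2D placement, then hand that placement to a force algorithm as its starting point. The graph and attributes are toy data.

```python
# Attribute-based initial layout refined by a force-directed algorithm.
import networkx as nx
import numpy as np
from sklearn.decomposition import PCA

G = nx.karate_club_graph()
rng = np.random.default_rng(0)
attrs = rng.normal(size=(G.number_of_nodes(), 8))   # 8 attributes per node

xy = PCA(n_components=2).fit_transform(attrs)       # projection-based placement
init = {node: xy[i] for i, node in enumerate(G.nodes)}

# Force refinement seeded with the projected positions instead of random ones.
pos = nx.spring_layout(G, pos=init, iterations=30, seed=0)
print(pos[0])   # final 2D coordinates of node 0
```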
Abstract:
Background: The integration of sequencing and gene interaction data, and the subsequent generation of pathways and networks contained in databases such as KEGG Pathway, is essential for the comprehension of complex biological processes. We noticed the absence of a chart or pathway describing the well-studied preimplantation development stages; furthermore, not all genes involved in the process have entries in KEGG Orthology, information that is important for applying this knowledge to other organisms. Results: In this work we sought to develop the regulatory pathway for the preimplantation development stage using text-mining tools such as Medline Ranker and PESCADOR to reveal biointeractions among the genes involved in this process. The genes present in the resulting pathway were also used as seeds for SeedServer, a software tool developed by our group, to create clusters of homologous genes. These homologues allowed the determination of the last common ancestor for each gene and revealed that the preimplantation development pathway consists of a conserved ancient core of genes with the addition of modern elements. Conclusions: The generation of regulatory pathways through text-mining tools allows the integration of data generated by several studies for a more complete visualization of complex biological processes. Using the genes in this pathway as "seeds" for the generation of clusters of homologues, the pathway can be visualized for other organisms. The clustering of homologous genes, together with the determination of their ancestry, leads to a better understanding of the evolution of this process.
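As a sketch of the ancestry step, the function below determines the last common ancestor of a homolog cluster from each member species' root-to-leaf lineage; the lineages are toy examples rather than SeedServer output.

```python
# Deepest taxon shared by all lineages in a cluster of homologous genes.
def last_common_ancestor(lineages):
    lca = None
    for level in zip(*lineages):      # walk all lineages in lockstep from root
        if len(set(level)) != 1:      # lineages diverge at this level
            break
        lca = level[0]
    return lca

cluster = [
    ["Eukaryota", "Metazoa", "Chordata", "Mammalia", "Mus musculus"],
    ["Eukaryota", "Metazoa", "Chordata", "Mammalia", "Homo sapiens"],
    ["Eukaryota", "Metazoa", "Chordata", "Actinopterygii", "Danio rerio"],
]
print(last_common_ancestor(cluster))   # -> Chordata
```

A gene whose cluster resolves to a deep ancestor belongs to the conserved ancient core; shallower ancestors flag the more modern additions.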
Abstract:
OBJECTIVE: To evaluate tools for the fusion of images generated by tomography and structural and functional magnetic resonance imaging. METHODS: Structural and functional magnetic resonance imaging were performed in a 3-Tesla scanner while a volunteer who had previously undergone cranial tomography carried out motor and somatosensory tasks. The image data were analyzed with different programs, and the results were compared. RESULTS: We constructed a flow chart of computational processes that allowed measurement of the spatial congruence between the methods. No single computational tool contained the entire set of functions necessary to achieve the goal. CONCLUSION: The fusion of the images from the three methods proved feasible with the use of four free-access software programs (OsiriX, Register, MRIcro and FSL). Our results may serve as a basis for building software that will be useful as a virtual tool prior to neurosurgery.
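One simple way to quantify the spatial congruence mentioned in the results is an overlap score between co-registered binary masks of the same structure from two modalities; the sketch below uses the Dice coefficient on synthetic masks, an assumed metric rather than the paper's exact procedure.

```python
# Dice coefficient: 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap.
import numpy as np

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

rng = np.random.default_rng(0)
ct_mask = rng.random((64, 64, 64)) > 0.7      # e.g. structure from tomography
mri_mask = ct_mask.copy()
mri_mask[:4] = False                          # simulate a registration offset
print(f"Dice overlap = {dice(ct_mask, mri_mask):.3f}")
```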
Abstract:
Background: The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data are available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only the access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results: We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools, and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. We have then defined a number of activities and associated guidelines that prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data. Conclusions: The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus ensuring accurate data exchange and information interpretation from exchanged data.
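To make the connector concept concrete, here is a minimal sketch of one: it maps source-specific fields to reference-ontology terms and applies transformation rules to the values on the way through. All names (the fields, the ontology terms, the log-ratio rule) are illustrative assumptions, not the paper's reference ontology.

```python
# A software connector: field mapping to ontology terms + value transformations.
class Connector:
    def __init__(self, field_map: dict, rules: dict):
        self.field_map = field_map   # source field -> reference-ontology term
        self.rules = rules           # ontology term -> value transformation

    def translate(self, record: dict) -> dict:
        out = {}
        for src_field, term in self.field_map.items():
            value = record.get(src_field)
            if term in self.rules:
                value = self.rules[term](value)   # apply transformation rule
            out[term] = value
        return out

# Hypothetical microarray source reporting log2 ratios, exposed as fold change.
conn = Connector(
    field_map={"gene": "GeneIdentifier", "log2ratio": "ExpressionLevel"},
    rules={"ExpressionLevel": lambda v: 2 ** v},
)
print(conn.translate({"gene": "TP53", "log2ratio": 1.0}))
# -> {'GeneIdentifier': 'TP53', 'ExpressionLevel': 2.0}
```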
Abstract:
The cutting tools with the highest wear resistance are those manufactured by the powder metallurgy process, which combines materials development and property design with shape-making and sintering technology. The annual global market for cutting tools amounts to about US$ 12 billion; therefore, any research that improves tool designs and machining process techniques adds value or reduces costs. The aim of this work is to describe the spark plasma sintering (SPS) of cutting tools made of functionally graded materials, to show the suitability of this structural design through a thermal residual stress model and, lastly, to present two kinds of inserts. For this, three cutting tool materials were used (Al₂O₃-ZrO₂, Al₂O₃-TiC, and WC-Co). The samples were sintered by SPS at 1300 °C and 70 MPa. The results showed that the mechanical and thermal displacements may be separated for analysis during thermal treatment. Moreover, the absence of cracks indicated agreement between the experimental results and the predicted residual stresses.
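For intuition about the thermal residual stress model, a first-order estimate of the stress arising between two bonded layers on cooling can be taken from the biaxial CTE-mismatch formula σ ≈ E·Δα·ΔT/(1 − ν); the material constants below are rough, literature-order values used only for illustration, not the paper's model inputs.

```python
# First-order CTE-mismatch residual stress between two bonded layers.
E, nu = 390e9, 0.22      # Young's modulus (Pa) and Poisson ratio, ~Al2O3 order
alpha_a = 8.0e-6         # CTE of the Al2O3-based layer (1/K), approximate
alpha_b = 5.5e-6         # CTE of the WC-Co layer (1/K), approximate
dT = 1300 - 25           # cooling from sintering to room temperature (K)

sigma = E * (alpha_a - alpha_b) * dT / (1 - nu)
print(f"residual stress estimate ~ {sigma/1e6:.0f} MPa")   # GPa-order result
```

Graded (FGM) designs spread this mismatch over several intermediate layers precisely to keep such stresses below the cracking threshold.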
Abstract:
Based on constructs from Activity Theory (Leontiev, 1978), we point to the need to develop activities that reveal the meaning of representations. We examine the use of representations in teaching and offer some suggestions. Shaaron Ainsworth (1999) asserted that, in order to learn from engaging with multiple representations of scientific concepts, students need to be able to (a) understand the codes and signifiers in a representation, (b) understand the links between the representation and the target concept or process, (c) translate key features of the concept across representations, and (d) know which features to emphasize in designing their own representations.