982 results for cartographic visualization methods
Abstract:
The integration of nanostructured films containing biomolecules and silicon-based technologies is a promising direction for reaching miniaturized biosensors that exhibit high sensitivity and selectivity. A challenge, however, is to avoid cross talk among sensing units in an array with multiple sensors located on a small area. In this letter, we describe an array of 16 sensing units of a light-addressable potentiometric sensor (LAPS), made with layer-by-layer (LbL) films of a poly(amidoamine) dendrimer (PAMAM) and single-walled carbon nanotubes (SWNTs), coated with a layer of the enzyme penicillinase. A visual inspection of the data from constant-current measurements with liquid samples containing distinct concentrations of penicillin, glucose, or a buffer indicated possible cross talk between units that contained penicillinase and those that did not. With the use of multidimensional data projection techniques, normally employed in information visualization, we managed to distinguish the results from the modified LAPS, even in cases where the units were adjacent to each other. Furthermore, the plots generated with the interactive document map (IDMAP) projection technique enabled the distinction of the different concentrations of penicillin, from 5 mmol L⁻¹ down to 0.5 mmol L⁻¹. Data visualization also confirmed the enhanced performance of the sensing units containing carbon nanotubes, consistent with the analysis of results for LAPS sensors. The use of visual analytics, as with projection methods, may be essential to handle the large amount of data generated in multiple sensor arrays to achieve high performance in miniaturized systems.
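The projection step described above can be illustrated generically. IDMAP itself is not available in common libraries, so this sketch uses classical MDS from scikit-learn as a stand-in, applied to synthetic readings from a hypothetical 16-unit array (the data, group shifts, and noise level are all assumptions, not the paper's measurements):

```python
# Sketch: projecting multi-unit sensor readings to a 2D plot, in the spirit of
# the multidimensional projection techniques described above. MDS stands in
# for IDMAP; all sensor values below are synthetic.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Hypothetical readings from 16 sensing units for three sample groups:
# each row is one measurement, each column one sensing unit.
groups = {"buffer": 0.0, "penicillin_0.5mM": 1.0, "penicillin_5mM": 3.0}
X, labels = [], []
for name, shift in groups.items():
    for _ in range(10):
        X.append(shift + 0.1 * rng.standard_normal(16))
        labels.append(name)
X = np.asarray(X)

# Project the 16-dimensional readings onto a 2D plane for visual inspection.
proj = MDS(n_components=2, random_state=0).fit_transform(X)
print(proj.shape)  # (30, 2)
```

Plotting `proj` colored by `labels` would then show whether the sample groups separate, analogous to the distinction of penicillin concentrations reported above.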
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The control of molecular architectures has been exploited in layer-by-layer (LbL) films deposited on Au interdigitated electrodes, thus forming an electronic tongue (e-tongue) system that reached an unprecedentedly high sensitivity (down to 10⁻¹² M) in detecting catechol. Such high sensitivity was made possible by using units containing the enzyme tyrosinase, which interacted specifically with catechol, and by processing impedance spectroscopy data with information visualization methods. These latter methods, including the parallel coordinates technique, were also useful for identifying the major contributors to the high distinguishing ability toward catechol. Among several film architectures tested, the most efficient had a tyrosinase layer deposited atop LbL films of alternating layers of dioctadecyldimethylammonium bromide (DODAB) and 1,2-dipalmitoyl-sn-glycero-3-phospho-rac-(1-glycerol) (DPPG), viz., (DODAB/DPPG)5/DODAB/Tyr. The latter represents a more suitable medium for immobilizing tyrosinase than conventional polyelectrolytes. Furthermore, the distinction was more effective at low frequencies, where double-layer effects at the film/liquid interface dominate the electrical response. Because the optimization of film architectures based on information visualization is completely generic, the approach presented here may be extended to designing architectures for other types of applications in addition to sensing and biosensing. © 2013 American Chemical Society.
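The observation that low frequencies discriminate best can be rationalized with a toy impedance model. The series R-C circuit, the component values, and the frequencies below are illustrative assumptions, not the paper's equivalent circuit; the sketch only shows that the double-layer (capacitive) term dominates the impedance magnitude at low frequency:

```python
# Toy model of a film/liquid interface as a resistance in series with a
# double-layer capacitance. At low frequency the capacitive term 1/(j*w*C)
# dominates |Z|, which is where the abstract reports the best distinction.
import numpy as np

R_s = 1e3    # series (solution/film) resistance in ohms -- hypothetical value
C_dl = 1e-6  # double-layer capacitance in farads -- hypothetical value

for f in (1.0, 1e3, 1e6):  # probe frequencies in Hz
    omega = 2 * np.pi * f
    Z = R_s + 1.0 / (1j * omega * C_dl)
    # Fraction of the total impedance magnitude due to the capacitive term.
    frac_capacitive = (1.0 / (omega * C_dl)) / abs(Z)
    print(f"{f:>9.0f} Hz  |Z| = {abs(Z):.3e} ohm  capacitive fraction = {frac_capacitive:.4f}")
```

At 1 Hz the capacitive fraction is close to 1, while at 1 MHz it is negligible, matching the intuition that interfacial (double-layer) effects are visible only in the low-frequency part of the spectrum.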
Abstract:
In this paper we discuss the detection of glucose and triglycerides using information visualization methods to process impedance spectroscopy data. The sensing units contained either lipase or glucose oxidase immobilized in layer-by-layer (LbL) films deposited onto interdigitated electrodes. The optimization consisted of identifying which part of the electrical response and which combination of sensing units yielded the best distinguishing ability. It is shown that complete separation can be obtained for a range of concentrations of glucose and triglyceride when the interactive document map (IDMAP) technique is used to project the data into a two-dimensional plot. Most importantly, the optimization procedure can be extended to other types of biosensors, thus increasing the versatility of analysis provided by tailored molecular architectures exploited with various detection principles. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
A novel protein superfamily with over 600 members was discovered by iterative profile searches and analyzed with powerful bioinformatics and information visualization methods. Evidence exists that these proteins generate a radical species by reductive cleavage of S-adenosylmethionine (SAM) through an unusual Fe-S center. The superfamily (named here Radical SAM) provides evidence that radical-based catalysis is important in a number of previously well-studied but unresolved biochemical pathways and reflects an ancient conserved mechanistic approach to difficult chemistries. Radical SAM proteins catalyze diverse reactions, including unusual methylations, isomerization, sulfur insertion, ring formation, anaerobic oxidation and protein radical formation. They function in DNA precursor, vitamin, cofactor, antibiotic and herbicide biosynthesis and in biodegradation pathways. One eukaryotic member is interferon-inducible and is considered a candidate drug target for osteoporosis; another is observed to bind the neuronal Cdk5 activator protein. Five defining members not previously recognized as homologs are lysine 2,3-aminomutase, biotin synthase, lipoic acid synthase and the activating enzymes for pyruvate formate-lyase and anaerobic ribonucleotide reductase. Two functional predictions for unknown proteins are made based on integrating other data types, such as motif, domain, operon and biochemical pathway information, into an organized view of similarity relationships.
Abstract:
Visualization of high-dimensional data has always been a challenging task. Here we discuss and propose variants of non-linear data projection methods (Generative Topographic Mapping (GTM) and GTM with simultaneous feature saliency (GTM-FS)) that are adapted to be effective on very high-dimensional data. The adaptations use log-space values at certain steps of the Expectation Maximization (EM) algorithm and during the visualization process. We have tested the proposed algorithms by visualizing electrostatic potential data for Major Histocompatibility Complex (MHC) class-I proteins. The experiments show that the variants of GTM and GTM-FS worked successfully with data of more than 2000 dimensions, and we compare the results with other linear/nonlinear projection methods: Principal Component Analysis (PCA), Neuroscale (NSC) and the Gaussian Process Latent Variable Model (GPLVM).
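The log-space adaptation mentioned above can be sketched for a single E-step. The dimensionality, component count, and isotropic Gaussian model below are illustrative assumptions, not the paper's GTM implementation; the point is that naive Gaussian densities underflow at around 2000 dimensions, while logsumexp-normalized responsibilities remain well defined:

```python
# Sketch of the log-space trick: compute EM responsibilities from
# log-densities via logsumexp instead of exponentiating first.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)
D = 2000  # dimensionality comparable to the MHC electrostatic potential data
K = 5     # a handful of mixture components (GTM uses a grid of such centers)

x = rng.standard_normal(D)        # one data point
mu = rng.standard_normal((K, D))  # component centers
beta = 1.0                        # shared inverse variance

# Log of the (unnormalized) isotropic Gaussian density for each component.
log_p = -0.5 * beta * ((x - mu) ** 2).sum(axis=1)

# Naive densities underflow to zero at D = 2000: exp(-~2000) is below the
# smallest representable float64, so all information is lost.
naive = np.exp(log_p)
print(naive.sum())  # 0.0

# Log-space responsibilities stay well defined and sum to ~1.
log_r = log_p - logsumexp(log_p)
resp = np.exp(log_r)
print(resp.sum())
```

The same normalization pattern applies at every E-step and when mapping responsibilities into the latent visualization space.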
Abstract:
Wydział Nauk Geograficznych i Geologicznych (Faculty of Geographical and Geological Sciences)
Abstract:
The objective of this work was to apply visualization methods to the experimental study of cornstarch dust-air mixture combustion in a closed vessel under microgravity conditions. A dispersion system with a small scale of turbulence was used in the experiments. A gas igniter initiated combustion of the dust-air mixture in the central or top part of the vessel. Flame propagation through the quiescent mixture was recorded by a high-speed video camera. Experiments showed a very irregular flame front and an irregular distribution of the regions with local reactions of re-burning behind the flame front at a later stage of combustion. Heat transfer from the hot combustion products to the walls is shown to play an important role in the development of combustion. The maximum pressure and maximum rate of pressure rise were higher for flame propagation from the vessel center than for a flame developed from the top part of the vessel. The smaller increase in the rate of pressure rise for the flame developed from the top of the vessel, in comparison with that developed from the vessel center, was caused by the much faster increase of the contact surface between the combustion gases and the vessel walls. It was found that in dust flames only a small part of the heat was released at the flame front, the remaining part being released far behind it.
Abstract:
As terabyte datasets become the norm, the focus has shifted away from our ability to produce and store ever larger amounts of data and onto its utilization. It is becoming increasingly difficult to gain meaningful insights into the data produced, and many forms of the data we currently produce do not fit easily into traditional visualization methods. This paper presents a novel visualization technique based on the concept of a Data Forest. Our Data Forest has been designed to use virtual reality (VR) as its presentation method. VR is a natural medium for investigating large datasets. Our approach can easily be adapted for use in a variety of different ways, from a stand-alone single-user environment to large multi-user collaborative environments. A test application is presented using multi-dimensional data to demonstrate the concepts involved.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Analyzing and modeling relationships between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects in chemical datasets is a challenging task for scientific researchers in the field of cheminformatics. Therefore, (Q)SAR model validation is essential to ensure future model predictivity on unseen compounds. Proper validation is also one of the requirements of regulatory authorities for approving the use of such models in real-world scenarios as an alternative testing method. At the same time, however, the question of how to validate a (Q)SAR model is still under discussion. In this work, we empirically compare k-fold cross-validation with external test set validation. The introduced workflow makes it possible to apply the built and validated models to large amounts of unseen data, and to compare the performance of the different validation approaches. Our experimental results indicate that cross-validation produces (Q)SAR models with higher predictivity than external test set validation and reduces the variance of the results. Statistical validation is important for evaluating the performance of (Q)SAR models, but it does not support the user in better understanding the properties of the model or the underlying correlations. We present the 3D molecular viewer CheS-Mapper (Chemical Space Mapper), which arranges compounds in 3D space such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. Comprehensive functionalities, including clustering, alignment of compounds according to their 3D structure, and feature highlighting, aid the chemist in better understanding patterns and regularities and in relating the observations to established scientific knowledge.
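The two validation schemes being compared can be sketched with scikit-learn on synthetic data. The dataset, the random forest model, and the split sizes below are arbitrary illustrative choices; the paper's actual descriptors and learning algorithms are not reproduced here:

```python
# Sketch: k-fold cross-validation versus a single external test set split.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for a (Q)SAR dataset: rows are "compounds",
# columns are "descriptors", y is a binary activity label.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)

# k-fold cross-validation: every compound is used for testing exactly once,
# and the spread of fold scores estimates the variance of the result.
cv_scores = cross_val_score(model, X, y, cv=10)
print(f"10-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

# External test set validation: one held-out split, a single score.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
ext_score = model.fit(X_tr, y_tr).score(X_te, y_te)
print(f"external test set accuracy: {ext_score:.3f}")
```

Repeating the external split with different random states would expose the higher variance of single-split validation that the abstract reports.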
Even though visualization tools for analyzing (Q)SAR information in small-molecule datasets exist, integrated visualization methods that allow for the investigation of model validation results are still lacking. We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. New functionalities in CheS-Mapper 2.0 facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. Our approach reveals whether the endpoint is modeled too specifically or too generically and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org.
Abstract:
Cerebrovascular diseases are significant causes of death and disability in humans. Improvements in diagnostic and therapeutic approaches rely strongly on adequate gyrencephalic, large animal models, which are in demand for translational research. Ovine stroke models may represent a promising approach but are currently limited by insufficient knowledge of the venous system of the cerebral angioarchitecture. The present study was intended to provide a comprehensive anatomical analysis of the intracranial venous system in sheep as a reliable basis for the interpretation of experimental results in such ovine models. We used corrosion casts as well as contrast-enhanced magnetic resonance venography to scrutinize blood drainage from the brain. This combined approach yielded detailed and, to some extent, novel findings. In particular, we provide evidence for chordae Willisii and lateral venous lacunae, and report on connections between the dorsal and ventral sinuses in this species. For the first time, we also describe venous confluences in the deep cerebral venous system and an 'anterior condylar confluent' as seen in humans. This report provides a detailed reference for the interpretation of venous diagnostic imaging findings in sheep, including an assessment of structure detectability by in vivo (imaging) versus ex vivo (corrosion cast) visualization methods. Moreover, it features a comprehensive interspecies comparison of the venous cerebral angioarchitecture in man, rodents, canines and sheep as a relevant large animal model species, and describes possible implications for translational cerebrovascular research.
Abstract:
Multidimensional compound optimization is a new paradigm in the drug discovery process, yielding efficiencies during early stages and reducing attrition in the later stages of drug development. The success of this strategy relies heavily on understanding this multidimensional data and extracting useful information from it. This paper demonstrates how principled visualization algorithms can be used to understand and explore a large data set created in the early stages of drug discovery. The experiments presented are performed on a real-world data set comprising biological activity data and some whole-molecule physicochemical properties. Data visualization is a popular way of presenting complex data in a simpler form. We have applied powerful principled visualization methods, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), to help domain experts (screening scientists, chemists, biologists, etc.) understand the data and draw meaningful conclusions. We also benchmark these principled methods against better-known visualization approaches, namely principal component analysis (PCA), Sammon's mapping, and self-organizing maps (SOMs), to demonstrate their enhanced power to help the user visualize the large multidimensional data sets one has to deal with during the early stages of the drug discovery process. The results reported clearly show that the GTM and HGTM algorithms allow the user to cluster active compounds for different targets and understand them better than the benchmarks do. An interactive software tool supporting these visualization algorithms was provided to the domain experts. The tool supports the domain experts in exploring the projections obtained from the visualization algorithms, providing facilities such as parallel coordinate plots, magnification factors, directional curvatures, and integration with industry-standard software. © 2006 American Chemical Society.
Abstract:
Archaeologists are often considered frontrunners in employing spatial approaches within the social sciences and humanities, including geospatial technologies such as geographic information systems (GIS) that are now routinely used in archaeology. Since the late 1980s, GIS has mainly been used to support data collection and management as well as spatial analysis and modeling. While fruitful, these efforts have arguably neglected the potential contribution of advanced visualization methods to the generation of broader archaeological knowledge. This paper reviews the use of GIS in archaeology from a geographic visualization (geovisual) perspective and examines how these methods can broaden the scope of archaeological research in an era of more user-friendly cyber-infrastructures. Like most computational databases, GIS do not easily support temporal data. This limitation is particularly problematic in archaeology because processes and events are best understood in space and time. To deal with such shortcomings in existing tools, archaeologists often end up having to reduce the diversity and complexity of archaeological phenomena. Recent developments in geographic visualization begin to address some of these issues, and are pertinent in the globalized world as archaeologists amass vast new bodies of geo-referenced information and work towards integrating them with traditional archaeological data. Greater effort in developing geovisualization and geovisual analytics appropriate for archaeological data can create opportunities to visualize, navigate and assess different sources of information within the larger archaeological community, thus enhancing possibilities for collaborative research and new forms of critical inquiry.