983 results for Software visualization


Relevance: 30.00%

Abstract:

The analysis of histological sections has long been a valuable tool in pathological studies. The interpretation of tissue conditions, however, relies directly on visual evaluation of tissue slides, which may be difficult to interpret because of poor contrast or poor color differentiation. The Chromatic Contrast Visualization System (CCV) combines an optical microscope with electronically controlled light-emitting diodes (LEDs) in order to generate adjustable intensities of the RGB channels used for sample illumination. While most image enhancement techniques rely on software post-processing of an image acquired under standard illumination conditions, CCV produces real-time variations in the color composition of the light source itself. The possibility of covering the entire RGB chromatic range, combined with the optical properties of the different tissues, allows for a substantial enhancement of image details. Traditional image acquisition methods do not exploit these visual enhancements, which results in poorer visual distinction among tissue structures. Photodynamic therapy (PDT) procedures are of increasing interest in the treatment of several forms of cancer. This study uses histological slides of rat liver samples that were induced to necrosis after being exposed to PDT. Results show that visualization of tissue structures could be improved by changing the colors and intensities of the microscope light source. PDT-necrosed tissue samples are better differentiated when illuminated with different color wavelengths, leading to improved differentiation of cells in the necrosis area. Given the potential benefits for interpretation and diagnosis, further research in this field could make CCV an attractive technique for medical applications.
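
A minimal sketch of how such an illumination sweep could be automated. The `set_led_rgb` and `capture_frame` hooks are hypothetical placeholders for the hardware interface (the abstract does not specify one), and the RMS-contrast score is a generic stand-in for whatever criterion an operator would actually use:

```python
import numpy as np

def rms_contrast(gray):
    """RMS contrast: standard deviation of normalized pixel intensities."""
    gray = np.asarray(gray, dtype=float) / 255.0
    return gray.std()

def sweep_illumination(set_led_rgb, capture_frame, steps=5):
    """Sweep RGB LED intensities and return the setting with the highest contrast.

    `set_led_rgb(r, g, b)` and `capture_frame()` are placeholders for the
    microscope's hardware interface, not part of the CCV system itself.
    """
    levels = np.linspace(0.0, 1.0, steps)
    best = (None, -1.0)
    for r in levels:
        for g in levels:
            for b in levels:
                set_led_rgb(r, g, b)
                frame = capture_frame()          # expected: 2-D grayscale array
                score = rms_contrast(frame)
                if score > best[1]:
                    best = ((r, g, b), score)
    return best

# Example with dummy hooks (replace with the real microscope interface):
best_rgb, best_score = sweep_illumination(
    set_led_rgb=lambda r, g, b: None,
    capture_frame=lambda: np.random.randint(0, 256, (480, 640)),
    steps=3)
print(best_rgb, round(best_score, 3))
```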

Relevance: 30.00%

Abstract:

Advanced building energy data visualization is a way to detect performance problems in commercial buildings. By placing sensors in a building that collect data such as air temperature and electrical power, it becomes possible to process the data in data visualization software. This software generates visual diagrams so that the building manager or operator can see, for example, whether power consumption is too high. A first step (before sensors are installed in a building) to assess a building's energy consumption can be to use a benchmarking tool. A number of benchmarking tools are available for free on the Internet. Each tool takes a slightly different approach, but they all show how much energy a building consumes compared to other similar buildings. In this study a new web design for the benchmarking tool CalARCH has been developed. CalARCH is developed at the Berkeley Lab in Berkeley, California, USA. CalARCH uses data collected only from buildings in California, and is only for comparing buildings in California with other similar buildings in the state. Five different versions of the web site were made. A web survey was then conducted to determine which version would be best for CalARCH. The results showed that Version 5 and Version 3 were the best. A new version was then made, based on these two versions. This study was carried out at the Lawrence Berkeley Laboratory.
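
CalARCH's own comparison method is not described above; as a rough illustration of what whole-building benchmarking involves, the sketch below computes a building's energy use intensity and its rank within a peer group. The numbers are made up for the example:

```python
import numpy as np

def energy_use_intensity(annual_kwh, floor_area_m2):
    """Annual energy use per unit floor area (kWh/m2)."""
    return annual_kwh / floor_area_m2

def peer_share_using_at_least(building_eui, peer_euis):
    """Percentage of peer buildings that use at least as much energy per m2."""
    peers = np.asarray(peer_euis, dtype=float)
    return float((peers >= building_eui).mean() * 100.0)

# Example: a building using 180,000 kWh/year over 1,200 m2, against a peer group.
eui = energy_use_intensity(180_000, 1_200)              # 150 kWh/m2
peers = [90, 120, 140, 155, 170, 200, 230, 260]          # peer EUIs, kWh/m2
share = peer_share_using_at_least(eui, peers)
print(f"EUI = {eui:.0f} kWh/m2; {share:.0f}% of peers use at least as much energy")
```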

Relevance: 30.00%

Abstract:

Running hydrodynamic models interactively allows both visual exploration and changes to the model state during simulation. One of the main characteristics of an interactive model is that it should provide immediate feedback to the user, for example responding to changes in model state or view settings. For this reason, such features are usually only available for models with a relatively small number of computational cells, which are used mainly for demonstration and educational purposes. It would be useful if interactive modeling also worked for models typically used in consultancy projects involving large-scale simulations. This raises a number of technical challenges related to the combination of the model itself and the visualisation tools (scalability, implementation of an appropriate API for control of and access to the internal state). While model parallelisation is increasingly addressed by the environmental modeling community, little effort has been spent on developing a high-performance interactive environment. What can we learn from other high-end visualisation domains such as 3D animation, gaming and virtual globes (Autodesk 3ds Max, Second Life, Google Earth) that also focus on efficient interaction with 3D environments? In these domains high efficiency is usually achieved through computer graphics algorithms such as surface simplification depending on the current view and distance to objects, and efficient caching of aggregated representations of object meshes. We investigate how these algorithms can be re-used in the context of interactive hydrodynamic modeling without significant changes to the model code, allowing model operation on both multi-core CPU personal computers and high-performance computer clusters.
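
One of the graphics techniques mentioned, view-dependent simplification, can be sketched as follows. The heuristic, parameter names and thresholds are illustrative assumptions, not taken from any particular hydrodynamic model or rendering engine:

```python
import numpy as np

def pick_level_of_detail(cell_centers, camera_pos, base_spacing,
                         pixels_per_cell=2.0, fov_deg=60.0, viewport_px=1024):
    """Choose a coarsening level per cell so that one (possibly aggregated) cell
    projects to roughly `pixels_per_cell` pixels on screen.

    Returns an integer level per cell: 0 = full resolution, 1 = merge 2x2 cells, etc.
    """
    centers = np.asarray(cell_centers, dtype=float)
    dist = np.linalg.norm(centers - np.asarray(camera_pos, dtype=float), axis=1)
    # Approximate size of one pixel in world units at each cell's distance.
    pixel_world = 2.0 * dist * np.tan(np.radians(fov_deg) / 2.0) / viewport_px
    # Coarsen until a cell spans at least `pixels_per_cell` pixels.
    target_spacing = np.maximum(base_spacing, pixels_per_cell * pixel_world)
    level = np.ceil(np.log2(target_spacing / base_spacing)).astype(int)
    return np.clip(level, 0, None)

# Example: cells far from the camera are drawn from a coarser, cached aggregation.
centers = [(0, 0, 0), (500, 0, 0), (5000, 0, 0)]
print(pick_level_of_detail(centers, camera_pos=(0, 0, 10), base_spacing=1.0))
```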

Relevance: 30.00%

Abstract:

Due to the industry's current need to integrate production data from several sources and transform them into useful information for decision-making, there is ever-growing demand for information visualization systems that support this functionality. On the other hand, because of the high competitiveness of the market, a common practice nowadays is the development of industrial systems with characteristics of modularity, distribution, flexibility, scalability, adaptability, interoperability, reusability and web access. These characteristics provide extra agility and make it easier to adapt to the frequent changes in market demand. Based on the arguments exposed above, this work specifies a component-based architecture, along with the development of a system based on that architecture, for the visualization of industrial data. The system was conceived to supply on-line information and, optionally, historical information on variables coming from the production process. This work shows that the component-based architecture developed has the requirements needed to obtain a robust, reliable and easily maintainable system, thus meeting industrial needs. The architecture also allows components to be added, removed or updated at run time, through a web-based component manager, further streamlining the system's adaptation and updating process.
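
The run-time component management described above could look roughly like the following sketch. The `ComponentManager` API and the example components are hypothetical and only illustrate add/remove/hot-update of visualization components, not the actual system:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Component:
    name: str
    version: str
    render: Callable[[dict], str]   # turns process data into a visualization fragment

class ComponentManager:
    """Minimal registry allowing components to be added, removed, or updated at run time."""

    def __init__(self) -> None:
        self._components: Dict[str, Component] = {}

    def register(self, component: Component) -> None:
        self._components[component.name] = component      # add or hot-update

    def unregister(self, name: str) -> None:
        self._components.pop(name, None)

    def render_all(self, plant_data: dict) -> list:
        return [c.render(plant_data) for c in self._components.values()]

# Example: register an on-line gauge, then hot-swap it with a newer version.
manager = ComponentManager()
manager.register(Component("temperature-gauge", "1.0",
                           lambda d: f"T = {d['temp_c']} C"))
manager.register(Component("temperature-gauge", "1.1",
                           lambda d: f"Temperature: {d['temp_c']} C"))
print(manager.render_all({"temp_c": 78.4}))
```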

Relevance: 30.00%

Abstract:

In this work, we propose the Interperception paradigm, a new approach comprising a set of rules and a software architecture for bringing users of different interfaces together in the same virtual environment. The system detects the user's resources and applies transformations to the data so that it can be visualized on 3D, 2D and textual (1D) interfaces. This allows any user to connect, access information, and exchange information with other users in a feasible way, without the need to change hardware or software. As results, two virtual environments built according to this paradigm are presented.
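
A minimal sketch of the kind of transformation the paradigm implies: one shared scene description is reduced to a 3D, 2D or textual form depending on the client's capability. The data model and reduction rules below are invented for illustration and are not the Interperception rules themselves:

```python
def adapt_scene(scene, capability):
    """Transform a shared scene description to match a client's interface capability."""
    if capability == "3d":
        return scene                                            # full geometry
    if capability == "2d":
        # Drop the depth coordinate: a top-down map view.
        return [{**obj, "pos": obj["pos"][:2]} for obj in scene]
    if capability == "text":
        # 1-D/textual interface: describe objects instead of drawing them.
        return [f'{obj["name"]} at {tuple(round(c, 1) for c in obj["pos"])}'
                for obj in scene]
    raise ValueError(f"unknown capability: {capability}")

scene = [{"name": "avatar", "pos": (2.0, 3.0, 1.5)},
         {"name": "door",   "pos": (0.0, 5.0, 0.0)}]
for cap in ("3d", "2d", "text"):
    print(cap, "->", adapt_scene(scene, cap))
```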

Relevance: 30.00%

Abstract:

Model-oriented strategies have been used to facilitate product customization in the software product lines (SPL) context and to generate the source code of the derived products through variability management. Most of these strategies use a UML (Unified Modeling Language)-based model specification. Despite its wide application, UML-based model specification has some limitations: it is essentially graphical, it has deficiencies in precisely describing the semantic representation of the system architecture, and it generates large models, thus hampering the visualization and comprehension of the system elements. In contrast, architecture description languages (ADLs) provide graphical and textual support for the structural representation of architectural elements, their constraints and their interactions. This thesis introduces ArchSPL-MDD, a model-driven strategy in which models are specified and configured using the LightPL-ACME ADL. The strategy is associated with a generic process with systematic activities that enable customized source code to be generated automatically from the product model. The ArchSPL-MDD strategy integrates aspect-oriented software development (AOSD), model-driven development (MDD) and SPL, enabling the explicit modeling as well as the modularization of variabilities and crosscutting concerns. The process is instantiated by the ArchSPL-MDD tool, which supports the specification of domain models (the focus of the development) in LightPL-ACME. ArchSPL-MDD uses the Ginga Digital TV middleware as a case study. In order to evaluate the efficiency, applicability, expressiveness, and complexity of the ArchSPL-MDD strategy, a controlled experiment was carried out to compare the ArchSPL-MDD tool with the GingaForAll tool, which instantiates the process of the GingaForAll UML-based strategy. Both tools were used to configure the products of the Ginga SPL and to generate the product source code.
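
The variability management underlying such product derivation can be illustrated with a toy feature model: a product configuration selects which optional features (and their artifacts) are included in the generated product. The feature and artifact names below are invented and do not come from the Ginga SPL or LightPL-ACME:

```python
# Toy feature model: mandatory artifacts plus optional features mapped to artifacts.
FEATURE_MODEL = {
    "mandatory": ["core_player.py"],
    "optional": {
        "interactivity": ["interactivity_module.py"],
        "return-channel": ["return_channel_module.py"],
    },
}

def derive_product(configuration):
    """Return the list of artifacts for a product, given the selected optional features."""
    unknown = set(configuration) - set(FEATURE_MODEL["optional"])
    if unknown:
        raise ValueError(f"features not in the feature model: {unknown}")
    artifacts = list(FEATURE_MODEL["mandatory"])
    for feature in configuration:
        artifacts += FEATURE_MODEL["optional"][feature]
    return artifacts

print(derive_product(["interactivity"]))
print(derive_product(["interactivity", "return-channel"]))
```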

Relevance: 30.00%

Abstract:

An efficient technique to cut polygonal meshes as a step in the geometric modeling of topographic and geological data has been developed. In boundary-represented models of outcropping strata and faulted horizons, polygonal meshes often intersect each other. TRICUT determines the line of intersection and re-triangulates the area of contact. Along this line the mesh is split into two or more parts which can be selected for removal. The user interaction takes place in the 3D model space. The intersection, selection and removal are under graphic control. The visualization of outcropping geological structures in digital terrain models is improved by determining intersections against a slightly shifted terrain model. Thus, the outcrop line becomes a surface which overlaps the terrain in its initial position. The area of this overlapping surface changes with respect to the strike and dip of the structure, the morphology and the offset. Some applications of TRICUT to different real datasets are shown. TRICUT is implemented in C++ using the Visualization Toolkit in conjunction with the RAPID and TRIANGLE libraries. The program runs under Linux and UNIX using the Mesa OpenGL library. This work gives an example of solving a complex 3D geometric problem by integrating available robust public-domain software. (C) 2002 Elsevier B.V. Ltd. All rights reserved.
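
TRICUT itself is a C++ program built on VTK, RAPID and TRIANGLE; as a rough illustration of the intersection-line step only, the Python/VTK sketch below intersects two overlapping triangulated surfaces and retrieves the contact polyline. It assumes a VTK build that provides vtkIntersectionPolyDataFilter and is not TRICUT's actual code:

```python
import vtk

# Two overlapping triangulated test surfaces standing in for horizon meshes.
sphere_a = vtk.vtkSphereSource()
sphere_a.SetCenter(0.0, 0.0, 0.0)
sphere_a.Update()
sphere_b = vtk.vtkSphereSource()
sphere_b.SetCenter(0.3, 0.0, 0.0)
sphere_b.Update()

# Determine the line of intersection and re-triangulate both surfaces along it.
intersector = vtk.vtkIntersectionPolyDataFilter()
intersector.SetInputData(0, sphere_a.GetOutput())
intersector.SetInputData(1, sphere_b.GetOutput())
intersector.Update()

contact_line = intersector.GetOutput(0)    # polyline(s) along the contact
split_surface = intersector.GetOutput(1)   # first surface, split along the line
print(contact_line.GetNumberOfPoints(), "points on the intersection line")
print(split_surface.GetNumberOfCells(), "triangles after re-triangulation")
```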

Relevance: 30.00%

Abstract:

Hemoglobinopathies were included in the Brazilian Neonatal Screening Program on June 6, 2001. Automated high-performance liquid chromatography (HPLC) was indicated as one of the diagnostic methods. The amount of information generated by these systems is immense, and the behavior of groups cannot always be observed in individual analyses. Three-dimensional (3-D) visualization techniques can be applied to extract patterns, trends or relations from the results stored in databases. We applied a 3-D visualization tool to analyze patterns in the results of neonatal hemoglobinopathy diagnosis by HPLC. The laboratory results of 2520 newborn analyses carried out in 2001 and 2002 were used. The Fast, F1, F and A peaks, which are detected by the analytical system, were chosen as attributes for mapping. To establish a behavior pattern, the results were classified into groups according to hemoglobin phenotype: normal (N = 2169), variant (N = 73) and thalassemia (N = 279). 3-D visualization was performed with the FastMap DB tool; there were two distribution patterns in the normal group, due to variation in the amplitude of the values obtained by HPLC for the F1 window. It allowed separation of the samples with normal Hb from those with alpha thalassemia, based on a significant difference (P > 0.05) between the mean values of the Fast and A peaks, demonstrating the need for better evaluation of chromatograms; this method could be used to help diagnose alpha thalassemia in newborns.
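
The study maps the four chromatogram attributes into 3-D with FastMap; the sketch below uses PCA as a stand-in projection simply to show the idea of placing each newborn sample at a 3-D position derived from its peak values. The numbers are made up and carry no clinical meaning:

```python
import numpy as np
from sklearn.decomposition import PCA

# Each row: illustrative (Fast, F1, F, A) peak values for one sample.
samples = np.array([
    [1.2, 8.5, 70.1, 18.9],
    [1.0, 9.1, 72.4, 16.8],
    [3.4, 7.9, 65.0, 22.3],
    [2.8, 8.2, 66.7, 21.1],
])
labels = ["normal", "normal", "alpha-thal?", "alpha-thal?"]

coords = PCA(n_components=3).fit_transform(samples)   # 4 attributes -> 3-D positions
for label, (x, y, z) in zip(labels, coords):
    print(f"{label:12s} ({x:6.2f}, {y:6.2f}, {z:6.2f})")
```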

Relevance: 30.00%

Abstract:

Graduate Program in Electrical Engineering - FEIS

Relevance: 30.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 30.00%

Abstract:

Topographical surfaces can be represented with a good degree of accuracy by means of maps. However, these are not always the best tools for understanding more complex reliefs. In this sense, the main contribution of this work is to specify and implement the architecture of an open-source software system capable of representing TIN (Triangular Irregular Network) based digital terrain models. The system implementation follows the object-oriented and generic programming paradigms, enabling the integration of various open-source tools such as GDAL, OGR, OpenGL, OpenSceneGraph and Qt. Furthermore, the representation core of the system is able to work with multiple topological data structures from which all the connectivity relations between the entities (vertices, edges and faces) of a planar triangulation can be extracted in constant time, which greatly helps the implementation of real-time applications. This is an important capability, for example, when using laser survey data (Lidar, ALS, TLS), allowing the generation of triangular mesh models on the order of millions of points.
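
A half-edge structure is one classic example of a topological data structure offering the constant-time connectivity queries mentioned above. The sketch below is illustrative only and is not the system's actual representation core:

```python
from dataclasses import dataclass

@dataclass
class HalfEdge:
    origin: int                 # index of the vertex this half-edge starts at
    face: int                   # index of the triangle to its left
    next: "HalfEdge" = None     # next half-edge around the same face
    twin: "HalfEdge" = None     # opposite half-edge on the neighbouring face (None on border)

def face_vertices(he: HalfEdge):
    """Vertices of a triangle, walked in constant time (three fixed steps)."""
    return (he.origin, he.next.origin, he.next.next.origin)

def adjacent_face(he: HalfEdge):
    """Face on the other side of this edge, or None on the terrain border."""
    return he.twin.face if he.twin else None

# Two triangles (0,1,2) and (2,1,3) sharing the edge between vertices 1 and 2.
a0, a1, a2 = HalfEdge(0, 0), HalfEdge(1, 0), HalfEdge(2, 0)
b0, b1, b2 = HalfEdge(2, 1), HalfEdge(1, 1), HalfEdge(3, 1)
a0.next, a1.next, a2.next = a1, a2, a0
b0.next, b1.next, b2.next = b1, b2, b0
a1.twin, b0.twin = b0, a1            # the shared edge 1->2 / 2->1
print(face_vertices(a0), adjacent_face(a1))
```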

Relevance: 30.00%

Abstract:

Robust analysis of vector fields has been established as an important tool for deriving insights from the complex systems these fields model. Traditional analysis and visualization techniques rely primarily on computing streamlines through numerical integration. The inherent numerical errors of such approaches are usually ignored, leading to inconsistencies that cause unreliable visualizations and can ultimately prevent in-depth analysis. We propose a new representation for vector fields on surfaces that replaces numerical integration through triangles with maps from the triangle boundaries to themselves. This representation, called edge maps, permits a concise description of flow behaviors and is equivalent to computing all possible streamlines at a user-defined error threshold. Independent of this error, streamlines computed using edge maps are guaranteed to be consistent up to floating-point precision, enabling the stable extraction of features such as the topological skeleton. Furthermore, our representation explicitly stores spatial and temporal errors, which we use to produce more informative visualizations. This work describes the construction of edge maps, the error quantification, and a refinement procedure that adheres to a user-defined error bound. Finally, we introduce new visualizations that use the additional information provided by edge maps to indicate the uncertainty involved in computing streamlines and topological structures.
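
The core idea can be sketched schematically: each triangle contributes a map from its boundary to itself (entry point to exit point, plus the neighbouring triangle the flow enters), and chaining these maps traces a streamline. The toy maps below are hand-made, not derived from a real vector field or from the paper's construction:

```python
# edge_maps[triangle] = function: boundary parameter s in [0, 1) -> (exit s, next triangle)
edge_maps = {
    0: lambda s: ((s + 0.25) % 1.0, 1),
    1: lambda s: ((s + 0.40) % 1.0, 2),
    2: lambda s: ((s + 0.10) % 1.0, 0),
}

def trace_streamline(triangle, s, steps=6):
    """Follow the flow by repeatedly applying per-triangle boundary maps."""
    path = [(triangle, s)]
    for _ in range(steps):
        s, triangle = edge_maps[triangle](s)
        path.append((triangle, s))
    return path

for tri, s in trace_streamline(0, 0.1):
    print(f"triangle {tri}, boundary position {s:.2f}")
```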

Relevance: 30.00%

Abstract:

Analyzing and modeling relationships between the structure of chemical compounds, their physico-chemical properties, and their biological or toxic effects in chemical datasets is a challenging task for scientific researchers in the field of cheminformatics. Such relationships are captured in (quantitative) structure-activity relationship, or (Q)SAR, models, and model validation is essential to ensure predictivity on unseen compounds. Proper validation is also one of the requirements of regulatory authorities for approving the use of such models in real-world scenarios as an alternative testing method. At the same time, however, the question of how to validate a (Q)SAR model is still under discussion. In this work, we empirically compare k-fold cross-validation with external test set validation. The introduced workflow makes it possible to apply the built and validated models to large amounts of unseen data, and to compare the performance of the different validation approaches. Our experimental results indicate that cross-validation produces (Q)SAR models with higher predictivity than external test set validation and reduces the variance of the results.

Statistical validation is important to evaluate the performance of (Q)SAR models, but it does not support the user in better understanding the properties of the model or the underlying correlations. We present the 3D molecular viewer CheS-Mapper (Chemical Space Mapper), which arranges compounds in 3D space such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. Comprehensive functionalities, including clustering, alignment of compounds according to their 3D structure, and feature highlighting, aid the chemist in better understanding patterns and regularities and in relating the observations to established scientific knowledge.

Even though visualization tools for analyzing (Q)SAR information in small-molecule datasets exist, integrated visualization methods that allow for the investigation of model validation results are still lacking. We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. New functionalities in CheS-Mapper 2.0 facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. Our approach reveals whether the endpoint is modeled too specifically or too generically, and it highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org.
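
The two validation schemes compared in the first part can be sketched with scikit-learn on a synthetic dataset standing in for a (Q)SAR dataset of descriptors X and activities y. This is a generic illustration, not the workflow used in the thesis:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for a (Q)SAR dataset: 300 compounds, 20 descriptors.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0)

# k-fold cross-validation: every compound is used for testing exactly once.
cv_scores = cross_val_score(model, X, y, cv=5)

# External test set validation: a single random hold-out split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
external_score = model.fit(X_tr, y_tr).score(X_te, y_te)

print(f"5-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")
print(f"external test set accuracy: {external_score:.3f}")
```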