40 results for Multiresolution Visualization
in University of Queensland eSpace - Australia
Abstract:
Multiresolution Triangular Mesh (MTM) models are widely used to improve the performance of large terrain visualization by replacing the original model with a simplified one. MTM models, which consist of both original and simplified data, are commonly stored in spatial database systems due to their size. The relatively slow access speed of disks makes data retrieval the bottleneck of such terrain visualization systems. Existing spatial access methods proposed to address this problem rely on main-memory MTM models, which leads to significant overhead during query processing. In this paper, we approach the problem from a new perspective and propose a novel MTM called direct mesh that is designed specifically for secondary storage. It supports available indexing methods natively and requires no modification to the MTM structure. Experimental results, based on two real-world data sets, show an average performance improvement of 5-10 times over existing methods.
Abstract:
Terrain can be approximated by a triangular mesh consisting of millions of 3D points. Multiresolution triangular mesh (MTM) structures are designed to support applications that use terrain data at variable levels of detail (LOD). Typically, an MTM adopts a tree structure in which a parent node represents a lower-resolution approximation of its descendants. Given a region of interest (ROI) and an LOD, retrieving the required terrain data from the database involves traversing the MTM tree from the root to reach all nodes satisfying the ROI and LOD conditions. This process, though commonly used for multiresolution terrain visualization, is inefficient, as it incurs either a large number of sequential I/O operations or the fetching of a large amount of extraneous data. Various spatial indexes have been proposed to address this problem; however, level-by-level tree traversal remains a common practice for obtaining topological information among the retrieved terrain data. A new MTM data structure called direct mesh is proposed. We demonstrate that with direct mesh the amount of data retrieved can be substantially reduced. Compared with existing MTM indexing methods, a significant performance improvement has been observed for real-life terrain data.
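The level-by-level traversal that the abstract describes as inefficient can be sketched as follows. This is a minimal illustration with hypothetical node layout and field names, not the paper's actual data structures:

```python
# Minimal sketch of top-down MTM tree traversal: collect all nodes inside a
# region of interest (ROI) at the requested level of detail (LOD).
# Node fields and the bounding-box representation are illustrative assumptions.

class MTMNode:
    def __init__(self, level, bbox, children=None):
        self.level = level          # 0 = root (coarsest approximation)
        self.bbox = bbox            # (xmin, ymin, xmax, ymax)
        self.children = children or []

def intersects(a, b):
    """Axis-aligned bounding-box overlap test."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def retrieve(node, roi, lod):
    """Descend level by level, returning all nodes inside `roi` at level `lod`."""
    if not intersects(node.bbox, roi):
        return []
    if node.level == lod or not node.children:
        return [node]
    result = []
    for child in node.children:
        result.extend(retrieve(child, roi, lod))
    return result
```

Every call descends from the root, which is why a deep tree stored on disk incurs many sequential I/O operations; the direct mesh structure is proposed precisely to avoid this per-query descent.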
Abstract:
One of the challenges in scientific visualization is to generate software libraries suitable for the large-scale data emerging from tera-scale simulations and instruments. We describe the efforts currently under way at SDSC and NPACI to address these challenges. The scope of the SDSC project spans data handling, graphics, visualization, and scientific application domains. Components of the research focus on the following areas: intelligent data storage, layout, and handling, using associated “Floor-Plan” metadata; performance optimization on parallel architectures; extension of SDSC’s scalable, parallel, direct volume renderer to allow perspective viewing; and interactive rendering of fractional images (“imagelets”), which facilitates the examination of large datasets. These concepts are coordinated within a data-visualization pipeline, which operates on component data blocks sized to fit within the available computing resources. A key feature of the scheme is that the metadata tagging the data blocks can be propagated and applied consistently: at the disk level; in distributing the computations across parallel processors; in “imagelet” composition; and in feature tagging. The work reflects the emerging challenges and opportunities presented by the ongoing progress in high-performance computing (HPC) and the deployment of the data, computational, and visualization Grids.
Abstract:
This paper is concerned with the use of scientific visualization methods for the analysis of feedforward neural networks (NNs). Inevitably, the kinds of data associated with the design and implementation of neural networks are of very high dimensionality, presenting a major challenge for visualization. A method is described using the well-known statistical technique of principal component analysis (PCA). This is found to be an effective and useful method of visualizing the learning trajectories of many learning algorithms such as back-propagation and can also be used to provide insight into the learning process and the nature of the error surface.
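The PCA technique described above can be illustrated in a few lines: record the network's weight vector at each training step, then project the trajectory onto its first two principal components. The data below are synthetic stand-ins for a real learning trajectory; none of the names come from the paper:

```python
# Illustrative sketch: PCA projection of a high-dimensional "learning
# trajectory" (one weight-vector snapshot per training step) onto 2D.
# The trajectory here is synthetic: a drift toward a random target plus noise.
import numpy as np

rng = np.random.default_rng(0)
steps = np.linspace(0.0, 1.0, 200)[:, None]      # 200 training snapshots
target = rng.normal(size=(1, 50))                # 50-dimensional weight space
weights = steps * target + 0.05 * rng.normal(size=(200, 50))

# PCA via SVD of the mean-centred snapshot matrix.
centred = weights - weights.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
trajectory_2d = centred @ vt[:2].T               # shape (200, 2): points to plot

print(trajectory_2d.shape)
```

Plotting `trajectory_2d` as a connected path gives the kind of learning-trajectory visualization the abstract describes, with most of the variance captured by the first component.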
Abstract:
Spatial data are now used extensively in the Web environment, providing online customized maps and supporting map-based applications. The full potential of Web-based spatial applications, however, has yet to be achieved due to performance issues related to the large size and high complexity of spatial data. In this paper, we introduce a multiresolution approach to spatial data management and query processing such that the database server can choose spatial data at the right resolution level for different Web applications. One highly desirable property of the proposed approach is that the server-side processing cost and network traffic can be reduced when the level of resolution required by an application is low. Another advantage is that our approach pushes complex multiresolution structures and algorithms into the spatial database engine, so the developer of spatial Web applications need not be concerned with such complexity. This paper explains the basic idea, technical feasibility, and applications of multiresolution spatial databases.
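The server-side choice the abstract describes, namely picking a resolution level appropriate to the requesting application, can be sketched as a simple threshold lookup. The function name and scale thresholds are assumptions for illustration, not the paper's design:

```python
# Toy illustration: choose a resolution level from the map scale requested by
# a Web application, so coarser data are shipped when less detail is needed.
# Thresholds are arbitrary example values (scale denominators).

def choose_resolution(scale_denominator, levels=(1000, 10000, 100000)):
    """Return the index of the finest level adequate for the given scale;
    larger indices mean coarser (and cheaper-to-ship) data."""
    for level, threshold in enumerate(levels):
        if scale_denominator <= threshold:
            return level
    return len(levels)  # beyond all thresholds: coarsest available data

print(choose_resolution(500))     # detailed street-level map
print(choose_resolution(50000))   # regional overview
```

Doing this inside the database engine, rather than in each application, is the complexity-hiding advantage the abstract claims.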
Abstract:
The progressive changes in the water distribution within rabbit muscles were studied by nuclear magnetic resonance microscopy during the first 24 h postmortem. T-2 images revealed development of interspersed lines with higher signal intensities in the muscle, reflecting formation of channels containing mobile water. The appearance of the interspersed lines progressed throughout the measuring period and became increasingly evident. After about 3 h postmortem the signal intensity also increased in areas near the surface of the samples, which reflects migration of the mobile water to the sample surface. Proton density images showed the presence of a chemical shift artifact in the interspersed lines, implying that the intrinsic development of water channels progressed in close proximity to the connective tissue. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
A diligent and careful examination of the mouth and oral structures has historically been deficient in revealing premalignant and malignant oral lesions. Conventional screening practice for oral neoplastic lesions involves visual scrutiny of the oral tissues with the naked eye under projected incandescent or halogen illumination. Visualization is the principal strategy used to find patients with lesions at risk for malignant transformation; hence, any procedure that highlights neoplastic lesions should aid the clinician. This pilot study examined the usefulness of an acetic acid wash and chemiluminescent light (Vizilite) in enhancing visualization of oral mucosal white lesions, and its ability to highlight malignant and potentially malignant lesions. Fifty-five patients referred for assessment of a white lesion were prospectively screened with Vizilite, and an incisional biopsy was performed for a definitive diagnosis. The age, sex, and smoking status of all patients were recorded, and all lesions were photographed. The visibility, location, size, and border of each lesion, and the presence of satellite lesions, were also recorded. The Vizilite tool enhanced intraoral visualization of 26 white lesions, but it could not distinguish between epithelial hyperplasia, dysplasia, or carcinoma. Indeed, all lesions appeared “aceto-white”, regardless of the definitive diagnosis. On one occasion, Vizilite aided in the identification of a satellite lesion that was not observed by routine visual inspection. Vizilite appears to be a useful visualization tool, but it does not aid in the identification of malignant and potentially malignant lesions of the oral mucosa.
Abstract:
Localization of signaling complexes to specific microdomains coordinates signal transduction at the plasma membrane. Using immunogold electron microscopy of plasma membrane sheets coupled with spatial point pattern analysis, we have visualized morphologically featureless microdomains, including lipid rafts, in situ and at high resolution. We find that an inner-plasma membrane lipid raft marker displays cholesterol-dependent clustering in microdomains with a mean diameter of 44 nm that occupy 35% of the cell surface. Cross-linking an outer-leaflet raft protein results in the redistribution of inner-leaflet rafts, but they retain their modular structure. Analysis of Ras microlocalization shows that inactive H-ras is distributed between lipid rafts and a cholesterol-independent microdomain. Conversely, activated H-ras and K-ras reside predominantly in nonoverlapping, cholesterol-independent microdomains. Galectin-1 stabilizes the association of activated H-ras with these nonraft microdomains, whereas K-ras clustering is supported by farnesylation, but not geranylgeranylation. These results illustrate that the inner plasma membrane comprises a complex mosaic of discrete microdomains. Differential spatial localization within this framework can likely account for the distinct signal outputs from the highly homologous Ras proteins.
Abstract:
Flows of complex fluids need to be understood at both macroscopic and molecular scales, because it is the macroscopic response that controls the fluid behavior, but the molecular scale that ultimately gives rise to rheological and solid-state properties. Here the flow field of an entangled polymer melt through an extended contraction, typical of many polymer processes, is imaged optically and by small-angle neutron scattering. The dual-probe technique samples both the macroscopic stress field in the flow and the microscopic configuration of the polymer molecules at selected points. The results are compared with a recent tube model molecular theory of entangled melt flow that is able to calculate both the stress and the single-chain structure factor from first principles. The combined action of the three fundamental entangled processes of reptation, contour length fluctuation, and convective constraint release is essential to account quantitatively for the rich rheological behavior. The multiscale approach unearths a new feature: Orientation at the length scale of the entire chain decays considerably more slowly than at the smaller entanglement length.
Abstract:
We explore both the rheology and the complex flow behavior of monodisperse polymer melts. Adequate quantities of monodisperse polymer were synthesized so that both the material's rheology and its microprocessing behavior could be established. In parallel, we employ a molecular theory for the polymer rheology that is suitable for comparison with experimental rheometric data and for numerical simulation of microprocessing flows. The model is capable of matching both shear and extensional data with minimal parameter fitting. Experimental data for the processing behavior of monodisperse polymers are presented for the first time as flow birefringence and pressure difference data obtained using a Multipass Rheometer with an 11:1 constriction entry and exit flow. Matching of experimental processing data was obtained using the constitutive equation with the Lagrangian numerical solver, FLOWSOLVE. The results show the direct coupling between molecular constitutive response and macroscopic processing behavior, and differentiate flow effects that arise separately from orientation and stretch. (c) 2005 The Society of Rheology.
Abstract:
For some physics students, the concept of a particle travelling faster than the speed of light holds endless fascination, and Cerenkov radiation is a visible consequence of a charged particle travelling through a medium at locally superluminal velocities. The Heaviside-Feynman equations for calculating the magnetic and electric fields of a moving charge have been known for many decades, but it is only recently that the computing power needed to plot the fields of such a particle has become readily available for student use. This paper investigates and illustrates the calculation of Maxwell's D field in homogeneous isotropic media for arbitrary, including superluminal, constant velocities, and uses the results as a basis for discussing energy transfer in the electromagnetic field.
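For reference, the Heaviside-Feynman fields of a point charge $q$ are usually written in the following textbook form (this is the standard statement, e.g. as given by Feynman, not an equation taken from the paper itself):

```latex
\mathbf{E} = \frac{q}{4\pi\varepsilon_0}
\left[
  \frac{\hat{\mathbf{e}}_{r'}}{r'^{2}}
  + \frac{r'}{c}\,\frac{d}{dt}\!\left(\frac{\hat{\mathbf{e}}_{r'}}{r'^{2}}\right)
  + \frac{1}{c^{2}}\,\frac{d^{2}\hat{\mathbf{e}}_{r'}}{dt^{2}}
\right],
\qquad
\mathbf{B} = -\,\hat{\mathbf{e}}_{r'} \times \frac{\mathbf{E}}{c},
```

where $r'$ is the retarded distance and $\hat{\mathbf{e}}_{r'}$ is the unit vector toward the apparent (retarded) position of the charge. Evaluating the retarded quantities for a superluminal source in a medium is what makes the numerical plotting discussed in the paper nontrivial.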
Abstract:
Spatial data are particularly useful in mobile environments. However, due to the low bandwidth of most wireless networks, developing large spatial database applications becomes a challenging process. In this paper, we provide the first attempt to combine two important techniques, multiresolution spatial data structures and semantic caching, towards efficient spatial query processing in mobile environments. Based on a study of the characteristics of multiresolution spatial data (MSD) and multiresolution spatial queries, we propose a new semantic caching model called Multiresolution Semantic Caching (MSC) for caching MSD in mobile environments. MSC enriches the traditional three-category query processing in semantic caching to five categories, improving performance in three ways: 1) reducing the amount and complexity of the remainder queries; 2) avoiding redundant transmission of spatial data already residing in the cache; and 3) providing satisfactory answers before 100% of the query results have been transmitted to the client side. Our extensive experiments on a very large and complex real spatial database show that MSC outperforms traditional semantic caching models significantly.
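The probe/remainder split at the heart of semantic caching can be sketched as follows. Tile identifiers stand in for spatial regions, and all names are illustrative assumptions, not the paper's API:

```python
# Hypothetical sketch of semantic caching's core step: answer what the cache
# already holds locally, and send the server only a "remainder" query for the
# missing part. Integer tile ids stand in for spatial regions.

def split_query(requested_tiles, cached_tiles):
    """Partition a query into a local cache probe and a remainder query."""
    requested = set(requested_tiles)
    probe = requested & set(cached_tiles)   # answered from the client cache
    remainder = requested - probe           # fetched from the server
    return probe, remainder

probe, remainder = split_query({1, 2, 3, 4}, {2, 4, 7})
print(sorted(probe), sorted(remainder))
```

Shrinking the remainder set is exactly benefit 1) above, and serving the probe set without re-transmission is benefit 2); MSC additionally accounts for resolution levels when deciding whether a cached tile can answer part of a query.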