910 results for 3D feature extraction


Relevance:

30.00%

Publisher:

Abstract:

Visualization of high-dimensional data requires a mapping to a visual space. Whenever the goal is to preserve similarity relations, a frequent strategy is to use 2D projections, which afford intuitive interactive exploration, e.g., by users locating and selecting groups and gradually drilling down to individual objects. In this paper, we propose a framework for projecting high-dimensional data to 3D visual spaces, based on a generalization of the Least-Square Projection (LSP). We compare projections to 2D and 3D visual spaces both quantitatively and through a user study considering certain exploration tasks. The quantitative analysis confirms that 3D projections outperform 2D projections in terms of precision. The user study indicates that certain tasks can be more reliably and confidently answered with 3D projections. Nonetheless, as 3D projections are displayed on 2D screens, interaction is more difficult. Therefore, we incorporate suitable interaction functionalities into a framework that supports 3D transformations, predefined optimal 2D views, coordinated 2D and 3D views, and hierarchical 3D cluster definition and exploration. For visually encoding data clusters in a 3D setup, we employ color coding of projected data points as well as four types of surface renderings. A second user study evaluates the suitability of these visual encodings. Several examples illustrate the framework's applicability both for visual exploration of multidimensional abstract (non-spatial) data and for the feature space of multi-variate spatial data.
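The abstract does not spell out the LSP generalization itself, but the core idea of embedding high-dimensional points into a 3D visual space while preserving pairwise distances can be illustrated with a classical multidimensional scaling (MDS) sketch in plain NumPy. This is a hedged stand-in, not the paper's LSP algorithm; the comparison at the end mirrors the paper's finding that a 3D embedding preserves distances at least as well as a 2D one:

```python
import numpy as np

def mds_project(X, dim=3):
    """Project the rows of X to `dim` dimensions with classical MDS:
    double-center the squared-distance matrix and keep the top
    eigenvectors (a simple distance-preserving stand-in for LSP)."""
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T        # squared distances
    J = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
    B = -0.5 * J @ D2 @ J                                # Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]                      # top-`dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))      # hypothetical 10-dimensional data
Y3 = mds_project(X, dim=3)         # 3D visual space
Y2 = mds_project(X, dim=2)         # 2D visual space, for comparison
```

On such data, the mean absolute distance distortion of the 3D embedding is smaller than that of the 2D embedding, which is the quantitative sense in which 3D projections are "more precise".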

Relevance:

30.00%

Publisher:

Abstract:

This work describes a novel methodology for automatic contour extraction from 2D images of 3D neurons (e.g. camera lucida images and other types of 2D microscopy). Most contour-based shape analysis methods cannot be used to characterize such cells because of overlaps between neuronal processes. The proposed framework is specifically aimed at the problem of contour following even in the presence of multiple overlaps. First, the input image is preprocessed in order to obtain an 8-connected skeleton with one-pixel-wide branches, as well as a set of critical regions (i.e., bifurcations and crossings). Next, for each subtree, the tracking stage iteratively labels all valid branch pixels up to a critical region, where it determines the suitable direction to proceed. Finally, the labeled skeleton segments are followed in order to yield the parametric contour of the neuronal shape under analysis. The reported system was successfully tested on several images, and results for a set of three neuron images are presented here, each pertaining to a different class, i.e. alpha, delta and epsilon ganglion cells, containing a total of 34 crossings; the algorithm successfully resolved all of these overlaps. The method has also been found to exhibit robustness even for images with close parallel segments. The proposed method is robust and may be implemented in an efficient manner. The introduction of this approach should pave the way for a more systematic application of contour-based shape analysis methods in neuronal morphology. (C) 2008 Elsevier B.V. All rights reserved.
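The final stage, following pixels around a shape to produce a parametric contour, can be sketched with Moore-neighbour boundary tracing. This is a much-simplified stand-in for the paper's overlap-aware follower (no skeletons, no critical-region handling), shown on a single blob:

```python
import numpy as np

def trace_contour(img):
    """Moore-neighbour contour tracing on a binary image: returns the
    closed parametric contour (ordered list of (row, col) pixels) of a
    single foreground blob."""
    # 8-neighbourhood offsets in clockwise order, starting at west
    offs = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
            (0, 1), (1, 1), (1, 0), (1, -1)]
    start = tuple(int(v) for v in np.argwhere(img)[0])  # first pixel in raster order
    contour = [start]
    prev = (start[0], start[1] - 1)        # background pixel west of start
    cur = start
    while True:
        # position of the backtrack pixel in cur's neighbourhood
        k = offs.index((prev[0] - cur[0], prev[1] - cur[1]))
        for i in range(1, 9):              # scan clockwise from the backtrack
            dr, dc = offs[(k + i) % 8]
            cand = (cur[0] + dr, cur[1] + dc)
            if (0 <= cand[0] < img.shape[0] and 0 <= cand[1] < img.shape[1]
                    and img[cand]):
                pr, pc = offs[(k + i - 1) % 8]
                prev = (cur[0] + pr, cur[1] + pc)   # last background examined
                cur = cand
                break
        else:                              # isolated pixel: nothing to follow
            return contour
        if cur == start:                   # closed the contour
            return contour
        contour.append(cur)

img = np.zeros((5, 5), dtype=bool)
img[1:4, 1:4] = True                       # a 3x3 square blob
contour = trace_contour(img)               # its 8 boundary pixels, in order
```

The paper's contribution is precisely what this sketch omits: deciding the correct direction at bifurcations and crossings so the trace does not jump between overlapping processes.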

Relevance:

30.00%

Publisher:

Abstract:

This project was developed as a partnership between the Laboratory of Stratigraphic Analysis of the Geology Department of UFRN and the company Millennium Inorganic Chemicals Mineração Ltda. The company is located at the northern end of the Paraíba coast, in the municipality of Mataraca. Millennium's main prospected products are heavy minerals, such as ilmenite, rutile and zircon, present in the dune sands. These dunes are predominantly inactive and overlie the upper portion of the Barreiras Formation rocks. Mining is carried out with a dredge floating on an artificial lake over the dunes. The dredge removes dune sand from the lake bottom (after loosening it from the lake borders with water jets) and pumps it through a pipeline to the concentration plant, where the minerals are separated. The present work consisted in acquiring the external geometry of the dunes so that a 3D static model of these sedimentary deposits could be built, with emphasis on the behaviour of the structural top of the Barreiras Formation rocks (the lower limit of the deposit). Knowledge of this surface is important when planning the mining advance, because a miscalculation can bring the dredge too close to this limit, with the risk that rock fragments obstruct the dredge, causing financial losses both from equipment repair and from days of halted production. During the field campaigns (carried out in 2006 and 2007), topographic surveys were performed with a Total Station and geodetic GPS, as well as shallow geophysical acquisition with GPR (Ground Penetrating Radar). Almost 10.4 km of topography and 10 km of GPR profiles were acquired. The geodetic GPS was used for georeferencing the data and for the topographic survey of a 630 m traverse line in the 2007 campaign.
The GPR proved to be a reliable, environmentally clean and fast acquisition method, with low cost compared with traditional methods such as drilling. Its main advantage is that it provides continuous information on the upper surface of the Barreiras Formation rocks. The 3D static models were built from the acquired data using two specific 3D visualization packages: GoCAD 2.0.8 and Datamine. The 3D visualization allows a better understanding of the behaviour of the Barreiras surface, makes several types of measurements and calculations possible, and allows the procedures used for mineral extraction to be carried out with greater safety.
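The modelling step, turning scattered survey points into a continuous "structural top" surface, can be sketched with simple gridding. The survey coordinates and the dipping-plane elevations below are hypothetical (the thesis used GoCAD and Datamine on real Total Station / GPS / GPR data); the sketch only shows the interpolation-to-grid idea:

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical scattered elevations of the Barreiras Formation top,
# standing in for points measured along topographic and GPR profiles.
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 100.0, size=(200, 2))
xy = np.vstack([xy, [[0, 0], [0, 100], [100, 0], [100, 100]]])  # close the hull
true_top = lambda x, y: 10.0 + 0.05 * x - 0.02 * y              # gently dipping plane
z = true_top(xy[:, 0], xy[:, 1])

# Interpolate onto a regular grid: the "structural top" surface of the model.
gx, gy = np.meshgrid(np.linspace(2, 98, 50), np.linspace(2, 98, 50))
top = griddata(xy, z, (gx, gy), method="linear")

# Keep the dredge a safety margin above the formation top (hypothetical 2 m).
dredge_floor = top + 2.0
```

With a surface like `top` in hand, the mining planner can check at every grid cell how close the dredge may safely work to the lower limit of the deposit.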

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

The increase in the amount of spatial data collected has motivated the development of geovisualisation techniques, which provide an important resource to support knowledge extraction and decision making. One of these techniques is the 3D graph, which enables a dynamic and flexible analysis of the results obtained by spatial data mining algorithms, especially when several georeferenced objects occur at the same locality. The original contribution of this work is the strengthening of the visual resources in a computational environment for spatial data mining; the efficiency of these techniques is then demonstrated using a real database. The application proved very useful for interpreting the obtained results, such as patterns occurring at the same locality, and for supporting activities driven by the visualisation of results. © 2013 Springer-Verlag.
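The abstract's central problem, several georeferenced results falling on the same locality and hiding each other on a 2D map, has a simple core remedy that a 3D graph exploits: count co-located objects and use the count as a third (height) axis. The coordinates below are made up; the paper's actual environment is not specified in the abstract:

```python
from collections import Counter

# Hypothetical mining results, each anchored at a (lon, lat) locality.
results = [(35.2, -5.8), (35.2, -5.8), (34.9, -5.9), (35.2, -5.8), (34.9, -5.9)]

# Pile up incidences at the same locality; the count becomes the height
# of a 3D bar, so overlapping objects stay visible instead of occluding.
heights = Counter(results)
bars = [(x, y, h) for (x, y), h in heights.items()]   # (x, y, bar height)
```

Feeding `bars` to any 3D plotting backend then shows one bar per locality, with height proportional to the number of co-located patterns.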

Relevance:

30.00%

Publisher:

Abstract:

The algorithm creates a buffer area around the cartographic features of interest in one of the images and compares it with the other image. During the comparison, the algorithm counts the equal and different points and uses these counts to compute the statistical measures of the analysis. One such measure is correctness, which shows the user the percentage of points that were correctly extracted. Another is completeness, which shows the percentage of points that really belong to the feature of interest. The third measure expresses the quality achieved by the extraction method, since it is computed from the previously calculated correctness and completeness. In all the tests performed, the calculated statistical measures could be used to quantify the quality achieved by the extraction method under evaluation. It is therefore possible to say that the developed algorithm can be used to evaluate extraction methods for cartographic features of interest, since the results obtained were promising.
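The three measures described can be sketched directly on binary rasters. The buffer is realised here by morphological dilation, and quality is taken as TP / (TP + FP + FN), a common formulation for buffer-based evaluation of extracted linear features; the exact formulas of the paper's algorithm are assumptions:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def buffer_metrics(extracted, reference, buffer_px=2):
    """Buffer-based comparison of an extracted feature raster against a
    reference: a point counts as matched when it falls inside a buffer of
    `buffer_px` pixels around the other image's feature."""
    ref_buf = binary_dilation(reference, iterations=buffer_px)
    ext_buf = binary_dilation(extracted, iterations=buffer_px)
    tp_ext = int(np.logical_and(extracted, ref_buf).sum())  # correctly extracted points
    tp_ref = int(np.logical_and(reference, ext_buf).sum())  # reference points recovered
    fp = int(extracted.sum()) - tp_ext                      # spurious extractions
    fn = int(reference.sum()) - tp_ref                      # missed reference points
    correctness = tp_ext / int(extracted.sum())
    completeness = tp_ref / int(reference.sum())
    quality = tp_ext / (tp_ext + fp + fn)                   # combines both measures
    return correctness, completeness, quality

reference = np.zeros((12, 12), dtype=bool)
reference[5, 2:9] = True                    # reference line feature
extracted = np.zeros((12, 12), dtype=bool)
extracted[6, 2:9] = True                    # extraction, off by one pixel
extracted[0, 0] = True                      # plus one spurious point
corr, comp, qual = buffer_metrics(extracted, reference)
```

Here the one-pixel offset is absorbed by the buffer, so completeness is 1.0, while the single spurious point lowers correctness and quality to 7/8.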

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to evaluate the physicochemical properties of the avocado pulp of four different varieties (Avocado, Guatemala, Dickinson, and Butter pear) and to identify which has the greatest potential for oil extraction. Fresh avocado pulp was characterized by determining its moisture, protein, fat, ash, carbohydrate and energy contents. The carotenoid and chlorophyll contents were determined by an organic-solvent extraction method. The results showed significant differences in fruit composition between varieties. However, the striking feature in all varieties is the high lipid content; Avocado and Dickinson are the most suitable varieties for oil extraction, taking into account the moisture content and the lipid levels in the pulp. Moreover, the variety Dickinson is the most affected by the parameters evaluated in terms of overall quality. Chlorophyll and carotenoids, fat-soluble pigments, showed a negative correlation with lipids, which may be related to their function in the fruit. The varieties Avocado and Dickinson are an alternative for oil extraction with great commercial potential to be exploited, thus avoiding waste and increasing farmers' income.
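The energy content mentioned alongside the proximate composition is conventionally computed from the Atwater factors (4 kcal/g for protein and carbohydrate, 9 kcal/g for fat). The composition values below are purely illustrative, not the study's measurements:

```python
def energy_kcal(protein_g, fat_g, carb_g):
    """Energy per 100 g of pulp from the Atwater conversion factors:
    4 kcal/g protein, 9 kcal/g fat, 4 kcal/g carbohydrate."""
    return 4.0 * protein_g + 9.0 * fat_g + 4.0 * carb_g

# Illustrative composition per 100 g of avocado pulp (hypothetical values)
e = energy_kcal(protein_g=1.5, fat_g=15.0, carb_g=6.0)   # dominated by the fat term
```

The calculation makes the abstract's point concrete: with lipids this high, the fat term dominates the energy value, which is why lipid-rich varieties are the oil-extraction candidates.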

Relevance:

30.00%

Publisher:

Abstract:

In this thesis a mathematical model is derived that describes charge and energy transport in semiconductor devices such as transistors, and numerical simulations of these physical processes are performed. To accomplish this, methods of theoretical physics, functional analysis, numerical mathematics and computer programming are applied. After an introduction to the status quo of semiconductor device simulation methods and a brief review of the relevant history, attention shifts to the construction of the model that serves as the basis of the subsequent derivations. The starting point is an important equation of the theory of dilute gases, from which the model equations are derived and specified by means of a series expansion method. This is done in a multi-stage derivation process, which is mainly taken from a scientific paper and which does not constitute the focus of this thesis. In the following phase we specify the mathematical setting and make the model assumptions precise, using methods of functional analysis. Since the equations we deal with are coupled, we are concerned with a nonstandard problem; the theory of scalar elliptic equations, in contrast, is by now well established. Subsequently, we turn to the numerical discretization of the equations. A special finite-element method is used for the discretization; this special approach is necessary to make the numerical results suitable for practical application. By a series of transformations of the discrete model we derive a system of algebraic equations that is amenable to numerical evaluation. Using purpose-built computer programs we solve the equations to obtain approximate solutions. These programs are based on new and specialized iteration procedures that were developed and thoroughly tested within the scope of this research work.
Due to their importance and novelty, these procedures are explained and demonstrated in detail. We compare the new iterations with a standard method, complemented by a feature that fits it to the current context. A further innovation is the computation of solutions on three-dimensional domains, which are still rare. Special attention is paid to the applicability of the 3D simulation tools, and the programs are designed to have justifiable computational complexity. Simulation results for some models of contemporary semiconductor devices are shown and commented on in detail. Finally, we give an outlook on future developments and enhancements of the models and of the algorithms used.
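The thesis's specialized iteration procedures are not given in the abstract, but their generic shape, decoupling a coupled nonlinear system and iterating with damping (as in Gummel-type iterations for device equations), can be sketched on a toy system. The system below is an assumption chosen only so the iteration demonstrably converges:

```python
import numpy as np

def damped_fixed_point(G, x0, damping=0.5, tol=1e-10, maxit=200):
    """Solve x = G(x) by damped fixed-point iteration,
    x_{k+1} = (1 - w) x_k + w G(x_k),
    the generic shape of decoupling iterations for coupled device equations."""
    x = np.asarray(x0, dtype=float)
    for k in range(maxit):
        x_new = (1.0 - damping) * x + damping * G(x)
        if np.linalg.norm(x_new - x) < tol:     # successive iterates agree
            return x_new, k + 1
        x = x_new
    raise RuntimeError("no convergence within maxit iterations")

# Toy coupled system standing in for the coupled transport equations:
# u = cos(v)/2 and v = sin(u)/2.
G = lambda x: np.array([np.cos(x[1]) / 2.0, np.sin(x[0]) / 2.0])
sol, iters = damped_fixed_point(G, [0.0, 0.0])
```

Damping trades speed for robustness: for strongly coupled systems, taking only a fraction of the update per step is often what makes the iteration converge at all.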

Relevance:

30.00%

Publisher:

Abstract:

Supercritical Emulsion Extraction technology (SEE-C) was proposed for the production of poly-lactic-co-glycolic acid (PLGA) microcarriers. SEE-C operating parameters such as pressure, temperature and flow-rate ratios were analyzed, and the process performance was optimized in terms of size distribution and encapsulation efficiency. Microdevices loaded with bovine serum insulin were produced with different sizes (2 and 3 µm) or insulin loads (3 and 6 mg/g) and with an encapsulation efficiency of 60%. The microcarriers were characterized in terms of their insulin release profile in two different media (PBS and DMEM), and the diffusion and degradation constants were estimated using a mathematical model. PLGA microdevices were also used in a culture of embryonic ventricular myoblasts (cell line H9c2, obtained from rat) in an FBS-free medium to monitor cell viability and growth as a function of the insulin released. Good cell viability and growth were observed on 3 µm microdevices loaded with 3 mg/g of insulin. PLGA microspheres loaded with growth factors (GFs) were charged into alginate scaffolds with human Mesenchymal Stem Cells (hMSC) for bone tissue engineering, with the aim of monitoring the effect of the local release of these signals on cell differentiation. These "living" 3D scaffolds were incubated in a direct-perfusion tubular bioreactor to enhance nutrient transport and to expose the cells to a given shear stress. Different GFs, such as h-VEGF, h-BMP2 and a mix of the two (ratio 1:1), were loaded, and alginate beads were recovered from the dynamic (tubular perfusion system bioreactor) and static cultures at different time points (1st, 7th and 21st days) for analytical assays such as live/dead, alkaline phosphatase, osteocalcin, osteopontin and von Kossa immunoassays. The immunoassays consistently confirmed better cell differentiation in the bioreactor than in the static culture, and revealed a strong influence of the BMP-2 released in the scaffold on cell differentiation.
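The paper's diffusion/degradation release model is not given in the abstract; as a hedged illustration of what a cumulative release profile looks like, here is a simple burst-plus-first-order model with made-up constants (not the paper's fitted values):

```python
import numpy as np

def released_fraction(t_days, burst=0.15, k=0.08):
    """Cumulative insulin fraction released from a microcarrier:
    an initial burst plus first-order release. Illustrative model only;
    the paper instead fits diffusion and degradation constants to the
    measured PBS and DMEM profiles."""
    t = np.asarray(t_days, dtype=float)
    return burst + (1.0 - burst) * (1.0 - np.exp(-k * t))

# Sample the profile at the culture time points used in the study
t = np.array([0.0, 1.0, 7.0, 21.0])
frac = released_fraction(t)     # monotonically increasing, bounded by 1
```

Such a curve is what couples the materials work to the biology: the released fraction at each time point sets the local insulin (or GF) dose the cells actually see.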

Relevance:

30.00%

Publisher:

Abstract:

Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and application of practical software tools and efficient algorithms from the field of computer science, with the goal of enabling atmospheric scientists to analyse and gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. Three practical tools are presented: two software systems (Insight and IWAL) for different types of processing and interactive visualization of data, and an efficient algorithm for data segmentation implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining data from different sources at runtime, a variety of data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, of which two examples are presented: the use of Insight as a WMS (web map service) server, and the automatic production of image sequences for the visualization of cyclone simulations.
The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data, which enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects. Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, aimed primarily at students. As a web application, it avoids the need to retrieve all input data sets and to install and operate complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
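The detection-and-tracking pipeline described above can be sketched in miniature: threshold a 3D field into connected components ("features"), then link features between time steps by voxel overlap. The thesis's actual algorithm is more sophisticated; this is only the generic overlap-tracking idea, including how splitting events show up:

```python
import numpy as np
from scipy.ndimage import label

def segment(field, threshold):
    """Threshold a 3D field and label its connected components (features)."""
    mask = field > threshold
    labels, n = label(mask)
    return labels, n

def track(labels_a, labels_b):
    """Overlap-based tracking between consecutive time steps: feature i of
    step A links to feature j of step B when their voxels overlap. Several
    successors mark splitting, several predecessors merging; a feature with
    no successor undergoes lysis, one with no predecessor genesis."""
    links = set()
    both = (labels_a > 0) & (labels_b > 0)
    for i, j in zip(labels_a[both], labels_b[both]):
        links.add((int(i), int(j)))
    return links

# Synthetic 3D fields: one feature that splits into two at the next step.
step_a = np.zeros((3, 6, 6))
step_a[1, 2:4, 1:5] = 1.0
step_b = np.zeros((3, 6, 6))
step_b[1, 2:4, 1:3] = 1.0
step_b[1, 2:4, 4:6] = 1.0
la, na = segment(step_a, 0.5)
lb, nb = segment(step_b, 0.5)
links = track(la, lb)            # feature 1 links to both successors: a split
```

Because both halves of step B overlap the single feature of step A, the link set contains two successors for feature 1, which is exactly the signature of a splitting event.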

Relevance:

30.00%

Publisher:

Abstract:

The framework in question is an environment designed to apply Machine Learning techniques (in particular Random Forests) to the SGM (Semi-Global Matching) stereo matching algorithm, in order to improve on the accuracy of its standard version. The goal of this thesis is to modify some settings of this framework so that it better adapts to the directionality of the scanlines (introducing rectangular support windows, orthogonal to each scanline, and the training of separate forests per scanline) and to extend its functionality with new features, such as the distance to the nearest directional edge and the distinctiveness, computed on the left image of the stereo pair, and the directional edges on the disparity maps. The final aim is to run extensive tests on the Middlebury 2014 and KITTI datasets and to collect data describing the positive or negative impact of the modifications.
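The "one forest per scanline direction" idea can be sketched with scikit-learn. The features and labels below are synthetic stand-ins (the thesis uses per-pixel SGM confidence features such as the distance to the nearest directional edge and the distinctiveness); the sketch only shows training separate forests keyed by scanline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# One forest per scanline direction, as the thesis proposes, instead of a
# single shared forest. Feature columns are hypothetical stand-ins for
# per-pixel confidence measures; labels mark "disparity correct?".
forests = {}
for scanline in ("horizontal", "vertical"):
    X = rng.normal(size=(200, 3))                     # synthetic feature vectors
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # synthetic correctness labels
    forests[scanline] = RandomForestClassifier(
        n_estimators=50, random_state=0).fit(X, y)

acc = forests["vertical"].score(X, y)   # training accuracy of the last forest
```

At disparity-refinement time, each pixel's features would be scored by the forest matching the scanline that produced them, letting the learned weighting differ between directions.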

Relevance:

30.00%

Publisher:

Abstract:

Recently developed computer applications provide tools for planning cranio-maxillofacial interventions based on 3-dimensional (3D) virtual models of the patient's skull obtained from computed-tomography (CT) scans. Precise knowledge of the location of the mid-facial plane is important for the assessment of deformities and for planning reconstructive procedures. In this work, a new method is presented to automatically compute the mid-facial plane on the basis of a surface model of the facial skeleton obtained from CT. The method matches homologous surface areas selected by the user on the left and right facial side using an iterative closest point optimization. The symmetry plane which best approximates this matching transformation is then computed. This new automatic method was evaluated in an experimental study. The study included experienced and inexperienced clinicians defining the symmetry plane by a selection of landmarks. This manual definition was systematically compared with the definition resulting from the new automatic method: the quality of the symmetry planes was evaluated by their ability to match homologous areas of the face. Results show that the new automatic method is reliable and leads to significantly higher accuracy than the manual method when performed by inexperienced clinicians. In addition, the method performs equally well in difficult trauma situations, where key landmarks are unreliable or absent.
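A simplified version of the final step, recovering a symmetry plane from matched left/right surface points, can be sketched as follows. The paper derives the plane from the ICP matching transformation; the sketch below instead uses matched point pairs directly (plane through the pair midpoints, normal along the mean left-to-right displacement), which is a hedged approximation of the same idea:

```python
import numpy as np

def symmetry_plane(left, right):
    """Approximate mid-facial (symmetry) plane from matched homologous
    point pairs: the plane passes through the mean of the pair midpoints,
    with its normal along the mean left-to-right displacement."""
    d = left - right                      # displacement vectors across the face
    n = d.mean(axis=0)
    n = n / np.linalg.norm(n)             # unit plane normal
    mid = (left + right) / 2.0            # midpoints lie near the plane
    point = mid.mean(axis=0)              # a point on the plane
    return n, point

# Synthetic test: points mirrored exactly about the plane x = 1.0
rng = np.random.default_rng(2)
right = rng.normal(size=(100, 3))
left = right.copy()
left[:, 0] = 2.0 - right[:, 0]            # reflect x about x = 1
n, p0 = symmetry_plane(left, right)       # recovers normal (1, 0, 0), x = 1
```

On perfectly mirrored data the construction is exact; on real skulls the ICP matching step is what supplies usable homologous pairs in the first place.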

Relevance:

30.00%

Publisher:

Abstract:

Hereditary spastic paraparesis (HSP) is a heterogeneous group of neurodegenerative disorders with progressive lower limb spasticity, categorized into pure (p-HSP) and complicated forms (c-HSP). The purpose of this study was to evaluate if brain volumes in HSP were altered compared with a control population. Brain volumes were determined in patients suffering from HSP, including both p-HSP (n = 21) and c-HSP type (n = 12), and 30 age-matched healthy controls, using brain parenchymal fractions (BPF) calculated from 3D MRI data in an observer-independent procedure. In addition, the tissue segments of grey and white matter were analysed separately. In HSP patients, BPF were significantly reduced compared with controls both for the whole patient group (P < 0.001) and for both subgroups, indicating considerable brain atrophy. In contrast to controls who showed a decline of brain volumes with age, this physiological phenomenon was less pronounced in HSP. Therefore, global brain parenchyma reduction, involving both grey and white matter, seems to be a feature in both subtypes of HSP. Atrophy was more pronounced in c-HSP, consistent with the more severe phenotype including extramotor involvement. Thus, global brain atrophy, detected by MRI-based brain volume quantification, is a biological marker in HSP subtypes.
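The study's atrophy measure, the brain parenchymal fraction, is by definition the parenchyma volume divided by the total intracranial volume. The volumes below are illustrative only, not the study's measurements:

```python
def brain_parenchymal_fraction(grey_ml, white_ml, csf_ml):
    """BPF = parenchyma volume / total intracranial volume,
    i.e. (grey + white matter) / (grey + white matter + CSF)."""
    parenchyma = grey_ml + white_ml
    return parenchyma / (parenchyma + csf_ml)

# Illustrative volumes in ml (hypothetical, not the study's data):
bpf_control = brain_parenchymal_fraction(640.0, 480.0, 280.0)
bpf_patient = brain_parenchymal_fraction(590.0, 430.0, 380.0)   # atrophy: lower BPF
```

Because the denominator (intracranial volume) is fixed by the skull, a loss of parenchyma shows up directly as a lower fraction, which is what makes BPF an observer-independent atrophy marker.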

Relevance:

30.00%

Publisher:

Abstract:

Automatic identification and extraction of bone contours from X-ray images is an essential first step for further medical image analysis. In this paper we propose a 3D statistical model based framework for proximal femur contour extraction from calibrated X-ray images. The automatic initialization is solved by an estimation of Bayesian network algorithm that fits a multiple-component geometrical model to the X-ray data. The contour extraction is accomplished by a non-rigid 2D/3D registration between a 3D statistical model and the X-ray images, in which bone contours are extracted by graphical-model-based Bayesian inference. Preliminary experiments on clinical data sets verified its validity.
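The prerequisite for any 2D/3D registration against calibrated X-ray images is projecting 3D model points into the image with the known camera calibration. This is the standard pinhole projection, x ~ K(RX + t); the intrinsic/extrinsic values below are made-up placeholders:

```python
import numpy as np

def project(points3d, K, R, t):
    """Pinhole projection of 3D model points into a calibrated image:
    x ~ K (R X + t), then divide by the homogeneous coordinate."""
    X = R @ points3d.T + t[:, None]   # world -> camera coordinates
    x = K @ X                          # camera -> homogeneous image coords
    return (x[:2] / x[2]).T            # perspective division -> (u, v)

# Hypothetical calibration: focal length 1000 px, principal point (320, 240)
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # model already in camera orientation
t = np.array([0.0, 0.0, 2000.0])       # 2 m in front of the source
pts = np.array([[0.0, 0.0, 0.0],
                [100.0, 0.0, 0.0]])    # two points on the femur model
uv = project(pts, K, R, t)
```

During registration, projections like `uv` are compared against edges detected in the X-ray, and the statistical model's pose and shape parameters are updated to reduce the discrepancy.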

Relevance:

30.00%

Publisher:

Abstract:

For broadcasting purposes, Mixed Reality, the combination of real and virtual scene content, has become ubiquitous nowadays. Mixed Reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for Mixed Reality applications which uses depth keying and provides three-dimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation, which we consider a core issue for obtaining realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore, we present a way to support the placement of virtual content in the scene. The core feature of our system is the incorporation of a Time-of-Flight (ToF) camera device, which delivers real-time depth images of the environment at a reasonable resolution and quality. This camera is used to build a static environment model, and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible and provides convenient calibration procedures. Chroma keying is replaced by depth keying, which is efficiently performed on the Graphics Processing Unit (GPU) using an environment model and the current ToF-camera image. Automatic extraction and tracking of dynamic scene content are thereby performed, and this information is used for planning and alignment of virtual content. A further valuable feature is that depth maps of the mixed content are available in real time, which makes the approach suitable for future 3DTV productions. This paper gives an overview of the whole system approach, including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content, and dynamic object tracking for content planning.
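The depth-keying core is a per-pixel comparison between the current ToF depth image and the static environment model: a pixel is dynamic foreground when it is closer than the background by more than a tolerance. A minimal CPU sketch (the paper performs this on the GPU; the threshold value is an assumption):

```python
import numpy as np

def depth_key(depth, background, threshold=50.0):
    """Depth keying: mark a pixel as dynamic foreground when the current
    ToF depth is closer than the static environment model by more than
    `threshold` (here in mm). The real system runs this per pixel on the GPU."""
    return depth < (background - threshold)

background = np.full((4, 4), 3000.0)   # static environment model, 3 m away
depth = background.copy()
depth[1:3, 1:3] = 1500.0               # an actor standing in front
mask = depth_key(depth, background)    # True exactly where the actor is
```

Unlike chroma keying, this mask needs no colored backdrop, and because it is derived from depth it simultaneously supplies the per-pixel ordering needed for mutual occlusion between real and virtual content.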