978 results for Image mesh modeling


Relevance: 30.00%

Abstract:

Scene classification based on latent Dirichlet allocation (LDA) builds on a more general modeling approach known as the bag of visual words, in which the construction of a visual vocabulary is a crucial quantization step that determines the success of the classification. A framework is developed with the following new aspects: Gaussian mixture clustering for the quantization process; the use of an integrated visual vocabulary (IVV), built as the union of all centroids obtained from the separate quantization of each class; and the use of several features, including the edge orientation histogram, CIELab color moments, and the gray-level co-occurrence matrix (GLCM). The experiments are conducted on IKONOS images with six semantic classes (tree, grassland, residential, commercial/industrial, road, and water). The results show that the use of an IVV increases the overall accuracy (OA) by 11 to 12% when implemented on the selected features and by 6% when implemented on all features. The selected combination of CIELab color moments and GLCM provides a better OA than either CIELab color moments or GLCM individually; the individual features increase the OA by only ∼2 to 3%. Moreover, the results show that the OA of LDA outperforms that of C4.5 and the naive Bayes tree by ∼20%. © 2014 Society of Photo-Optical Instrumentation Engineers (SPIE) [DOI: 10.1117/1.JRS.8.083690]
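
The vocabulary construction described above lends itself to a short sketch. Below is a minimal, hypothetical Python illustration of building an IVV as the union of per-class Gaussian-mixture centroids; the class layout, descriptor shapes, and component count are assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch: IVV = union of per-class GMM centroids.
import numpy as np
from sklearn.mixture import GaussianMixture

def build_ivv(descriptors_by_class, n_components=50):
    """Fit one GMM per class and stack the means into one vocabulary."""
    centroids = []
    for descriptors in descriptors_by_class.values():
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
        gmm.fit(descriptors)          # quantize this class's feature space
        centroids.append(gmm.means_)  # this class's visual words
    return np.vstack(centroids)       # integrated visual vocabulary

def quantize(descriptors, vocabulary):
    """Hard-assign each descriptor to its nearest visual word."""
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)
```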

Relevance: 30.00%

Abstract:

This paper proposes a new reconstruction method for diffuse optical tomography using reduced-order models of light transport in tissue. The models, which directly map optical tissue parameters to optical flux measurements at the detector locations, are derived from data generated by numerical simulation of a reference model. The reconstruction algorithm based on the reduced-order models is a few orders of magnitude faster than one based on a finite element approximation on a fine mesh incorporating a priori anatomical information acquired by magnetic resonance imaging. We demonstrate the accuracy and speed of the approach using a phantom experiment and through numerical simulation of brain activation in a rat's head. The applicability of the approach to real-time monitoring of brain hemodynamics is demonstrated through a hypercapnic experiment. We show that our results agree with the expected physiological changes and with the results of a similar experimental study. With our approach, however, a three-dimensional tomographic reconstruction can be performed in ∼3 s per time point instead of the 1 to 2 h it takes with the conventional finite element modeling approach.
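
The core idea, a cheap direct map from tissue parameters to detector fluxes trained on reference-model simulations, can be sketched as follows. This is an illustrative surrogate under assumed shapes and a simple quadratic feature map, not the paper's actual reduced-order model.

```python
# Illustrative surrogate: flux ~ phi(mu) @ W, fitted to reference-model runs.
import numpy as np
from scipy.optimize import least_squares

def phi(mu):
    """Quadratic polynomial features of the optical parameters mu."""
    return np.concatenate([[1.0], mu, np.outer(mu, mu)[np.triu_indices(len(mu))]])

def train_surrogate(mu_samples, flux_samples):
    """Least-squares fit using fluxes simulated with the reference model."""
    Phi = np.array([phi(m) for m in mu_samples])
    W, *_ = np.linalg.lstsq(Phi, flux_samples, rcond=None)
    return W

def reconstruct(measured_flux, W, mu0):
    """Recover parameters whose surrogate-predicted flux matches the data."""
    residual = lambda mu: phi(mu) @ W - measured_flux
    return least_squares(residual, mu0).x
```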

Relevance: 30.00%

Abstract:

We apply a numerical model of time-dependent ionospheric convection to two directly driven reconnection pulses during a 15-min interval of southward IMF on 26 November 2000. The model requires an input magnetopause reconnection rate variation, which is here derived from the observed variation in the upstream IMF clock angle, θ. The reconnection rate is mapped to an ionospheric merging gap, the MLT extent of which is inferred from the Doppler-shifted Lyman-α emission on newly opened field lines, as observed by the FUV instrument on the IMAGE spacecraft. The model is used to reproduce a variety of features observed during this event: SuperDARN observations of the ionospheric convection pattern and transpolar voltage; FUV observations of the growth of patches of newly opened flux; and FUV and in situ observations of the location of the open-closed field line boundary (OCB) and a cusp ion step. We adopt a clock angle dependence of the magnetopause reconnection electric field, mapped to the ionosphere, of the form E_no sin⁴(θ/2) and estimate the peak value, E_no, by matching observed and modeled variations of both the latitude, Λ_OCB, of the dayside OCB (as inferred from the equatorward edge of cusp proton emissions seen by FUV) and the transpolar voltage, Φ_PC (as derived using the mapped potential technique from SuperDARN HF radar data). This analysis also yields the time constant τ_OCB with which the open-closed boundary relaxes back toward its equilibrium configuration. For the case studied here, we find τ_OCB = 9.7 ± 1.3 min, consistent with previous inferences from the observed response of ionospheric flow to southward turnings of the IMF. The analysis quantitatively confirms the concepts of ionospheric flow excitation on which the model is based and explains some otherwise anomalous features of the cusp precipitation morphology.
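
The boundary-relaxation behavior quantified by τ_OCB can be illustrated with a minimal sketch. The sin⁴(θ/2) clock-angle dependence is taken from the abstract; the equilibrium-latitude mapping lat_eq is a placeholder assumption.

```python
# Minimal sketch: OCB latitude relaxing toward equilibrium with time constant tau.
import numpy as np

def reconnection_E(theta, E_no):
    """Mapped reconnection electric field vs. IMF clock angle theta (rad)."""
    return E_no * np.sin(theta / 2.0) ** 4

def evolve_ocb(theta_series, dt, E_no, tau_ocb, lat0, lat_eq):
    """Forward-Euler integration of d(lat)/dt = (lat_eq(E) - lat) / tau_ocb."""
    lat, history = lat0, []
    for theta in theta_series:
        target = lat_eq(reconnection_E(theta, E_no))  # equilibrium latitude
        lat += dt * (target - lat) / tau_ocb          # relax toward it
        history.append(lat)
    return np.array(history)
```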

Relevance: 30.00%

Abstract:

We employ a numerical model of cusp ion precipitation and proton aurora emission to fit variations of the peak Doppler-shifted Lyman-α intensity observed on 26 November 2000 by the SI-12 channel of the FUV instrument on the IMAGE satellite. The major features of this event appeared in response to two brief swings of the interplanetary magnetic field (IMF) toward a southward orientation. We reproduce the observed spatial distributions of this emission on newly opened field lines by combining the proton emission model with a model of the response of ionospheric convection. The simulations are based on the observed variations of the solar wind proton temperature and concentration and of the IMF clock angle. They also allow for the efficiency, sampling rate, integration time, and spatial resolution of the FUV instrument. The good match (correlation coefficient 0.91, significant at the 98% level) between observed and modeled variations confirms the time constant (about 4 min) for the rise and decay of the proton emissions predicted by the model for southward IMF conditions. The implications for the detection of pulsed magnetopause reconnection using proton aurora are discussed for a range of interplanetary conditions.
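
A rise and decay governed by a single time constant, as confirmed above, amounts to a first-order response. The sketch below shows that model with the quoted ~4 min constant; the driving series and the correlation check are illustrative assumptions.

```python
# Sketch: proton-aurora intensity as a first-order response to a driver.
import numpy as np

def first_order_response(drive, dt, tau=4.0 * 60.0):
    """Forward-Euler integration of I' = (drive - I) / tau (tau ~ 4 min)."""
    intensity = np.zeros(len(drive))
    for i in range(1, len(drive)):
        intensity[i] = intensity[i - 1] + dt * (drive[i - 1] - intensity[i - 1]) / tau
    return intensity

def pearson_r(model, observed):
    """Correlation between modeled and observed intensity variations."""
    return np.corrcoef(model, observed)[0, 1]
```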

Relevance: 30.00%

Abstract:

We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources, each characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique recovers the parameters of the sources with an accuracy similar to that of the traditional Astronomical Image Processing System task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to determine quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions with more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As with any model fitting performed in the image plane, caution is required when analyzing images constructed from a poorly sampled (u, v) plane.
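
The cross-entropy procedure itself is simple to outline: sample candidate source parameters, score them by the squared residual against the observed image, and re-fit the sampler to the best candidates. The sketch below uses a single circular Gaussian and a fixed elite fraction as simplifying assumptions; the paper fits multiple six-parameter elliptical sources.

```python
# Hypothetical cross-entropy fit of one circular Gaussian source to an image.
import numpy as np

def render(params, X, Y):
    x0, y0, amp, sigma = params
    return amp * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))

def ce_fit(image, X, Y, n_samples=200, n_elite=20, n_iter=50):
    mu = np.array([X.mean(), Y.mean(), image.max(), 1.0])        # initial guess
    sd = np.array([np.ptp(X) / 2, np.ptp(Y) / 2, image.max(), 1.0])
    for _ in range(n_iter):
        samples = mu + sd * np.random.randn(n_samples, 4)        # candidates
        scores = [((render(p, X, Y) - image) ** 2).sum() for p in samples]
        elite = samples[np.argsort(scores)[:n_elite]]            # best fits
        mu, sd = elite.mean(axis=0), elite.std(axis=0)           # refit sampler
    return mu
```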

Relevance: 30.00%

Abstract:

This paper describes a novel template-based meshing approach for generating good-quality quadrilateral meshes from 2D digital images. The approach builds upon an existing image-based mesh generation technique called Imesh, which enables the creation of a segmented triangle mesh from an image without a separate image segmentation step. Our approach generates a quadrilateral mesh using an indirect scheme, which converts the segmented triangle mesh created by the initial steps of the Imesh technique into a quadrilateral one. The triangle-to-quadrilateral conversion makes use of template meshes of triangles. To ensure good element quality, the conversion step is followed by a smoothing step based on a new optimization-based procedure. We show several examples of meshes generated by our approach and present a thorough experimental evaluation of their quality.
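
The indirect scheme can be illustrated by its simplest ingredient: merging pairs of edge-adjacent triangles into quadrilaterals. The greedy pairing below is an illustrative stand-in; the paper's method uses triangle templates plus an optimization-based smoothing pass.

```python
# Simplified indirect triangle-to-quad conversion by greedy pairing.
def tris_to_quads(triangles):
    """triangles: list of (a, b, c) vertex-index tuples."""
    edge_to_tris = {}
    for t, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_tris.setdefault(frozenset(e), []).append(t)
    used, quads = set(), []
    for edge, tris in edge_to_tris.items():
        if len(tris) == 2 and not used & set(tris):
            s0, s1 = tuple(edge)
            apex1 = next(v for v in triangles[tris[0]] if v not in edge)
            apex2 = next(v for v in triangles[tris[1]] if v not in edge)
            quads.append((apex1, s0, apex2, s1))  # quad loop around the edge
            used.update(tris)
    leftovers = [t for t in range(len(triangles)) if t not in used]
    return quads, leftovers  # leftovers would go through the templates
```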

Relevance: 30.00%

Abstract:

A novel mathematical framework inspired by Morse theory is introduced for the topological characterization of triangles in 2D meshes; it is useful for applications involving the creation of mesh models of objects whose geometry is not known a priori. The framework guarantees precise control of the topological changes introduced by triangle insertion/removal operations and enables the definition of intuitive high-level operators for managing the mesh while preserving its topological integrity. An application is described: the implementation of an innovative approach to the detection of 2D objects in images that integrates the topological control enabled by geometric modeling with traditional image processing techniques. (C) 2008 Published by Elsevier B.V.
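
One concrete invariant such a framework can monitor is the Euler characteristic χ = V − E + F; checking it before and after each insertion/removal is a minimal, assumed stand-in for the precise topological control described above.

```python
# Euler characteristic of a triangle mesh: chi = V - E + F.
def euler_characteristic(triangles):
    vertices, edges = set(), set()
    for a, b, c in triangles:
        vertices.update((a, b, c))
        edges.update({frozenset((a, b)), frozenset((b, c)), frozenset((c, a))})
    return len(vertices) - len(edges) + len(triangles)
```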

Relevance: 30.00%

Abstract:

http://digitalcommons.colby.edu/atlasofmaine2008/1022/thumbnail.jpg

Relevance: 30.00%

Abstract:

http://digitalcommons.colby.edu/atlasofmaine2006/1022/thumbnail.jpg

Relevance: 30.00%

Abstract:

http://digitalcommons.colby.edu/atlasofmaine2008/1017/thumbnail.jpg

Relevance: 30.00%

Abstract:

Vertical stream bed erosion has been studied routinely, and its modeling is gaining widespread acceptance. The same cannot be said of lateral stream bank erosion, whose measurement and numerical modeling remain very challenging. Bank erosion, however, can be important to channel morphology: it may contribute significantly to the overall sediment budget of a stream, is a leading cause of channel migration, and drives major channel maintenance. Yet combined vertical and lateral channel evolution is seldom addressed. In this study, a new geofluvial numerical model is developed to simulate combined vertical and lateral channel evolution. Vertical erosion is predicted with the 2D depth-averaged model SRH-2D, while lateral erosion is simulated with a linear-retreat bank erosion model developed in this study. SRH-2D and the bank erosion model are coupled both spatially and temporally through a common mesh and the same time advancement. The new geofluvial model is first tested and verified using laboratory meander channels; good agreement is obtained between predicted bank retreat and measured data. The model is then applied to a 16-kilometer reach of the Chosui River, Taiwan. Vertical and lateral channel evolution during a three-year period (2004 to 2007) is simulated, and the results are compared with field data. The geofluvial model is shown to capture all major erosion and deposition patterns correctly, and to be useful for identifying potential erosion sites and providing information for river maintenance planning.
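
The linear-retreat idea can be sketched in a few lines: at each time step the hydraulic model supplies a bank shear stress, and banks retreat in proportion to the excess over a critical value. The array interface and coefficients below are assumptions; in the actual model the shear would come from SRH-2D on the shared mesh.

```python
# Sketch of a linear bank-retreat law driven by per-step shear stresses.
import numpy as np

def evolve_banks(bank_x, shear_series, k_retreat, tau_c, dt):
    """bank_x: lateral bank positions; shear_series: (n_steps, n_nodes)."""
    bank_x = bank_x.copy()
    for shear in shear_series:               # one hydraulic step per row
        excess = np.maximum(shear - tau_c, 0.0)
        bank_x += k_retreat * excess * dt    # retreat proportional to excess shear
    return bank_x
```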

Relevance: 30.00%

Abstract:

High amylose cross-linked to different degrees with sodium trimetaphosphate, varying base strength (2% or 4%) and contact time (0.5-4 h), was evaluated in non-compacted systems for the controlled release of sodium diclofenac. The physical properties and the performance of these products were related to the structures generated at each cross-linking degree. For the 2% samples up to 2 h, the swelling ability and the G′ and η* values increased with the cross-linking degree, because the longer polymer chains became progressively more entangled and linked. This increases water uptake and retention, favoring swelling and resulting in systems with higher viscosities. Additionally, the increase in cross-linking degree should contribute to a more elastic structure. The shorter chains with more inter-linkages formed at higher cross-linking degrees (2% for 4 h, and 4%) hinder water uptake and retention, decreasing swelling, viscosity, and elasticity. Among the 2% samples, the longer drug release time exhibited by the 2%/4 h sample indicates that increased swelling and viscosity contribute to a more sustained drug release, but the mesh size of the polymeric network seems to be determinant for the attachment of drug molecules. For the 4% samples, the smaller mesh sizes should determine a less sustained drug release. (C) 2008 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

Semi-automatic building detection and extraction is a topic of growing interest due to its potential application in areas such as cadastral information systems, cartographic revision, and GIS. One existing strategy for building extraction is to use a digital surface model (DSM), represented by a cloud of known points on the visible surface, which comprises features such as trees and buildings. Conventional surface modeling using stereo-matching techniques has its drawbacks, the most obvious being the effects of building height on perspective, shadows, and occlusions. The laser scanner, a more recently developed technological tool, can collect accurate DSMs with high spatial frequency. This paper presents a methodology for the semi-automatic modeling of buildings which combines a region-growing algorithm with line-detection methods applied over the DSM.
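
A minimal region-growing step over a DSM grid, a simplified stand-in for the combined region-growing/line-detection pipeline described above, might look as follows; the height tolerance and 4-connectivity are illustrative assumptions.

```python
# Region growing on a DSM: expand from a seed while heights stay similar.
import numpy as np
from collections import deque

def grow_region(dsm, seed, tol=0.5):
    rows, cols = dsm.shape
    region = np.zeros((rows, cols), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not region[nr, nc]:
                if abs(dsm[nr, nc] - dsm[r, c]) <= tol:  # similar height
                    region[nr, nc] = True
                    queue.append((nr, nc))
    return region  # boolean mask of one roof/surface patch
```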

Relevance: 30.00%

Abstract:

This paper presents an individually designed prosthesis for surgical use and proposes a methodology for such a design through mathematical extrapolation of data from digital images obtained via tomography of the individual patient's bones. An individually tailored prosthesis, designed to fit the particular patient's requirements as accurately as possible, should result in more successful reconstruction, enable better planning before surgery, and consequently cause fewer complications during surgery. Fast and accurate design and manufacture of personalized prostheses for surgical use in bone replacement or reconstruction is potentially feasible through the application and integration of several existing technologies, each at a different stage of maturity. Initial case-study experiments have been undertaken to validate the research concepts by making dimensional comparisons between a bone and a virtual model produced using the proposed methodology, and future research directions are discussed.

Relevance: 30.00%

Abstract:

The finite element method (FEM) is a well-known technique that is extensively applied in different areas, and FEM studies are being targeted at improving cardiac ablation procedures. For such simulations, the finite element meshes should take into account the size and histological features of the target structures. However, some of the methods and tools used to generate meshes of human body structures are still limited, owing to non-detailed models, nontrivial preprocessing, or, above all, restrictive conditions of use. In this paper, alternatives are demonstrated for solid modeling and for the automatic generation of highly refined tetrahedral meshes, with quality compatible with other studies focused on mesh generation. The innovations presented here are strategies for integrating Open Source Software (OSS). The chosen techniques and strategies are presented and discussed, with cardiac structures as the first application context. © 2013 E. Pavarino et al.
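
As one concrete example of scripting OSS mesh generation, the snippet below drives the Gmsh Python API to produce a refined tetrahedral mesh of a simple solid. Gmsh is offered as a representative open-source tool and the sphere is a placeholder geometry; the paper's specific tool chain is not assumed.

```python
# Refined tetrahedral meshing of a placeholder solid with the Gmsh API.
import gmsh

gmsh.initialize()
gmsh.model.add("structure_stub")
gmsh.model.occ.addSphere(0, 0, 0, 1.0)           # placeholder solid
gmsh.model.occ.synchronize()
gmsh.option.setNumber("Mesh.MeshSizeMax", 0.05)  # highly refined elements
gmsh.model.mesh.generate(3)                      # 3D tetrahedral mesh
gmsh.write("structure.msh")
gmsh.finalize()
```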