17 results for Image mesh modeling
in Repositório Institucional UNESP - Universidade Estadual Paulista "Julio de Mesquita Filho"
Abstract:
The applications of the Finite Element Method (FEM) to three-dimensional domains are already well documented in the framework of Computational Electromagnetics. However, despite the power and reliability of this technique for solving partial differential equations, there are only a few open source codes available that are dedicated to solid modeling and automatic constrained tetrahedralization, which are the most time-consuming steps in a typical three-dimensional FEM simulation. Moreover, these open source codes are usually developed separately by distinct software teams, sometimes under conflicting specifications. In this paper, we describe an experiment in open source code integration for solid modeling and automatic mesh generation. The integration strategy and techniques are discussed, and examples and performance results are given, especially for complicated and irregular volumes which are not simply connected. © 2011 IEEE.
Abstract:
This paper describes strategies and techniques for modeling and automatic mesh generation of the aorta artery and its tunics (the adventitia, media and intima walls), using open source codes. The models were constructed in the Blender package, and Python scripts were used to export the data necessary for mesh generation in TetGen. The proposed strategies are able to provide meshes of complicated and irregular volumes involving a large number of mesh elements (approximately 12,000,000 tetrahedra). These meshes can be used to perform computational simulations by the Finite Element Method (FEM). © Published under licence by IOP Publishing Ltd.
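The pipeline this abstract describes (geometry built in Blender, exported by Python scripts, tetrahedralized in TetGen) ultimately reduces to serializing the surface mesh into TetGen's plain-text `.poly` input file. A minimal sketch of such an exporter is shown below; the function name and the single-tetrahedron surface are hypothetical illustrations, not taken from the authors' scripts:

```python
def write_tetgen_poly(vertices, faces):
    """Serialize a closed triangular surface to TetGen's .poly format.

    vertices: list of (x, y, z) tuples; faces: list of vertex-index
    triples (0-based here, written 1-based as TetGen expects by default).
    """
    lines = []
    # Part 1: node list -- count, dimension, attributes, boundary markers
    lines.append(f"{len(vertices)} 3 0 0")
    for i, (x, y, z) in enumerate(vertices, start=1):
        lines.append(f"{i} {x} {y} {z}")
    # Part 2: facet list -- count, boundary markers
    lines.append(f"{len(faces)} 0")
    for f in faces:
        lines.append("1")                                  # one polygon per facet
        lines.append("3 " + " ".join(str(v + 1) for v in f))
    # Parts 3 and 4: no holes, no region attributes
    lines.append("0")
    lines.append("0")
    return "\n".join(lines) + "\n"

# A single tetrahedron standing in for an exported Blender surface
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (1, 2, 3), (0, 3, 2)]
poly_text = write_tetgen_poly(verts, tris)
```

In a real Blender export, the vertex and face lists would be read from the mesh data of the active object before writing the file for TetGen's constrained tetrahedralization.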
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
High amylose cross-linked to different degrees with sodium trimetaphosphate, by varying base strength (2% or 4%) and contact time (0.5-4 h), was evaluated as a non-compacted system for sodium diclofenac controlled release. The physical properties and the performance of these products for sodium diclofenac controlled release from non-compacted systems were related to the structures generated at each cross-linking degree. For 2% samples treated for up to 2 h, the swelling ability, G' and eta* values increased with the cross-linking degree, because the longer polymer chains became progressively more entangled and linked. This increases water uptake and holding, favoring the swelling and resulting in systems with higher viscosities. Additionally, the increase in cross-linking degree should contribute to a more elastic structure. The shorter chains with more inter-linkages formed at higher cross-linking degrees (2% for 4 h, and 4%) make water uptake and holding difficult, decreasing the swelling, viscosity and elasticity. For the 2% samples, the longer drug release time exhibited by the 2% 4 h sample indicates that the increases in swelling and viscosity contribute to a more sustained drug release, but the mesh size of the polymeric network seems to be determinant for the attachment of drug molecules. For the 4% samples, smaller mesh sizes should determine a less sustained release of the drug. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Semi-automatic building detection and extraction is a topic of growing interest due to its potential application in such areas as cadastral information systems, cartographic revision, and GIS. One of the existing strategies for building extraction is to use a digital surface model (DSM) represented by a cloud of known points on a visible surface, and comprising features such as trees or buildings. Conventional surface modeling using stereo-matching techniques has its drawbacks, the most obvious being the effect of building height on perspective, shadows, and occlusions. The laser scanner, a recently developed technological tool, can collect accurate DSMs with high spatial frequency. This paper presents a methodology for semi-automatic modeling of buildings which combines a region-growing algorithm with line-detection methods applied over the DSM.
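The region-growing component of such a pipeline can be sketched directly on a gridded DSM: starting from a seed cell, accumulate 4-connected neighbors whose heights stay within a tolerance of the seed height (a simple flat-roof hypothesis). A minimal sketch with a toy 5x5 DSM; the tolerance, heights and function name are illustrative, not the paper's algorithm:

```python
from collections import deque

def grow_region(dsm, seed, tol=0.5):
    """Grow a region of grid cells whose heights stay within `tol`
    of the seed height -- a simple flat-roof hypothesis on a DSM."""
    rows, cols = len(dsm), len(dsm[0])
    seed_h = dsm[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(dsm[nr][nc] - seed_h) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Toy DSM: a 3x3 "roof" at 8 m surrounded by ground at 0 m
dsm = [[0.0] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        dsm[r][c] = 8.0
roof = grow_region(dsm, seed=(2, 2))  # cells belonging to the roof patch
```

In the paper's setting, the boundary of such a grown region would then be refined by the line-detection methods applied over the DSM.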
Abstract:
This paper presents an individually designed prosthesis for surgical use and proposes a methodology for such design through mathematical extrapolation of data from digital images obtained via tomography of an individual patient's bones. An individually tailored prosthesis, designed to fit the particular patient's requirements as accurately as possible, should result in more successful reconstruction, enable better planning before surgery and, consequently, fewer complications during surgery. Fast and accurate design and manufacture of personalized prostheses for surgical use in bone replacement or reconstruction is potentially feasible through the application and integration of several different existing technologies, each at a different stage of maturity. Initial case study experiments have been undertaken to validate the research concepts by making dimensional comparisons between a bone and a virtual model produced using the proposed methodology, and future research directions are discussed.
Abstract:
The Finite Element Method (FEM) is a well-known technique, extensively applied in different areas. Studies using the FEM are targeted at improving cardiac ablation procedures. For such simulations, the finite element meshes should consider the size and histological features of the target structures. However, some methods and tools used to generate meshes of human body structures are still limited, owing to non-detailed models, non-trivial preprocessing or, mainly, restrictive conditions of use. In this paper, alternatives are demonstrated for solid modeling and automatic generation of highly refined tetrahedral meshes, with quality compatible with other studies focused on mesh generation. The innovations presented here are strategies to integrate Open Source Software (OSS). The chosen techniques and strategies are presented and discussed, considering cardiac structures as a first application context. © 2013 E. Pavarino et al.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Objective: This study evaluated the effect of the quantity of resin composite, C-factor, and geometry in Class V restorations on shrinkage stress after bulk-fill insertion of resin, using two-dimensional finite element analysis.
Methods: An image of a buccolingual longitudinal plane through the middle of an upper first premolar and its supporting tissues was used to model 10 groups: a cylindrical cavity, an erosion lesion, and an abfraction lesion with the same C-factor (1.57); a second cylindrical cavity and abfraction lesion with the same quantity of resin (QR) as the erosion lesion; and all of these repeated with a bevel on the occlusal cavosurface angle. The 10 groups were imported into Ansys 13.0 for two-dimensional finite element analysis. The mesh was built with 30,000 triangular and quadrilateral elements of 0.1 mm in length for all the models. All materials were considered isotropic, homogeneous, elastic, and linear, and the resin composite shrinkage was simulated by thermal analogy. The maximum principal stress (MPS) and von Mises stress (VMS) were analyzed to compare the behavior of the groups.
Results: Different values of the cavosurface margin angle in enamel and dentin were obtained for all groups, and the higher the angle, the lower the stress concentration. When the groups with the same C-factor and QR were compared, the erosion-shaped cavity showed the highest MPS and VMS values and the abfraction shape the lowest. A cavosurface bevel decreased the stress values on the occlusal margin. The geometry factor overcame the effects of C-factor and QR in some situations.
Conclusion: Within the limitations of the current methodology, it is possible to conclude that the combination of all the variables studied influences the stress, but geometry is the most important factor to be considered by the operator.
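The thermal analogy named in the Methods maps polymerization shrinkage onto a fictitious cooling step: the thermal expansion coefficient alpha times the imposed temperature drop reproduces the desired shrinkage strain. A small sketch with purely illustrative numbers (neither the shrinkage value nor the coefficient is taken from the paper):

```python
def thermal_delta_t(volumetric_shrinkage, alpha):
    """Temperature drop that reproduces a given polymerization shrinkage
    via the thermal analogy: alpha * dT = linear shrinkage strain.
    For small strains, linear shrinkage ~= volumetric shrinkage / 3."""
    linear_strain = volumetric_shrinkage / 3.0
    return linear_strain / alpha

# Illustrative inputs: 3% volumetric shrinkage and a fictitious
# expansion coefficient of 1e-5 per degree assigned to the composite
dT = thermal_delta_t(0.03, 1e-5)
```

In the FEA model, applying this temperature drop to the resin composite elements then generates the shrinkage stresses that the study compares across cavity geometries.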
Abstract:
This paper presents a new technique to model interfaces by means of degenerated solid finite elements, i.e., elements with a very high aspect ratio, with the smallest dimension corresponding to the thickness of the interfaces. It is shown that, as the aspect ratio increases, the element strains also increase, approaching the kinematics of the strong discontinuity. A tensile damage constitutive relation between strains and stresses is proposed to describe the nonlinear behavior of the interfaces associated with crack opening. To represent crack propagation, couples of triangular interface elements are introduced in between all regular (bulk) elements of the original mesh. With this technique the analyses can be performed integrally in the context of continuum mechanics, and complex crack patterns involving multiple cracks can be simulated without the need for tracking algorithms. Numerical tests are performed to show the applicability of the proposed technique, also studying aspects related to mesh objectivity.
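The kinematic argument in this abstract (element strains grow without bound as the element thickness shrinks) reduces to a one-line relation: for a fixed relative displacement jump w imposed across an element of thickness h, the normal strain is w / h. A toy sketch with purely illustrative numbers, not a reproduction of the paper's formulation:

```python
def interface_normal_strain(jump, thickness):
    """Normal strain in a degenerated solid element of thickness h
    subjected to a relative displacement jump w across it: eps = w / h."""
    return jump / thickness

jump = 1e-4  # 0.1 mm of crack opening, illustrative value
# Shrinking the thickness (raising the aspect ratio) concentrates the
# strain in the interface, approaching the strong-discontinuity limit
strains = [interface_normal_strain(jump, h) for h in (1e-2, 1e-3, 1e-4)]
```

This divergence is what lets the damage constitutive relation of the paper reproduce crack opening while the analysis stays entirely within continuum mechanics.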
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Graduate Program in Mechanical Engineering (Pós-graduação em Engenharia Mecânica) - FEG
Abstract:
In this letter, a semiautomatic method for road extraction in object space is proposed that combines a stereoscopic pair of low-resolution aerial images with a digital terrain model (DTM) structured as a triangulated irregular network (TIN). First, we formulate an objective function in the object space to allow the modeling of roads in 3-D. In this model, the TIN-based DTM allows the search for the optimal polyline to be restricted along a narrow band that is overlaid upon it. Finally, the optimal polyline for each road is obtained by optimizing the objective function using the dynamic programming optimization algorithm. A few seed points need to be supplied by an operator. To evaluate the performance of the proposed method, a set of experiments was designed using two stereoscopic pairs of low-resolution aerial images and a TIN-based DTM with an average resolution of 1 m. The experimental results showed that the proposed method worked properly, even when faced with anomalies along roads, such as obstructions caused by shadows and trees.
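The optimization core described in this letter (one candidate point per stage along the band, a per-node cost from image evidence, and a smoothness term between consecutive points, solved by dynamic programming) can be sketched as a Viterbi-style recursion. The cost functions below are illustrative stubs, not the authors' objective function:

```python
def optimal_polyline(candidates, node_cost, smooth_cost):
    """Dynamic-programming search for the minimum-cost polyline: pick one
    candidate point per stage so that the sum of node costs plus pairwise
    smoothness costs between consecutive stages is minimal."""
    n = len(candidates)
    best = [node_cost(p) for p in candidates[0]]   # cost-to-here per candidate
    back = []                                      # back-pointers per stage
    for s in range(1, n):
        new_best, ptr = [], []
        for p in candidates[s]:
            costs = [best[j] + smooth_cost(q, p)
                     for j, q in enumerate(candidates[s - 1])]
            j = min(range(len(costs)), key=costs.__getitem__)
            new_best.append(costs[j] + node_cost(p))
            ptr.append(j)
        best = new_best
        back.append(ptr)
    # Trace the optimal path back from the cheapest final candidate
    j = min(range(len(best)), key=best.__getitem__)
    idx = [j]
    for ptr in reversed(back):
        j = ptr[j]
        idx.append(j)
    idx.reverse()
    return [candidates[s][idx[s]] for s in range(n)]

# Toy band: 3 stages, candidate y-positions 0..2 at each x
stages = [[(x, y) for y in range(3)] for x in range(3)]
path = optimal_polyline(
    stages,
    node_cost=lambda p: 0 if p[1] == 1 else 5,    # "road evidence" stub
    smooth_cost=lambda q, p: (p[1] - q[1]) ** 2,  # penalize sharp bends
)
```

In the paper's setting, the candidate stages would be sampled inside the narrow band overlaid on the TIN-based DTM, with the node cost derived from the stereoscopic image pair.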
Abstract:
Modeling is a necessary step in performing a finite element analysis. Different methods of model construction are reported in the literature, such as Bio-CAD modeling. The purpose of this study was to evaluate and apply two methods of Bio-CAD modeling of a human edentulous hemi-mandible in finite element analysis. A stereolithographic model was reconstructed from CT scans of a dried human skull. Two methods of modeling were performed: an STL conversion approach associated with STL simplification (Model 1) and a reverse engineering approach (Model 2). For the finite element analysis, the action of the lateral pterygoid muscle was used as the loading condition to assess total displacement (D), equivalent von Mises stress (VM) and maximum principal stress (MP). The two models presented differences in geometry regarding surface count (1,834 for Model 1; 282 for Model 2). Differences were also observed in the finite element mesh regarding node and element counts (30,428 nodes / 16,683 elements for Model 1; 15,801 nodes / 8,410 elements for Model 2). The D, VM and MP stress areas presented similar distributions in the two models. The values differed in their maxima and minima: D ranged from 0 to 0.511 mm (Model 1) and from 0 to 0.544 mm (Model 2); VM stress from 6.36E-04 to 11.4 MPa (Model 1) and from 2.15E-04 to 14.7 MPa (Model 2); and MP stress from -1.43 to 9.14 MPa (Model 1) and from -1.2 to 11.6 MPa (Model 2). Of the two Bio-CAD modeling methods, reverse engineering presented better anatomical representation than the STL conversion approach. The models presented differences in the finite element mesh, total displacement and stress distribution.
Abstract:
Research on image processing has shown that combining segmentation methods may lead to a solid approach for extracting semantic information from different sorts of images. Within this context, the Normalized Cut (NCut) is usually used as a final partitioning tool for graphs modeled by some chosen method. This work explores the Watershed Transform as a modeling tool, using different criteria of the hierarchical Watershed to convert an image into an adjacency graph. The Watershed is combined with an unsupervised distance learning step that redistributes the graph weights and redefines the similarity matrix before the final segmentation step using NCut. Adopting the Berkeley Segmentation Data Set and Benchmark as a reference, our goal is to compare the results obtained with this method against previous work to validate its performance.
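The first step described here (converting a labeled watershed segmentation into an adjacency graph whose edge weights can later feed the similarity matrix for NCut) can be sketched in a few lines. The shared-boundary-length weight below is an illustrative choice, not necessarily one of the hierarchical-watershed criteria used by the authors:

```python
def region_adjacency_graph(labels):
    """Build a region adjacency graph from a labeled segmentation
    (e.g. watershed catchment basins): nodes are region labels, and
    each edge weight counts the shared boundary length (adjacent
    pixel pairs) between two regions."""
    edges = {}
    rows, cols = len(labels), len(labels[0])
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (0, 1)):   # right and down neighbors
                nr, nc = r + dr, c + dc
                if nr < rows and nc < cols and labels[r][c] != labels[nr][nc]:
                    key = tuple(sorted((labels[r][c], labels[nr][nc])))
                    edges[key] = edges.get(key, 0) + 1
    return edges

# Three watershed regions in a toy 4x4 label image
labels = [
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [3, 3, 3, 3],
    [3, 3, 3, 3],
]
graph = region_adjacency_graph(labels)
```

A distance-learning step such as the one in the paper would then rescale these edge weights before the similarity matrix is handed to NCut for the final partition.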