105 results for meshing
Abstract:
The motivation for this paper is to present procedures for automatically creating idealised finite element models from the 3D CAD solid geometry of a component. The procedures produce an accurate and efficient analysis model with little effort on the part of the user. The technique is applicable to thin walled components with local complex features and automatically creates analysis models where 3D elements representing the complex regions in the component are embedded in an efficient shell mesh representing the mid-faces of the thin sheet regions. As the resulting models contain elements of more than one dimension, they are referred to as mixed dimensional models. Although these models are computationally more expensive than some of the idealisation techniques currently employed in industry, they do allow the structural behaviour of the model to be analysed more accurately, which is essential if appropriate design decisions are to be made. Also, using these procedures, analysis models can be created automatically whereas the current idealisation techniques are mostly manual, have long preparation times, and are based on engineering judgement. In the paper the idealisation approach is first applied to 2D models that are used to approximate axisymmetric components for analysis. For these models 2D elements representing the complex regions are embedded in a 1D mesh representing the midline of the cross section of the thin sheet regions. Also discussed is the coupling, which is necessary to link the elements of different dimensionality together. Analysis results from a 3D mixed dimensional model created using the techniques in this paper are compared to those from a stiffened shell model and a 3D solid model to demonstrate the improved accuracy of the new approach. At the end of the paper a quantitative analysis of the reduction in computational cost due to shell meshing thin sheet regions demonstrates that the reduction in degrees of freedom is proportional to the square of the aspect ratio of the region, and for long slender solids, the reduction can be proportional to the aspect ratio of the region if appropriate meshing algorithms are used.
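A rough back-of-envelope sketch of the degrees-of-freedom claim at the end of this abstract (not the paper's derivation; the function dof_ratio, the element-size assumptions and the example numbers are illustrative). It assumes a solid mesh of a thin square sheet is forced to use elements no larger than the sheet thickness to keep element shapes acceptable, while the mid-face shell mesh uses a user-chosen in-plane element size, so the node-count ratio grows with the square of the aspect ratio.

# Illustrative sketch only: estimate the reduction from shell-meshing a thin square sheet.
def dof_ratio(L, t, shell_elem_size):
    """Compare node counts for a solid mesh whose element size is capped at the sheet
    thickness t against a mid-face shell mesh with a user-chosen element size."""
    solid_nodes = (L / t) ** 2              # one layer of roughly cubic solid elements
    shell_nodes = (L / shell_elem_size) ** 2
    return solid_nodes / shell_nodes        # grows like the square of the aspect ratio L/t

# Example: a 100 mm x 100 mm sheet, 1 mm thick, shell elements of 5 mm
print(dof_ratio(100.0, 1.0, 5.0))           # ~25x fewer nodes (10000 vs 400)

Halving the thickness (doubling the aspect ratio) quadruples the ratio, consistent with the proportionality stated in the abstract; actual degrees of freedom also depend on the number of unknowns per node.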
Abstract:
Thermal fatigue analysis based on 2D finite difference and 3D finite element methods is carried out to study the performance of the solar panel structure over the micro-satellite's lifetime. The solar panel's primary structure consists of a honeycomb core and composite laminates. The 2D finite difference model (I-DEAS) yields predictions of the temperature profile over one orbit. Then, 3D finite element analysis (ANSYS) is applied to predict thermal fatigue damage of the solar panel structure. Meshing the whole structure with 2D multi-layer shell elements using the sandwich option is not efficient, as it misses the thermal response of the honeycomb core. We therefore applied a mixed approach combining 3D solid and 2D shell elements to model the solar panel structure without the sandwich option.
Abstract:
Integrating analysis and design models is a complex task due to differences between the models and the architectures of the toolsets used to create them. This complexity is increased with the use of many different tools for specific tasks during an analysis process. In this work various design and analysis models are linked throughout the design lifecycle, allowing them to be moved between packages in a way not currently available. Three technologies named Cellular Modeling, Virtual Topology and Equivalencing are combined to demonstrate how different finite element meshes generated on abstract analysis geometries can be linked to their original geometry. Establishing the equivalence relationships between models enables analysts to utilize multiple packages for specialist tasks without worrying about compatibility issues or rework.
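An illustrative sketch of the equivalencing idea (the paper does not publish code; the class and field names EquivalenceMap, virtual_to_cad, virtual_to_mesh are assumptions). It records which abstract analysis topology each mesh entity was generated on and which original CAD entities that abstract topology came from, so a mesh built in one package can be traced back to the design geometry used in another.

# Illustrative sketch only; names are assumptions, not the paper's API.
from dataclasses import dataclass, field

@dataclass
class EquivalenceMap:
    # virtual face id -> originating CAD face ids (a merged face may span several)
    virtual_to_cad: dict[str, list[str]] = field(default_factory=dict)
    # virtual face id -> mesh face ids generated on it
    virtual_to_mesh: dict[str, list[int]] = field(default_factory=dict)

    def cad_faces_for_mesh_face(self, mesh_face: int) -> list[str]:
        """Trace a mesh face back to the CAD faces it discretises."""
        return [cad
                for vf, faces in self.virtual_to_mesh.items() if mesh_face in faces
                for cad in self.virtual_to_cad.get(vf, [])]

eq = EquivalenceMap()
eq.virtual_to_cad["VF1"] = ["CAD_F12", "CAD_F13"]   # VF1 merged two design faces
eq.virtual_to_mesh["VF1"] = [101, 102, 103]
print(eq.cad_faces_for_mesh_face(102))              # ['CAD_F12', 'CAD_F13']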
Abstract:
New techniques are presented for using the medial axis to generate high quality decompositions for generating block-structured meshes with well-placed mesh singularities away from the surface boundaries. Established medial axis based meshing algorithms are highly effective for some geometries, but in general, they do not produce the most favourable decompositions, particularly when there are geometry concavities. This new approach uses both the topological and geometric information in the medial axis to establish a valid and effective arrangement of mesh singularities for any 2-D surface. It deals with concavities effectively and finds solutions that are most appropriate to the geometric shapes. Methods for directly constructing the corresponding decompositions are also put forward.
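A minimal sketch of the starting point such methods work from (an assumed pixel-grid setup, not the paper's algorithm): compute the medial axis of a concave, L-shaped 2-D region and flag its branch points, which are natural candidates for interior mesh singularities in a block decomposition.

# Illustrative only: medial axis of an L-shaped domain and its branch points.
import numpy as np
from skimage.morphology import medial_axis
from scipy.ndimage import convolve

# Binary mask of an L-shaped domain (contains a re-entrant, concave corner)
domain = np.zeros((200, 200), dtype=bool)
domain[20:180, 20:100] = True
domain[100:180, 20:180] = True

skeleton, distance = medial_axis(domain, return_distance=True)

# Branch points: skeleton pixels with three or more skeleton neighbours
neighbours = convolve(skeleton.astype(int), np.ones((3, 3), int), mode="constant") - skeleton
branch_points = np.argwhere(skeleton & (neighbours >= 3))
print(len(branch_points), "candidate singularity locations")

The distance field returned alongside the skeleton carries the geometric information (local thickness) that the abstract refers to; the paper's contribution is in how the topological and geometric data are combined, which this sketch does not attempt.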
Abstract:
Virtual topology operations have been utilized to generate an analysis topology definition suitable for downstream mesh generation. Detailed descriptions are provided for virtual topology merge and split operations for all topological entities, where virtual decompositions are robustly linked to the underlying geometry. Current virtual topology technology is extended to allow the virtual partitioning of volume cells. A valid description of the topology, including relative orientations, is maintained, which enables downstream interrogations to be performed on the analysis topology description, such as determining if a specific meshing strategy can be applied to the virtual volume cells. As the virtual representation is a true non-manifold description of the sub-divided domain, the interfaces between cells are recorded automatically. Therefore, the advantages of non-manifold modelling are exploited within the manifold modelling environment of a major commercial CAD system without any adaptation of the underlying CAD model. A hierarchical virtual structure is maintained as virtual entities are merged or partitioned. This has a major benefit over existing solutions as the virtual dependencies here are stored in an open and accessible manner, providing the analyst with the freedom to create, modify and edit the analysis topology in any preferred sequence.
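A minimal sketch of the hierarchical idea described above (names such as VirtualFace, merge and parents are assumptions, not the paper's data structures): a merge creates a new virtual entity that records its parent entities and the underlying CAD references, leaving the CAD model itself untouched and the dependency history open to later interrogation or editing.

# Illustrative sketch only; not the paper's implementation.
from __future__ import annotations
from dataclasses import dataclass, field
from itertools import count

_ids = count(1)

@dataclass
class VirtualFace:
    cad_refs: list[str]                                          # underlying CAD faces represented
    parents: list["VirtualFace"] = field(default_factory=list)   # entities this was merged from
    vid: int = field(default_factory=lambda: next(_ids))

def merge(faces: list[VirtualFace]) -> VirtualFace:
    """Merge virtual faces into one, recording the hierarchy instead of editing the CAD model."""
    return VirtualFace(cad_refs=[r for f in faces for r in f.cad_refs], parents=list(faces))

f1 = VirtualFace(["CAD_F7"])
f2 = VirtualFace(["CAD_F8"])
merged = merge([f1, f2])
print(merged.cad_refs, [p.vid for p in merged.parents])          # ['CAD_F7', 'CAD_F8'] [1, 2]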
Abstract:
The automatic generation of structured multi-block quadrilateral (quad) and hexahedral (hex) meshes has been researched for many years without definitive success. The core problem in quad / hex mesh generation is the placement of mesh singularities to give the desired mesh orientation and distribution [1]. It is argued herein that existing approaches (medial axis, paving / plastering, cross / frame fields) are actually alternative views of the same concept. Using the information provided by the different approaches provides additional insight into the problem.
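One constraint every one of these viewpoints must respect (a standard result, stated here for context rather than taken from the paper) is topological: on a closed all-quadrilateral mesh the irregular vertices cannot be placed arbitrarily, because their valence defects must balance the Euler characteristic \chi of the surface,

\sum_{v} \bigl(4 - \operatorname{val}(v)\bigr) = 4\chi .

For example, a cube-like quad mesh of a sphere (\chi = 2) needs a total defect of 8, supplied by its eight valence-3 corners; adding or removing one singularity always forces compensating changes elsewhere, which is why singularity placement, rather than element creation, is the core difficulty.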
Abstract:
This paper proposes a method for the design of gear tooth profiles using a parabolic curve as the line of action. A mathematical model, including the equation of the line of action, the equation of the tooth profile, and the equation of the conjugate tooth profile, is developed based on meshing theory. The undercutting condition is derived from the model. The influence of the two design parameters, which determine the size (or shape) of the parabolic curve relative to the gear size, on the shape of the tooth profiles and on the contact ratio is also studied through the design of an example drive. The strength of a gear drive designed using the proposed method, including the contact and bending stresses, is analyzed by FEA simulation. These characteristics are also compared with those of an involute gear drive. The results confirm that the proposed design method offers more flexibility in controlling the shape of the tooth profile by changing the parameters of the parabola.
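A hedged numerical sketch of the underlying idea, under assumptions that are mine rather than the paper's (the parabola is taken with its vertex at the pitch point and its axis along the line of centres, and the coefficient value is arbitrary). By the law of gearing, the common normal at any contact point passes through the pitch point, so for a curved line of action the instantaneous pressure angle at a contact point is the inclination of the line joining the pitch point to that point, measured from the common tangent to the pitch circles.

# Hedged sketch, not the paper's model: parabolic path of contact y = c*x**2 with the
# pitch point at the origin, x along the common tangent to the pitch circles and
# y along the line of centres. The coefficient c and the sample range are arbitrary.
import numpy as np

c = 0.05                                  # parabola coefficient (assumed)
x = np.linspace(-10.0, 10.0, 11)          # contact-point positions along the pitch tangent
y = c * x**2                              # parabolic line of action through the pitch point

# Law of gearing: the common normal at contact passes through the pitch point, so the
# instantaneous pressure angle is the inclination of the pitch-point-to-contact line.
pressure_angle = np.degrees(np.arctan2(np.abs(y), np.abs(x)))

for xi, ai in zip(x, pressure_angle):
    print(f"x = {xi:6.2f}  pressure angle = {ai:5.2f} deg")

Unlike an involute drive, where the pressure angle is constant along the straight line of action, here it varies along the parabola; that variation is the extra freedom the two design parameters exploit.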
Abstract:
This master's thesis falls within the field of computer vision. It addresses the calibration of stereoscopic camera systems, camera-projector correspondence, 3D reconstruction, photometric alignment of projectors, meshing of point clouds, and surface parameterization. Carried out as part of the LightTwist project in the Vision3D laboratory, it aims to enable projection onto large arbitrary surfaces using several projectors. This kind of projection is often used in technological arts, theatre and architectural projection. In this thesis, the cameras are first calibrated, followed by a piecewise 3D reconstruction based on an active correspondence method, unstructured light. After automated alignment and meshing, a complete 3D model of the projection surface is available. The thesis then introduces a new approach to the parameterization of 3D models based on the efficient computation of geodesic distances on meshes. The user only has to manually delimit the contour of the projection zone on the model. The final parameterization is computed using the distances obtained for each point of the model. Until now, existing methods could not parameterize models with more than one million points.
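A rough sketch of one common way to approximate geodesic distances on a triangle mesh, namely graph shortest paths along mesh edges; the thesis' own, more efficient method may differ, and the tiny two-triangle mesh here is purely illustrative.

# Illustrative graph-geodesic approximation on a toy mesh.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [0.0, 1.0, 0.0]])
triangles = np.array([[0, 1, 2], [0, 2, 3]])

# Collect unique mesh edges with Euclidean lengths as graph weights
edges = {}
for tri in triangles:
    for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
        key = (int(min(a, b)), int(max(a, b)))
        edges[key] = float(np.linalg.norm(vertices[a] - vertices[b]))

rows = [a for a, b in edges]
cols = [b for a, b in edges]
graph = coo_matrix((list(edges.values()), (rows, cols)), shape=(len(vertices),) * 2)

# Distances from a user-selected source vertex (e.g. a point on the drawn contour)
dist = dijkstra(graph, directed=False, indices=0)
print(dist)   # graph-geodesic distance from vertex 0 to every other vertex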
Abstract:
Flow in the world's oceans occurs at a wide range of spatial scales, from a fraction of a metre up to many thousands of kilometres. In particular, regions of intense flow are often highly localised, for example, western boundary currents, equatorial jets, overflows and convective plumes. Conventional numerical ocean models generally use static meshes. The use of dynamically-adaptive meshes has many potential advantages but needs to be guided by an error measure reflecting the underlying physics. A method of defining an error measure to guide an adaptive meshing algorithm for unstructured tetrahedral finite elements, utilizing an adjoint or goal-based method, is described here. This method is based upon a functional, encompassing important features of the flow structure. The sensitivity of this functional, with respect to the solution variables, is used as the basis from which an error measure is derived. This error measure acts to predict those areas of the domain where resolution should be changed. A barotropic wind driven gyre problem is used to demonstrate the capabilities of the method. The overall objective of this work is to develop robust error measures for use in an oceanographic context which will ensure areas of fine mesh resolution are used only where and when they are required.
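In the dual-weighted-residual form commonly used for such goal-based estimates (stated here as general background, not necessarily the paper's exact formulation), the error in a functional of interest J is approximated element by element by weighting the primal residual with the adjoint (sensitivity) solution:

J(u) - J(u_h) \approx \sum_{K} \rho_K(u_h)\,\omega_K(\psi),

where \rho_K is the local residual of the discrete solution u_h and \omega_K is a weight derived from the adjoint solution \psi, i.e. the sensitivity of J to local perturbations. Elements with large \rho_K \omega_K are flagged for refinement; the remainder can be coarsened, which is how resolution is concentrated only where and when it affects the chosen functional.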
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5 m or less are possible, with a height accuracy of 0.15 m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art will be the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc.

Most vegetation removal software ignores short vegetation less than, say, 1 m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture. Typically, most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It is not yet clear whether the method is useful, but it is worth testing further.

The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help to distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem, how best to merge historic river cross-section data with a LiDAR DTM, will also be considered.

LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that, for example, hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful in allowing a high-resolution FE model to act as a benchmark for a more practical lower-resolution model.

A further problem discussed will be how best to exploit the data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: for a 5 m-wide embankment within a raster grid model with a 15 m cell size, the local maximum height of the embankment could be assigned to each cell covering it, but how could a 5 m-wide ditch be represented? This redundancy has also been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
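An illustrative sketch of the spatially varying friction idea (the mapping function and threshold values below are my own assumptions, not the authors' calibration): convert a LiDAR-derived vegetation height map into a per-cell Manning's n roughness field instead of a single global floodplain coefficient.

# Illustrative only: vegetation height (m) to assumed Manning's n values per cell.
import numpy as np

def manning_n_from_vegetation(height_m: np.ndarray) -> np.ndarray:
    """Map vegetation height to an assumed Manning's n roughness value per cell."""
    n = np.full(height_m.shape, 0.03)           # short grass / bare floodplain (assumed)
    n[height_m > 0.3] = 0.06                     # crops / rough pasture (assumed)
    n[height_m > 1.5] = 0.10                     # hedges, scrub (assumed)
    n[height_m > 5.0] = 0.15                     # trees (assumed)
    return n

veg_height = np.array([[0.1, 0.8, 2.0],
                       [0.0, 6.5, 1.2]])
print(manning_n_from_vegetation(veg_height))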
Abstract:
This paper describes a novel template-based meshing approach for generating good quality quadrilateral meshes from 2D digital images. This approach builds upon an existing image-based mesh generation technique called Imesh, which enables us to create a segmented triangle mesh from an image without the need for an image segmentation step. Our approach generates a quadrilateral mesh using an indirect scheme, which converts the segmented triangle mesh created by the initial steps of the Imesh technique into a quadrilateral one. The triangle-to-quadrilateral conversion makes use of template meshes of triangles. To ensure good element quality, the conversion step is followed by a smoothing step based on a new optimization-based procedure. We show several example meshes generated by our approach and present a thorough experimental evaluation of their quality.
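A minimal sketch of indirect triangle-to-quadrilateral conversion using the simplest template, where each triangle is split into three quads via its edge midpoints and centroid; the authors' templates and optimization-based smoothing are more sophisticated, so this only shows the general idea.

# Illustrative only: split every triangle into three quadrilaterals.
import numpy as np

def tri_to_quads(vertices, triangles):
    verts = [tuple(v) for v in vertices]
    index = {v: i for i, v in enumerate(verts)}
    quads = []

    def vid(p):
        p = (round(p[0], 12), round(p[1], 12))
        if p not in index:
            index[p] = len(verts); verts.append(p)
        return index[p]

    for a, b, c in triangles:
        A, B, C = np.asarray(verts[a]), np.asarray(verts[b]), np.asarray(verts[c])
        mab, mbc, mca = vid((A + B) / 2), vid((B + C) / 2), vid((C + A) / 2)
        g = vid((A + B + C) / 3)                                  # centroid
        quads += [(a, mab, g, mca), (b, mbc, g, mab), (c, mca, g, mbc)]
    return np.array(verts), quads

V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
T = [(0, 1, 2)]
verts, quads = tri_to_quads(V, T)
print(len(quads), "quads from", len(T), "triangle")               # 3 quads from 1 triangle

This template guarantees an all-quad result but tends to produce poorly shaped elements near the original triangle corners, which is why a smoothing step of the kind the paper proposes is needed afterwards.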
Abstract:
BACKGROUND AND OBJECTIVES: The separation of conjoined twins attracts great interest because of the complexity of the anaesthesia and surgery, the rarity of the condition, and the low chances of survival. The objective of this report is to contribute to the existing literature by describing the challenges faced by our team in the anaesthetic and surgical management of the separation of ischiopagus twins. CASE REPORT: Twin patients, born at term by caesarean section, weighing 5,100 g together, classified as ischiopagus tetrapus. Two anaesthesia-surgery teams were present, and the anaesthetic setup included an anaesthesia machine, cardioscope, capnograph, pulse oximeter, electric thermometer and oesophageal stethoscope, all in duplicate. Anaesthesia was induced with halothane and fentanyl, with the twins in the lateral position and the head rotated 45° to facilitate tracheal intubation. The newborns were kept under manually controlled ventilation using the Rees-Baraka system. Anaesthesia was maintained with halothane, oxygen and fentanyl. Intraoperatively, duplicate abdominal organs were found, with the exception of the colon, which was single. The bladders and the ischia were joined. At the end of surgery both children had stable vital signs. The twins remained in the Neonatal Intensive Care Unit (ICU) for four weeks and were discharged in good general condition. CONCLUSIONS: The importance of team coordination, retrospective multidisciplinary study, adequate monitoring and careful clinical observation is emphasised; all these factors contributed to the twins' good outcome and discharge.