937 results for Mesh smoothing
Abstract:
This work describes the application of mesh networks (a wireless technology) to video surveillance and the extension of Wi-Fi coverage, based on the construction of a prototype network capable of supporting such services, all applied to the case study of the municipality of San José las Flores, Chalatenango, a community of scarce resources. One of the main objectives of this work is to make this technology accessible to the poorest municipalities, so free tools and software were used. The document records the economic, technical, and operational feasibility studies, as well as the results obtained during the investigation.
Abstract:
The constant need to improve helicopter performance requires the optimization of existing and future rotor designs. A crucial indicator of rotor capability is hover performance, which depends on the near-body flow as well as the structure and strength of the tip vortices formed at the trailing edge of the blades. Computational Fluid Dynamics (CFD) solvers must balance computational expense with preservation of the flow, and to limit computational expense the mesh is often coarsened in the outer regions of the computational domain. This can lead to degradation of the vortex structures which compose the rotor wake. The current work conducts three-dimensional simulations using OVERTURNS, a three-dimensional structured grid solver that models the flow field using the Reynolds-Averaged Navier-Stokes equations. The S-76 rotor in hover was chosen as the test case for evaluating the OVERTURNS solver, focusing on methods to better preserve the rotor wake. Using the hover condition, various computational domains, spatial schemes, and boundary conditions were tested. Furthermore, a mesh adaptation routine was implemented, allowing for increased refinement of the mesh in areas of turbulent flow without the need to add points to the mesh. The adapted mesh was employed to conduct a sweep of collective pitch angles, comparing the resolved wake and integrated forces to existing computational and experimental results. The integrated thrust values showed very close agreement across all tested pitch angles, while the power was slightly overpredicted, resulting in underprediction of the Figure of Merit. Meanwhile, the tip vortices were preserved for multiple blade passages, indicating an improvement in vortex preservation when compared with previous work. Finally, further results from a single collective pitch case were presented to provide a more complete picture of the solver results.
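The Figure of Merit relationship behind the abstract's thrust/power comparison is standard and easy to make concrete: FM compares ideal induced power to actual power through the thrust and power coefficients, so overpredicted power at matched thrust directly lowers FM. Below is a minimal Python sketch of that bookkeeping; all numerical values are illustrative, not S-76 data or solver output.

```python
import numpy as np

# Hover Figure of Merit from integrated thrust and power:
# FM = C_T^{3/2} / (sqrt(2) * C_P). All inputs are hypothetical.

rho = 1.225          # air density, kg/m^3
R = 6.7              # rotor radius, m (illustrative)
A = np.pi * R**2     # rotor disk area, m^2
v_tip = 200.0        # blade tip speed, m/s (illustrative)

def figure_of_merit(thrust, power):
    """Ratio of ideal induced power to actual power in hover."""
    ct = thrust / (rho * A * v_tip**2)    # thrust coefficient
    cp = power / (rho * A * v_tip**3)     # power coefficient
    return ct**1.5 / (np.sqrt(2.0) * cp)

# an overprediction of power at matched thrust underpredicts FM
print(figure_of_merit(thrust=60e3, power=1.0e6),
      figure_of_merit(thrust=60e3, power=1.1e6))
```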
Abstract:
This lecture course covers the theory of so-called duality-based a posteriori error estimation of DG finite element methods. In particular, we formulate consistent and adjoint consistent DG methods for the numerical approximation of both the compressible Euler and Navier-Stokes equations; in the latter case, the viscous terms are discretized based on employing an interior penalty method. By exploiting a duality argument, adjoint-based a posteriori error indicators will be established. Moreover, application of these computable bounds within automatic adaptive finite element algorithms will be developed. Here, a variety of isotropic and anisotropic adaptive strategies, as well as $hp$-mesh refinement will be investigated.
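For readers new to the duality argument the course builds on, here is a minimal algebraic sketch of a dual-weighted-residual estimate: solve the primal problem on a coarse mesh, the adjoint of a goal functional on a finer one, and weight the primal residual by the adjoint to estimate the goal error. The 1D Poisson setting, meshes, and goal functional are illustrative assumptions, not the course's DG formulation.

```python
import numpy as np

# Dual-weighted-residual (DWR) sketch for -u'' = f on (0,1), u(0) = u(1) = 0,
# with linear finite elements and the goal functional J(u) = \int_0^1 u dx.

def assemble(n):
    """Stiffness matrix, interior nodes, and mesh width on a uniform n-cell mesh."""
    h = 1.0 / n
    A = (np.diag(np.full(n - 1, 2.0)) + np.diag(np.full(n - 2, -1.0), 1)
         + np.diag(np.full(n - 2, -1.0), -1)) / h
    x = np.linspace(0.0, 1.0, n + 1)[1:-1]
    return A, x, h

f = lambda x: np.pi**2 * np.sin(np.pi * x)       # exact solution u = sin(pi x)

n = 8
A, x, h = assemble(n)
u = np.linalg.solve(A, h * f(x))                 # primal solve (lumped load)

Af, xf, hf = assemble(2 * n)                     # finer mesh carries the adjoint
z = np.linalg.solve(Af, hf * np.ones_like(xf))   # adjoint load j = 1 for J(u)

# prolongate u to the fine mesh and weight the residual with the adjoint
u_full = np.concatenate(([0.0], u, [0.0]))
uf = np.interp(xf, np.linspace(0.0, 1.0, n + 1), u_full)
r = hf * f(xf) - Af @ uf
print("estimated goal error:", z @ r)   # local |z_i * r_i| would drive refinement
```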
Abstract:
In design and manufacturing, mesh segmentation is required for FACE construction in boundary representation (BRep), which in turn is central for feature-based design, machining, parametric CAD and reverse engineering, among others -- Although mesh segmentation is dictated by geometry and topology, this article focuses on the topological aspect (graph spectrum), as we consider that this tool has not been fully exploited -- We preprocess the mesh to obtain an edge-length homogeneous triangle set and calculate its Graph Laplacian -- We then produce a monotonically increasing permutation of the Fiedler vector (2nd eigenvector of the Graph Laplacian) for encoding the connectivity among part feature submeshes -- Within the permuted vector, discontinuities larger than a threshold (interactively set by a human) determine the partition of the original mesh -- We present tests of our method on large complex meshes, whose results mostly conform to the BRep FACE partition -- The achieved segmentations properly locate most manufacturing features, although human interaction is required to avoid over-segmentation -- Future work includes an iterative application of this algorithm to progressively sever features of the mesh left from previous submesh removals
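The spectral step described above fits in a few lines, under assumed inputs: given a vertex (or triangle) adjacency matrix, take the Fiedler vector of the graph Laplacian, sort it, and cut wherever consecutive sorted entries jump by more than a user-set threshold. The toy graph and threshold below are placeholders for a real mesh.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse import csgraph

def fiedler_segments(adjacency, threshold):
    """Partition vertices where the sorted Fiedler vector jumps by > threshold."""
    L = csgraph.laplacian(adjacency)
    vals, vecs = eigh(L)                       # eigenvalues in ascending order
    fiedler = vecs[:, 1]                       # 2nd eigenvector of the Laplacian
    order = np.argsort(fiedler)                # monotonically increasing permutation
    cuts = np.where(np.diff(fiedler[order]) > threshold)[0] + 1
    return np.split(order, cuts)               # one index set per submesh

# toy stand-in for a mesh graph: two 4-cliques joined by a single edge;
# the one large gap in the sorted Fiedler vector severs them
B = np.ones((4, 4)) - np.eye(4)
adj = np.block([[B, np.zeros((4, 4))], [np.zeros((4, 4)), B]])
adj[3, 4] = adj[4, 3] = 1.0
print(fiedler_segments(adj, threshold=0.3))    # -> two groups of four vertices
```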
Abstract:
Given a 2-manifold triangular mesh \(M \subset {\mathbb {R}}^3\), with border, a parameterization of \(M\) is a FACE or trimmed surface \(F=\{S,L_0,\ldots, L_m\}\) -- \(F\) is a connected subset or region of a parametric surface \(S\), bounded by a set of LOOPs \(L_0,\ldots ,L_m\) such that each \(L_i \subset S\) is a closed 1-manifold having no intersection with the other \(L_j\) LOOPs -- The parametric surface \(S\) is a statistical fit of the mesh \(M\) -- \(L_0\) is the outermost LOOP bounding \(F\) and \(L_i\) is the LOOP of the ith hole in \(F\) (if any) -- The problem of parameterizing triangular meshes is relevant for reverse engineering, tool path planning, feature detection, redesign, etc -- State-of-the-art mesh procedures parameterize a rectangular mesh \(M\) -- To improve such procedures, we report here the implementation of an algorithm which parameterizes meshes \(M\) presenting holes and concavities -- We synthesize a parametric surface \(S \subset {\mathbb {R}}^3\) which approximates a superset of the mesh \(M\) -- Then, we compute a set of LOOPs trimming \(S\), and therefore completing the FACE \(F=\{S,L_0,\ldots ,L_m\}\) -- Our algorithm gives satisfactory results for \(M\) having low Gaussian curvature (i.e., \(M\) being quasi-developable or developable) -- This assumption is a reasonable one, since \(M\) is the product of manifold segmentation preprocessing -- Our algorithm computes: (1) a manifold learning mapping \(\phi : M \rightarrow U \subset {\mathbb {R}}^2\), (2) an inverse mapping \(S: W \subset {\mathbb {R}}^2 \rightarrow {\mathbb {R}}^3\), with \(W\) being a rectangular grid containing and surpassing \(U\) -- To compute \(\phi\) we test Isomap, Laplacian Eigenmaps and Hessian locally linear embedding (best results with HLLE) -- For the back mapping (NURBS) \(S\) the crucial step is to find a control polyhedron \(P\), which is an extrapolation of \(M\) -- We calculate \(P\) by extrapolating radial basis functions that interpolate points inside \(\phi (M)\) -- We successfully test our implementation with several datasets that present concavities and holes and are extremely non-developable -- Ongoing work is being devoted to manifold segmentation which facilitates mesh parameterization
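The two mappings \(\phi\) and \(S\) can be imitated with off-the-shelf tools; the sketch below uses scikit-learn's Hessian LLE for \(\phi\) and a SciPy radial-basis-function fit in place of the authors' NURBS back-mapping, on a synthetic quasi-developable strip. The libraries, parameters, and data are all assumptions for illustration, not the reported implementation.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from scipy.interpolate import RBFInterpolator

# synthetic quasi-developable patch standing in for a segmented mesh M
rng = np.random.default_rng(0)
u, v = rng.uniform(0.0, 1.0, (2, 400))
M = np.column_stack([u, np.sin(2.0 * u), v])

# forward mapping phi: M -> U in R^2 via Hessian LLE (HLLE)
phi = LocallyLinearEmbedding(method="hessian", n_neighbors=12, n_components=2)
U = phi.fit_transform(M)

# back mapping S: R^2 -> R^3, here an RBF fit rather than a NURBS surface;
# evaluating it on a rectangular grid W containing (and surpassing) U gives
# the extrapolated surface points a control polyhedron P would be fitted to
S = RBFInterpolator(U, M, kernel="thin_plate_spline")
g = np.linspace(U.min(axis=0) - 0.1, U.max(axis=0) + 0.1, 20)   # shape (20, 2)
W = np.stack(np.meshgrid(g[:, 0], g[:, 1]), axis=-1).reshape(-1, 2)
print(S(W).shape)    # (400, 3): R^3 points over the parameter grid
```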
Abstract:
In the presented thesis work, the meshfree method with distance fields was coupled with the lattice Boltzmann method to obtain solutions of fluid-structure interaction problems. The thesis work involved the development and implementation of numerical algorithms, data structures, and software. Numerical and computational properties of the coupling algorithm combining the meshfree method with distance fields and the lattice Boltzmann method were investigated. Convergence and accuracy of the methodology were validated against analytical solutions. The research was focused on fluid-structure interaction solutions in complex, mesh-resistant domains, as both the lattice Boltzmann method and the meshfree method with distance fields are particularly adept in these situations. Furthermore, the fluid solution provided by the lattice Boltzmann method is massively scalable, allowing extensive use of cutting-edge parallel computing resources to accelerate this phase of the solution process. The meshfree method with distance fields allows for exact satisfaction of boundary conditions, making it possible to exactly capture the effects of the fluid field on the solid structure.
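To make the fluid half of such a coupling concrete, here is a minimal single-relaxation-time D2Q9 lattice Boltzmann step on a periodic grid. The meshfree/distance-field structural side and the actual coupling loop are beyond a few lines, and every parameter below is an illustrative assumption.

```python
import numpy as np

# D2Q9 lattice velocities, weights, and BGK relaxation time (assumed)
cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distributions for density rho and velocity (ux, uy)."""
    cu = 3.0 * (cx[:, None, None]*ux + cy[:, None, None]*uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5*cu**2 - usq)

def lbm_step(f):
    """One collide-and-stream update on a periodic domain."""
    rho = f.sum(0)
    ux = (cx[:, None, None] * f).sum(0) / rho
    uy = (cy[:, None, None] * f).sum(0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau           # BGK collision
    for i in range(9):                                    # streaming
        f[i] = np.roll(np.roll(f[i], cx[i], axis=0), cy[i], axis=1)
    return f

f = equilibrium(np.ones((32, 32)), *np.zeros((2, 32, 32)))  # quiescent start
for _ in range(10):
    f = lbm_step(f)
```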
Abstract:
The application of 3D grain-based modelling techniques is investigated in both small and large scale 3DEC models, in order to simulate brittle fracture processes in low-porosity crystalline rock. Mesh dependency in 3D grain-based models (GBMs) is examined through a number of cases to compare Voronoi and tetrahedral grain assemblages. Various methods are used in the generation of tessellations, each with a number of issues and advantages. A number of comparative UCS test simulations capture the distinct failure mechanisms, strength profiles, and progressive damage development using various Voronoi and tetrahedral GBMs. Relative calibration requirements are outlined to generate similar macro-strength and damage profiles for all the models. The results confirmed a number of inherent model behaviors that arise due to mesh dependency. In Voronoi models, inherent tensile failure mechanisms are produced by internal wedging and rotation of Voronoi grains. This results in a combined dependence on frictional and cohesive strength. In tetrahedral models, increased kinematic freedom of grains and an abundance of straight, connected failure pathways cause a preference for shear failure. This results in an inability to develop significant normal stresses, causing cohesive strength dependence. In general, Voronoi models require high relative contact tensile strength values, with lower contact stiffness and contact cohesive strength compared to tetrahedral tessellations. Upscaling of 3D GBMs is investigated for both Voronoi and tetrahedral tessellations using a case study from AECL's Mine-by-Experiment at the Underground Research Laboratory. An upscaled tetrahedral model was able to reasonably simulate damage development in the roof, forming a notch geometry, by adjusting the cohesive strength. An upscaled Voronoi model underestimated the damage development in the roof and floor, and overestimated the damage in the side-walls. This was attributed to discretization resolution limitations.
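The two grain geometries compared above can be generated from the same seed points with standard tools; the following sketch contrasts a Voronoi tessellation with a tetrahedral (Delaunay) one. It does not reproduce 3DEC's grain generation or contact laws; seeds and counts are arbitrary.

```python
import numpy as np
from scipy.spatial import Voronoi, Delaunay

rng = np.random.default_rng(1)
seeds = rng.uniform(0.0, 1.0, (200, 3))   # grain seed points in a unit cube

vor = Voronoi(seeds)                      # polyhedral grains, one cell per seed
tet = Delaunay(seeds)                     # tetrahedral grains

# ridges are the grain-grain contacts where cohesive/tensile laws would act
print("Voronoi contacts:", len(vor.ridge_points))
print("tetrahedra:", len(tet.simplices))
```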
Abstract:
The effects of an increase in cod end mesh size from 55 to 60 and 70 mm and a change of mesh configuration from 55 mm diamond to 55 mm square mesh on the size selectivity of four by-catch species (the red shrimp Aristeus antennatus, the European hake Merluccius merluccius, the horse mackerel Trachurus trachurus and the blue whiting Micromesistius poutassou) commonly captured in the crustacean fishery off the Portuguese south coast were evaluated. Selectivity parameters for blue whiting, the most abundant species in the catches, were estimated taking into account between-haul variation, while for the remaining species, captured in much lower quantities, the selectivity estimates were based on data pooled by length class for all hauls within the same cod end. Length at 50% retention, L-50, was found to increase with mesh size and with the change in mesh configuration for all the studied species. For blue whiting, trawling depth and cod end catch were also found to play a role in between-haul variation by increasing L-50. The results suggest that an increase in the current minimum mesh size of 55 mm to 70 mm would be advisable to be compatible with the minimum landing sizes (MLSs) of 29 mm carapace length and 27 cm total length for red shrimp and hake, respectively, and would greatly reduce the amount of discards, particularly those of blue whiting, which accounted for approximately 50% of the total catch weight. Horse mackerel was the only species for which the use of a larger mesh size would result in a significant escapement of individuals above the MLS of 15 cm. (C) 2002 Elsevier Science B.V. All rights reserved.
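The L-50 values discussed above come from fitting a logistic retention curve to catch-at-length data. A sketch of that fit follows, with made-up data in place of the paper's hauls and no between-haul variation model; the parametrization uses L50 and the selection range SR = L75 - L25.

```python
import numpy as np
from scipy.optimize import curve_fit

def retention(length, l50, sr):
    """Logistic selectivity: retention is 0.5 at l50, 0.25/0.75 at l50 -/+ sr/2."""
    return 1.0 / (1.0 + np.exp(-2.0 * np.log(3.0) * (length - l50) / sr))

# synthetic retention-at-length observations (hypothetical, not the paper's data)
length = np.arange(15.0, 40.0)           # length classes, cm
rng = np.random.default_rng(2)
observed = retention(length, 27.0, 6.0) + rng.normal(0.0, 0.02, length.size)

(l50, sr), _ = curve_fit(retention, length, observed, p0=(25.0, 5.0))
print(f"L50 = {l50:.1f} cm, selection range = {sr:.1f} cm")
```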
Abstract:
The effects of an increase in cod end mesh size from 55 to 60 and 70 mm and a change of mesh configuration from diamond to square mesh on the size selectivity for rose shrimp Parapenaeus longirostris and Norway lobster Nephrops norvegicus captured off the Portuguese south coast were evaluated. The results were analysed taking into account between-haul variation in selectivity, and indicate a significant increase in L-50 for rose shrimp with an increase in mesh size or with the use of a square mesh cod end, while for Norway lobster only mesh configuration was found to affect this parameter. Two other important external variables were identified: the trawling depth and the cod end catch, which influence between-haul variation by increasing the selection range for rose shrimp and Norway lobster, respectively. The results obtained suggest that an increase in the current minimum mesh size of 55 mm would be advisable for rose shrimp in order to respect the minimum landing size of 24 mm carapace length presently established for this species. Moreover, trawling for rose shrimp should be avoided at depths above 200 m, in order to avoid catches consisting almost exclusively of juveniles. Such an increase in mesh size would have a minor impact in terms of losses of individuals above the minimum landing size for Norway lobster and would contribute to reducing the amount of discards in this fishery. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
We propose an alternative crack propagation algorithm which effectively circumvents the variable transfer procedure adopted with classical mesh adaptation algorithms. The present alternative consists of two stages: a mesh-creation stage where a local damage model is employed with the objective of defining a crack-conforming mesh, and a subsequent analysis stage with a localization limiter in the form of a modified screened Poisson equation, which dispenses with crack path calculations. In the second stage, the crack naturally occurs within the refined region. A staggered scheme for the standard equilibrium and screened Poisson equations is used in this second stage. Element subdivision is based on edge split operations using a constitutive quantity (damage). To assess the robustness and accuracy of this algorithm, we use five quasi-brittle benchmarks, all successfully solved.
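The localization limiter named above admits a very small illustration: the regularized damage field d solves the screened Poisson equation d - l² d'' = d_loc, which spreads a nodal damage spike over the length scale l. The 1D finite-difference setup below is a sketch with assumed values, not the article's finite element implementation.

```python
import numpy as np

# screened Poisson limiter in 1D: solve (I - l^2 D2) d = d_loc with zero-flux
# (Neumann) ends; the spike in d_loc is smeared over the length scale l
n, h, ell = 201, 0.01, 0.05
d_loc = np.zeros(n)
d_loc[n // 2] = 1.0                      # local damage concentrated at one node

c = (ell / h) ** 2
A = np.eye(n)
for i in range(1, n - 1):                # interior rows: 1 + l^2 * (-d''/dx^2)
    A[i, i - 1] -= c
    A[i, i] += 2.0 * c
    A[i, i + 1] -= c
A[0, 0] += 2.0 * c;  A[0, 1] -= 2.0 * c      # mirrored ghost nodes at the ends
A[-1, -1] += 2.0 * c; A[-1, -2] -= 2.0 * c

d = np.linalg.solve(A, d_loc)
print("peak:", d.max(), "spread width ~", (d > d.max() / np.e).sum() * h)
```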
Abstract:
Crop monitoring and, more generally, land use change detection are of primary importance for analyzing spatio-temporal dynamics and their impacts on the environment. This is especially true in a region such as the State of Mato Grosso (south of the Brazilian Amazon Basin), which hosts an intensive pioneer front. Deforestation in this region has often been explained by soybean expansion over the last three decades. Remote sensing techniques may now represent an efficient and objective way to quantify, through crop mapping studies, to what extent crop expansion really is a factor of deforestation. Due to the special characteristics of the soybean-producing farms in Mato Grosso (areas varying between 1000 and 40000 hectares, with individual fields often bigger than 100 hectares), Moderate Resolution Imaging Spectroradiometer (MODIS) data, with near-daily temporal resolution and 250 m spatial resolution, can be considered an adequate resource for crop mapping. In particular, multitemporal vegetation index (VI) studies have commonly been used for this task [1] [2]. In this study, 16-day compositions of EVI (MOD13Q1 product) data are used. However, although these data are already processed, multitemporal VI profiles still remain noisy due to cloudiness (extremely frequent in a tropical region such as the southern Amazon Basin), sensor problems, errors in atmospheric corrections, or BRDF effects. Thus, many works have tried to develop algorithms that smooth multitemporal VI profiles in order to improve subsequent classification. The goal of this study is to compare and test different smoothing algorithms in order to select the one that best serves the task at hand, namely classifying crop classes. Those classes correspond to 6 different agricultural managements observed in Mato Grosso through intensive field work which resulted in the mapping of more than 1000 individual fields. The agricultural managements mentioned above are based on combinations of soy, cotton, corn, millet and sorghum crops sown in single or double crop systems. Due to the difficulty in separating certain classes because of overly similar agricultural calendars, the classification is reduced to 3 classes: cotton (single crop), soy and cotton (double crop), and soy (single or double crop with corn, millet or sorghum). The classification uses training data obtained in the 2005-2006 harvest and is then tested on the 2006-2007 harvest. In a first step, four smoothing techniques are presented and criticized: Best Index Slope Extraction (BISE) [3], Mean Value Iteration (MVI) [4], Weighted Least Squares (WLS) [5] and the Savitzky-Golay filter (SG) [6] [7]. These techniques are then implemented and visually compared on a few individual pixels, allowing a first selection among the four studied techniques. The WLS and SG techniques are selected according to criteria proposed by [8]: the ability to eliminate frequent noise, to conserve the upper values of the VI profiles, and to keep the temporality of the profiles. The selected algorithms are then programmed and applied to the MODIS/TERRA EVI data (16-day composition periods). Separability tests based on the Jeffries-Matusita distance are performed in order to see whether the algorithms manage to improve the potential for differentiation between the classes. Those tests are performed on the overall profile (comprising 23 MODIS images) as well as on each MODIS sub-period of the profile [1].
This last test is doubly interesting because it allows comparing the smoothing techniques and also enables the selection of a set of images which carries more information on the separability between the classes. The selected dates can then be used to perform a supervised classification. Here three different classifiers are tested, to evaluate whether the smoothing techniques have a particular effect on the classification depending on the classifier used: the Maximum Likelihood classifier, the Spectral Angle Mapper (SAM) classifier and a CHAID improved decision tree. The separability tests on the overall profile show that the smoothed profiles do not substantially improve the potential for discrimination between classes when compared with the original data. However, the same tests performed on the MODIS sub-periods show better results for the smoothing algorithms. The classification results confirm this first analysis. The Kappa coefficients are always better with the smoothing techniques, and the results obtained with the WLS and SG smoothed profiles are nearly equal. However, the results differ depending on the classifier used. The impact of the smoothing algorithms is much greater when using the decision tree model: it yields a gain of 0.1 in the Kappa coefficient. When using the Maximum Likelihood and SAM models, the gain remains positive but is much lower (Kappa improved by only 0.02). Thus, this work aims to demonstrate the utility of smoothing VI profiles in order to improve the final results. However, the choice of smoothing algorithm has to be made considering the original data and the classifier models used. In this case the Savitzky-Golay filter gave the best results.
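Of the two retained filters, the Savitzky-Golay one is a library call away, so a sketch is easy to give. The 23-point series below mimics one season of 16-day EVI composites with cloud-induced downward noise; the signal, window length, and polynomial order are illustrative choices, not the study's settings.

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.arange(23)                                 # 23 MODIS 16-day composites
evi = 0.45 + 0.25 * np.sin(2 * np.pi * t / 23)    # idealized seasonal EVI profile
rng = np.random.default_rng(3)
noisy = evi - np.abs(rng.normal(0.0, 0.08, 23))   # clouds bias EVI downward

# local polynomial smoothing preserves the upper envelope of the profile
smooth = savgol_filter(noisy, window_length=7, polyorder=3)
print(np.round(smooth - evi, 3))                  # residual after smoothing
```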
Abstract:
An unstructured mesh finite volume discretisation method for simulating diffusion in anisotropic media in two-dimensional space is discussed. This technique is considered as an extension of the fully implicit hybrid control-volume finite-element method and it retains the local continuity of the flux at the control volume faces. A least squares function reconstruction technique together with a new flux decomposition strategy is used to obtain an accurate flux approximation at the control volume face, ensuring that the overall accuracy of the spatial discretisation maintains second order. This paper highlights that the new technique coincides with the traditional shape function technique when the correction term is neglected and that it significantly increases the accuracy of the previous linear scheme on coarse meshes when applied to media that exhibit very strong to extreme anisotropy ratios. It is concluded that the method can be used on both regular and irregular meshes, and appears independent of the mesh quality.
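The reconstruction step such schemes rely on can be shown in miniature: fit a cell-centred gradient by least squares from neighbour differences, then evaluate the anisotropic diffusive flux through a face. Geometry, diffusivity, and values below are made up, and the paper's flux decomposition and correction term are not reproduced.

```python
import numpy as np

def ls_gradient(xc, uc, xn, un):
    """Least-squares gradient at a cell from neighbouring cell-centre values."""
    d = xn - xc                          # centre-to-centre distance vectors
    rhs = un - uc                        # value differences
    g, *_ = np.linalg.lstsq(d, rhs, rcond=None)
    return g

K = np.array([[100.0, 0.0], [0.0, 1.0]])    # strongly anisotropic diffusivity
xc, uc = np.array([0.0, 0.0]), 1.0          # cell centre and value
xn = np.array([[1.0, 0.1], [-0.2, 1.0], [0.3, -0.9], [-1.0, -0.2]])
un = np.array([1.5, 0.8, 1.2, 0.6])         # neighbour values (hypothetical)

grad = ls_gradient(xc, uc, xn, un)
n = np.array([1.0, 0.0])                    # unit normal of a control-volume face
print("diffusive face flux:", -(K @ grad) @ n)
```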
Abstract:
In this study, poly(ε-caprolactone) (PCL) and its collagen composite blend (PCL/Col) were fabricated into scaffolds using the electrospinning method. Incorporated collagen was present on the surface of the fibers, and it modulated the attachment and proliferation of pig bone marrow mesenchymal cells (pBMMCs). Osteogenic differentiation markers were more pronounced when these cells were cultured on PCL/Col fibrous meshes, as determined by immunohistochemistry for collagen type I, osteopontin, and osteocalcin. Matrix mineralization was observed only on osteogenically induced PCL/Col constructs. Long bone analogs were created by wrapping osteogenic cell sheets around the PCL/Col meshes to form hollow cylindrical cell-scaffold constructs. Culturing these constructs under dynamic conditions enhanced bone-like tissue formation and mechanical strength. We conclude that electrospun PCL/Col mesh is a promising material for bone engineering applications. Its combination with osteogenic cell sheets offers a novel and promising strategy for engineering of tubular bone analogs.
Abstract:
Damage localization induced by strain softening can be predicted by the direct minimization of a global energy function. This article concerns the computational strategy for implementing this principle for softening materials such as concrete. Instead of using heuristic global optimization techniques, our strategies are a hybrid of local optimization methods with a path-finding approach to ensure a global optimum. With admissible nodal displacements being independent variables, it is easy to deal with the geometric (mesh) constraint conditions. The direct search optimization methods recover the localized solutions for a range of softening lattice models which are representative of quasi-brittle structures.
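A toy version of the principle, under loudly assumed ingredients: a chain of springs with a generic softening potential under prescribed end displacement, whose total energy is minimized directly from multiple random starts (a crude stand-in for the article's hybrid path-finding strategy). The global minimizer localizes all deformation in one spring, which is the damage-localization behavior the abstract describes.

```python
import numpy as np
from scipy.optimize import minimize

def spring_energy(e):
    """Hypothetical softening law: stress e*exp(-e^2/2) peaks, then decays."""
    return 1.0 - np.exp(-0.5 * e**2)

def total_energy(u_int, u_end=4.0):
    """Energy of a 5-spring chain: clamped left end, right end pulled to u_end."""
    u = np.concatenate(([0.0], u_int, [u_end]))
    return spring_energy(np.diff(u)).sum()

# local minimizations from random starts; keep the lowest-energy result
rng = np.random.default_rng(4)
best = min((minimize(total_energy, rng.uniform(0.0, 4.0, 4)) for _ in range(30)),
           key=lambda res: res.fun)
# one large jump between neighbouring displacements marks the localized spring
print(np.round(np.concatenate(([0.0], best.x, [4.0])), 2))
```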
Abstract:
The Node-based Local Mesh Generation (NLMG) algorithm, which is free of mesh inconsistency, is one of the core algorithms in the Node-based Local Finite Element Method (NLFEM) used to achieve a seamless link between mesh generation and stiffness matrix calculation, which helps to improve the parallel efficiency of FEM. Furthermore, the key to ensuring the efficiency and reliability of NLMG is to determine the candidate satellite-node set of a central node quickly and accurately. This paper develops a Fast Local Search Method based on Uniform Bucket (FLSMUB) and a Fast Local Search Method based on Multilayer Bucket (FLSMMB), and applies them successfully to this decisive problem, i.e., determining the candidate satellite-node set of any central node in the NLMG algorithm. Using FLSMUB or FLSMMB, the NLMG algorithm becomes a practical tool to reduce the parallel computation cost of FEM. Parallel numerical experiments validate that both FLSMUB and FLSMMB are fast, reliable and efficient for the problems they suit, and that they are especially effective for large-scale parallel computations.
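Neither bucket structure is spelled out in the abstract, but the uniform-bucket variant is the classic grid hash, so a guessed 2D sketch follows: nodes are binned into cells of size r, and a central node's candidate satellite set is gathered from its own and the eight neighbouring buckets only, making the search cost independent of the total node count.

```python
import numpy as np
from collections import defaultdict

def build_buckets(points, r):
    """Hash each node index into the grid cell of size r that contains it."""
    buckets = defaultdict(list)
    for i, p in enumerate(points):
        buckets[tuple((p // r).astype(int))].append(i)
    return buckets

def candidates(points, buckets, r, center):
    """Candidate satellite nodes of `center`: neighbours within distance r."""
    cell = (points[center] // r).astype(int)
    out = []
    for dx in (-1, 0, 1):                 # only 9 buckets are ever inspected
        for dy in (-1, 0, 1):
            out += buckets.get((cell[0] + dx, cell[1] + dy), [])
    return [j for j in out if j != center
            and np.linalg.norm(points[j] - points[center]) <= r]

pts = np.random.default_rng(5).uniform(0.0, 10.0, (1000, 2))
buckets = build_buckets(pts, r=0.5)
print(len(candidates(pts, buckets, 0.5, center=0)))
```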