862 results for progressive mesh
Abstract:
We construct an empirically informed computational model of fiscal federalism, testing whether horizontal or vertical equalization can solve the fiscal externality problem in an environment in which heterogeneous agents can move and vote. The model expands on the literature by considering the case of progressive local taxation. Although the consequences of progressive taxation under fiscal federalism are well understood, they have not been studied in a context with tax equalization, despite widespread implementation. The model also expands on the literature by comparing the standard median voter model with a realistic alternative voting mechanism. We find that fiscal federalism with progressive taxation naturally leads to segregation as well as inefficient and inequitable public goods provision, while the alternative voting mechanism generates more efficient, though less equitable, public goods provision. Equalization policy, under both types of voting, is largely undermined by micro-actors' choices. For this reason, the model also does not find the anticipated effects of vertical equalization discouraging public goods spending among wealthy jurisdictions and horizontal equalization encouraging it among poor jurisdictions. Finally, we identify two optimal scenarios, superior to both complete centralization and complete devolution. These scenarios are not only Pareto optimal, but also conform to a Rawlsian view of justice, offering the best possible outcome for the worst-off. Despite offering the best possible outcomes, both scenarios still entail significant economic segregation and inequitable public goods provision. Under the optimal scenarios, agents shift the bulk of revenue collection to the federal government, with few jurisdictions maintaining a small local tax.
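As a purely illustrative sketch of the standard median voter mechanism mentioned above (the function and sample tax rates are ours, not the paper's model): with single-peaked preferences over a local tax rate, the median proposal defeats any alternative in pairwise majority votes.

```python
# Illustrative sketch of majority voting over a local tax rate (not the
# paper's model): under single-peaked preferences, the median proposal
# is the Condorcet winner.
def median_voter_tax_rate(preferred_rates):
    """Tax rate adopted by majority rule under single-peaked preferences."""
    rates = sorted(preferred_rates)
    n = len(rates)
    if n % 2 == 1:
        return rates[n // 2]
    # with an even electorate, any rate between the two middle proposals
    # is a majority equilibrium; take the midpoint as a convention
    return (rates[n // 2 - 1] + rates[n // 2]) / 2

# five agents with heterogeneous incomes propose different rates
chosen = median_voter_tax_rate([0.45, 0.05, 0.30, 0.10, 0.20])
```

Under progressive local taxation, each agent's preferred rate depends on its income and mobility options, which is what lets micro-level moving and voting undermine equalization policy in the model.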
Abstract:
We present an innovative system to encode and transmit textured multi-resolution 3D meshes in a progressive way, with no need to send several texture images, one for each mesh LOD (Level Of Detail). All texture LODs are created from the finest one (associated with the finest mesh), but can be reconstructed progressively from the coarsest thanks to refinement images calculated in the encoding process, and transmitted only if needed. This allows us to adjust the LOD/quality of both 3D mesh and texture according to the rendering power of the device that will display them, and to the network capacity. Additionally, we achieve substantial savings in data transmission by avoiding texture coordinates altogether, which are generated automatically thanks to an unwrapping system agreed upon by both encoder and decoder.
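A hedged illustration of the progressive-texture idea (not the paper's actual codec): coarser LODs are derived from the finest level, and each refinement image stores the residual the coarser LOD is missing, so the decoder rebuilds finer LODs only when needed. A 1-D signal stands in for the 2-D texture here.

```python
# Illustrative sketch, not the paper's codec: encode a "texture" as its
# coarsest LOD plus per-level refinement residuals; decode progressively.
def downsample(tex):
    """Halve resolution by averaging adjacent samples."""
    return [(tex[i] + tex[i + 1]) / 2 for i in range(0, len(tex), 2)]

def upsample(tex):
    """Double resolution by sample replication."""
    return [v for v in tex for _ in range(2)]

def encode(finest, levels):
    """Return (coarsest LOD, refinement images ordered finest-first)."""
    refinements, current = [], finest
    for _ in range(levels):
        coarse = downsample(current)
        up = upsample(coarse)
        refinements.append([c - u for c, u in zip(current, up)])  # residual
        current = coarse
    return current, refinements

def decode(coarsest, refinements):
    """Progressively rebuild finer LODs; a weak device can stop early."""
    tex = coarsest
    for residual in reversed(refinements):
        up = upsample(tex)
        tex = [u + r for u, r in zip(up, residual)]
    return tex

base, refs = encode([0.0, 2.0, 4.0, 8.0], levels=2)
```

Only the refinement images actually requested by the client need to be transmitted, which is what decouples the texture LOD from the mesh LOD stream.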
Abstract:
Mesh adaptation based on error estimation has become a key technique to improve the accuracy of computational-fluid-dynamics computations. The adjoint-based approach for error estimation is one of the most promising techniques for computational-fluid-dynamics applications. Nevertheless, the level of implementation of this technique in the aeronautical industrial environment is still low because it is a computationally expensive method. In the present investigation, a new mesh refinement method based on estimation of truncation error is presented in the context of finite-volume discretization. The estimation method uses auxiliary coarser meshes to estimate the local truncation error, which can be used for driving an adaptation algorithm. The method is demonstrated in the context of two-dimensional NACA0012 and three-dimensional ONERA M6 wing inviscid flows, and the results are compared against the adjoint-based approach and physical sensors based on features of the flow field.
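The coarse-mesh estimation idea can be sketched on a 1-D model problem (the central-difference operator and smooth field below are our illustrative choices, not the paper's finite-volume discretization): applying the coarse-grid operator to the restricted fine solution and subtracting the restricted fine-grid residual yields an estimate of the local truncation error.

```python
import math

# Hedged sketch of tau-estimation with an auxiliary coarse mesh:
# tau ~ L_2h(restrict(u_h)) - restrict(L_h(u_h)) at shared nodes.
def laplacian(u, h):
    """Second-order central difference at interior nodes."""
    return [(u[i - 1] - 2 * u[i] + u[i + 1]) / h ** 2
            for i in range(1, len(u) - 1)]

def estimate_truncation_error(u_fine, h):
    """Estimate local truncation error at the coarse interior nodes."""
    u_coarse = u_fine[::2]                       # injection onto coarse mesh
    L_coarse = laplacian(u_coarse, 2 * h)
    L_fine_at_coarse = laplacian(u_fine, h)[1::2]  # fine residual, shared nodes
    return [a - b for a, b in zip(L_coarse, L_fine_at_coarse)]

# sample a smooth field on a fine 1-D mesh
n = 16
u = [math.sin(math.pi * i / n) for i in range(n + 1)]
tau = estimate_truncation_error(u, 1 / n)
```

For a second-order scheme, halving h should shrink the estimate by roughly a factor of four; cells where the estimate stays large are the natural targets for the adaptation algorithm.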
Abstract:
An automatic Mesh Generation Preprocessor for BE programs with a considerable range of capabilities has been developed. This program allows almost any kind of geometry and topology to be defined with a small amount of external data, and with an accurate approximation of the boundary geometry. The error-checking facility is also very valuable for fast verification of the results.
Abstract:
Development of an interpolation algorithm based on octree decomposition and compactly supported radial basis functions for mesh motion in aeroelastic problems.
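A minimal sketch of the interpolation part, assuming a Wendland C2 kernel and omitting the octree acceleration entirely (kernel choice and names are ours): boundary node displacements are interpolated to interior mesh nodes with compactly supported radial basis functions, so each node only feels nearby boundary motion.

```python
# Hedged sketch of RBF mesh motion (octree neighbour search omitted).
def wendland_c2(r, radius):
    """Wendland C2 kernel: positive definite, exactly zero beyond `radius`."""
    q = r / radius
    return (1 - q) ** 4 * (4 * q + 1) if q < 1 else 0.0

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (enough for a sketch)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            factor = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= factor * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_mesh_motion(boundary_pts, boundary_disp, interior_pts, radius):
    """Fit RBF weights to boundary displacements, then move interior nodes."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    A = [[wendland_c2(dist(p, q), radius) for q in boundary_pts]
         for p in boundary_pts]
    w = solve(A, boundary_disp)
    return [sum(wj * wendland_c2(dist(p, q), radius)
                for wj, q in zip(w, boundary_pts))
            for p in interior_pts]
```

The compact support is what makes the octree pay off: only boundary nodes within `radius` of a query point contribute, so a spatial decomposition can skip the rest.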
Abstract:
Finite element hp-adaptivity is a technology that allows for very accurate numerical solutions. When applied to open region problems such as radar cross section prediction or antenna analysis, a mesh truncation method needs to be used. This paper compares the following mesh truncation methods in the context of hp-adaptive methods: Infinite Elements, Perfectly Matched Layers and an iterative boundary element based methodology. These methods have been selected because they are exact at the continuous level (a desirable feature required by the extreme accuracy delivered by the hp-adaptive strategy) and they are easy to integrate with the logic of hp-adaptivity. The comparison is mainly based on the number of degrees of freedom needed for each method to achieve a given level of accuracy. Computational times are also included. Two-dimensional examples are used, but the conclusions can be directly extrapolated to the three-dimensional case.
Abstract:
A simplified CFD wake model based on the actuator disk concept is used to simulate the wind turbine, represented by a disk upon which a distribution of forces, defined as axial momentum sources, is applied to the incoming non-uniform flow. The rotor is assumed to be uniformly loaded, with the exerted forces a function of the incident wind speed, the thrust coefficient and the rotor diameter. The model is tested under different parameterizations of turbulence models and validated through experimental measurements downwind of a wind turbine in terms of wind speed deficit and turbulence intensity.
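The uniform disk loading can be sketched numerically; the formulas follow standard actuator-disk momentum theory and the function names are ours, not the paper's implementation.

```python
import math

# Hedged sketch of uniform actuator-disk loading: the rotor is replaced by
# a disk whose total axial thrust depends on the incident wind speed, the
# thrust coefficient and the rotor diameter.
def actuator_disk_thrust(u_inf, c_t, diameter, rho=1.225):
    """Total axial thrust: T = 0.5 * rho * A * C_T * U_inf^2 (N)."""
    area = math.pi * diameter ** 2 / 4
    return 0.5 * rho * area * c_t * u_inf ** 2

def axial_momentum_source(u_inf, c_t, diameter, disk_thickness, rho=1.225):
    """Uniform momentum sink per unit volume over the disk cells (N/m^3)."""
    area = math.pi * diameter ** 2 / 4
    thrust = actuator_disk_thrust(u_inf, c_t, diameter, rho)
    return -thrust / (area * disk_thickness)  # negative: opposes the flow
```

In a CFD solver the source term is added to the axial momentum equation in every cell intersected by the disk, with `u_inf` taken from the local incident (upstream) wind speed.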
Abstract:
Applications that operate on meshes are very popular in High Performance Computing (HPC) environments. In the past, many techniques have been developed in order to optimize the memory accesses for these datasets. Different loop transformations and domain decompositions are commonly used for structured meshes. However, unstructured grids are more challenging. The memory accesses, based on the mesh connectivity, do not map well to the usual linear memory model. This work presents a method to improve the memory performance which is suitable for HPC codes that operate on meshes. We develop a method to adjust the sequence in which the data are used inside the algorithm, by means of traversing and sorting the mesh. This sorted mesh can be transferred sequentially to the lower memory levels and allows for minimum data transfer requirements. The method also reduces the lower memory requirements dramatically: up to 63% of the L1 cache misses are removed in a traditional cache system. We have obtained speedups of up to 2.58 on memory operations as measured in a general-purpose CPU. An improvement is also observed with sequential access memories, where we have observed reductions of up to 99% in the required low-level memory size.
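One common way to realize such a connectivity-driven sort is a breadth-first (Cuthill-McKee-style) renumbering; the sketch below illustrates the idea, not the paper's exact sorting scheme.

```python
from collections import deque

# Hedged illustration: renumber mesh nodes by a BFS over the connectivity
# graph, so that connected nodes receive nearby indices and
# connectivity-driven accesses become closer to sequential.
def locality_ordering(adjacency):
    """Return a new node order from a BFS over the mesh connectivity graph."""
    order, seen = [], set()
    for start in range(len(adjacency)):      # handle disconnected pieces
        if start in seen:
            continue
        queue = deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            order.append(node)
            # visiting low-degree neighbours first tightens the bandwidth
            for nb in sorted(adjacency[node], key=lambda n: len(adjacency[n])):
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
    return order
```

Applying the resulting permutation to the node arrays (coordinates, solution values) makes traversals stream through memory largely in order, which is where the cache-miss and transfer-size reductions come from.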
Abstract:
The solution to the problem of finding the optimum mesh design in the finite element method with the restriction of a given number of degrees of freedom is an interesting problem, particularly in the application of the method. At present, the usual procedures introduce new degrees of freedom (remeshing) in a given mesh in order to obtain a more adequate one, from the point of view of the calculation results (error uniformity). However, from the solution of the optimum mesh problem with a specific number of degrees of freedom, some useful recommendations and criteria for the mesh construction may be drawn. For 1-D problems, namely for the simple truss and beam elements, analytical solutions have been found and they are given in this paper. For the more complex 2-D problems (plane stress and plane strain), numerical methods to obtain the optimum mesh, based on optimization procedures, have to be used. The objective function, used in the minimization process, has been the total potential energy. Some examples are presented. Finally, some conclusions and hints about the possible new developments of these techniques are also given.
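For the 1-D case, minimizing the total potential energy over node positions can be sketched on a two-element bar; the model problem (-u'' = f with u(0) = 0 and a free end at x = 1) and all names below are our illustrative choices, not the paper's formulation.

```python
# Hedged sketch: with a fixed dof budget (one interior node at position s),
# pick s to minimise the total potential energy of the linear-FE solution.
def potential_energy(s, f=1.0):
    """Pi of the linear-FE solution on nodes [0, s, 1]; Pi(u_h) = -0.5 F.u."""
    h1, h2 = s, 1.0 - s
    # stiffness for the free dofs u1 (at s) and u2 (at 1), unit EA
    k11, k12, k22 = 1 / h1 + 1 / h2, -1 / h2, 1 / h2
    # consistent nodal loads for a constant distributed load f
    f1, f2 = f * (h1 + h2) / 2, f * h2 / 2
    det = k11 * k22 - k12 * k12
    u1 = (k22 * f1 - k12 * f2) / det   # solve the 2x2 system by Cramer's rule
    u2 = (k11 * f2 - k12 * f1) / det
    return -0.5 * (f1 * u1 + f2 * u2)

# crude grid search for the optimum interior node under the dof budget
best_s = min((i / 100 for i in range(1, 100)), key=potential_energy)
```

For a constant load the exact solution has uniform curvature, so the search recovers the uniform mesh; a spatially varying load would pull the node toward the region of high curvature, which is the kind of construction criterion the analytical 1-D solutions make explicit.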