9 results for Transfinite convex dimension
at Universidad Politécnica de Madrid
Abstract:
The present contribution discusses the development of a PSE-3D instability analysis algorithm in which a matrix forming and storing approach is followed. In contrast to the spectral methods typically used in stability calculations, new stable high-order finite-difference-based numerical schemes are employed for spatial discretization. Attention is paid to the issue of efficiency, which is critical for the success of the overall algorithm. To this end, use is made of a parallelizable sparse-matrix linear algebra package which takes advantage of the sparsity offered by the finite-difference scheme and, as expected, is shown to perform substantially more efficiently than spectral collocation methods. The building blocks of the algorithm have been implemented and extensively validated, focusing on classic PSE analysis of instability in the flat-plate boundary layer, temporal and spatial BiGlobal EVP solutions (the latter necessary for the initialization of the PSE-3D), as well as standard PSE in cylindrical coordinates using the nonparallel Batchelor vortex basic flow model, so that comparisons between PSE and PSE-3D are possible; excellent agreement is shown in all aforementioned comparisons. Finally, the linear PSE-3D instability analysis is applied to a fully three-dimensional flow composed of a counter-rotating pair of nonparallel Batchelor vortices.
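The efficiency argument above can be illustrated with a minimal sketch (ours, not the paper's code; the grid size, uniform grid, and stencil are assumptions for illustration): a high-order finite-difference derivative operator is banded, so its fill ratio shrinks as 1/N, whereas a spectral collocation derivative matrix couples every node to every other and is fully dense.

```python
import numpy as np

N = 200            # number of grid points (arbitrary for illustration)
h = 1.0 / (N - 1)  # uniform grid spacing

# 4th-order central-difference first-derivative operator,
# interior stencil (1, -8, 0, 8, -1) / (12 h); boundary closures omitted.
D_fd = (np.diag(np.full(N - 2,  1.0), -2) +
        np.diag(np.full(N - 1, -8.0), -1) +
        np.diag(np.full(N - 1,  8.0),  1) +
        np.diag(np.full(N - 2, -1.0),  2)) / (12 * h)

nnz_fd = np.count_nonzero(D_fd)   # banded: about 4N entries
nnz_spectral = N * N              # dense spectral collocation matrix

print(f"finite differences: {nnz_fd} nonzeros ({nnz_fd / N**2:.1%} fill)")
print(f"spectral collocation: {nnz_spectral} nonzeros (100.0% fill)")
```

Stored in a compressed sparse format, such a banded operator is what allows a sparse linear algebra package of the kind described to outperform dense spectral operators.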
Abstract:
Turbulent mixing is an important issue in the study of geophysical phenomena because most fluxes arising in geophysical fluids are turbulent. We study turbulent mixing due to convection using a laboratory experimental model with two miscible fluids of different density, starting from a top-heavy density distribution. Because the fluids forming the initial unstable stratification are miscible, the turbulence produces molecular mixing. The denser fluid penetrates the lighter fluid layer and generates several forced plumes which are gravitationally unstable. As these turbulent plumes develop, their lateral interaction at the complex fractal surface between the dense and light fluids drives the growth of the mixing process.
Abstract:
The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent and to limit the consequences of any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment and incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular the accidents, considered to be plausible have been taken into account, and that the monitoring systems and engineered safety and safeguard systems will be capable of ensuring the safety goals. Probabilistic safety analysis, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective for comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) has demanded a more extended use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. This is where the theory of stimulated dynamics (TSD) intervenes, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). This methodology attempts to extend classical PSA by including accident dynamic analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency.
The application of this ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS supports accident dynamic analysis through simulation of nuclear accident sequences and operating procedures. Furthermore, it includes probabilistic quantification of fault trees and sequences, and integration and statistical treatment of risk metrics. SCAIS makes intensive use of code coupling techniques to join typical thermal-hydraulic analysis, severe accident, and probability calculation codes. The integration of accident simulation into the risk assessment process, which requires the use of complex nuclear plant models, is what makes the methodology so powerful, yet at the cost of an enormous increase in complexity. As the complexity of the process is concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations; this is the focus of the present work. This document presents work on the investigation of more efficient techniques applied to the risk assessment process within the ISA methodology. These techniques therefore have the primary goal of decreasing the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task, and because of time limitations the scope of the work had to be reduced. Therefore, some assumptions were made in order to work in simplified scenarios best suited for an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section then introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process.
Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, with full detail of their mathematical background and procedures. Later, the test case used is described and results from the application of the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
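To make the simulation-count issue concrete, consider a toy sketch (ours, not part of SCAIS; the Gaussian "peak variable" and the threshold of 2.0 are hypothetical stand-ins for an expensive transient simulation): plain Monte Carlo estimation of a damage probability converges with relative error shrinking only as 1/sqrt(n), which is why reducing the number of simulations matters when each one is a costly accident-sequence run.

```python
import random

def run_transient(rng):
    """Stand-in for one expensive accident-sequence simulation.

    Hypothetical model: the sequence ends in damage when a Gaussian
    peak safety variable exceeds a fixed threshold.
    """
    return rng.gauss(0.0, 1.0) > 2.0

def estimate_damage_probability(n_sims, seed=42):
    """Plain Monte Carlo: fraction of simulated sequences ending in damage."""
    rng = random.Random(seed)
    hits = sum(run_transient(rng) for _ in range(n_sims))
    return hits / n_sims

# The true exceedance probability of this stand-in is about 0.0228; a small
# simulation budget gives a noisy estimate, a large one improves it only slowly.
for n in (100, 10_000):
    print(n, estimate_damage_probability(n))
```

The slow 1/sqrt(n) convergence of this brute-force estimator is precisely the motivation for the more efficient sampling techniques investigated in the work.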
Abstract:
In this paper we prove that if U is an open subset of a metrizable locally convex space E of infinite dimension, the space H(U) of all holomorphic functions on U, endowed with the Nachbin–Coeuré topology τδ, is not metrizable. Our result can be applied to show that, for all the usual topologies, H(U) is metrizable if and only if E has finite dimension.
Abstract:
Lacunarity, as a means of quantifying textural properties of spatial distributions, suggests a classification into three main classes of the most abundant soils, which cover 92% of Europe. Soils with a well-defined self-similar structure, of the linear class, are related to widespread spatial patterns that are nondominant but ubiquitous at continental scale. Fractal techniques have been increasingly and successfully applied to identify and describe spatial patterns in the natural sciences. However, objects with the same fractal dimension can show very different optical properties because of their spatial arrangement. This work focuses primary attention on the geometrical structure of the geographical patterns of soils in Europe. We made use of the European Soil Database to estimate lacunarity indexes of the most abundant soils, which cover 92% of the surface of Europe, and investigated textural properties of their spatial distribution. We observed three main classes corresponding to three different patterns displayed by the graphs of the lacunarity functions: linear, convex, and mixed. They correspond, respectively, to homogeneous or self-similar patterns, heterogeneous or clustered patterns, and those whose behavior can change over different ranges of scales. Finally, we discuss the pedological implications of this classification.
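The lacunarity functions discussed here follow the standard gliding-box estimator, Λ(r) = E[M(r)²] / E[M(r)]², where M(r) is the mass inside an r×r box slid over the map. A minimal sketch (ours; the toy binary maps and grid size are illustrative, not soil data, and the paper's exact estimator may differ):

```python
import numpy as np

def lacunarity(grid, r):
    """Gliding-box lacunarity Lambda(r) = E[M^2] / E[M]^2 of a 2-D binary map."""
    n = grid.shape[0]
    masses = np.array([grid[i:i + r, j:j + r].sum()
                       for i in range(n - r + 1)
                       for j in range(n - r + 1)], dtype=float)
    return (masses ** 2).mean() / masses.mean() ** 2

# A fully occupied map is perfectly homogeneous: every box mass equals r^2,
# so Lambda(r) = 1 at every scale.
full = np.ones((16, 16), dtype=int)
print([round(lacunarity(full, r), 3) for r in (2, 4, 8)])  # [1.0, 1.0, 1.0]

# A clustered map (all mass in one corner) has varying box masses,
# hence lacunarity strictly greater than 1.
clustered = np.zeros((16, 16), dtype=int)
clustered[:4, :4] = 1
print(lacunarity(clustered, 4) > lacunarity(full, 4))  # True
```

The shape of Λ(r) versus r (on log-log axes) is what distinguishes the linear, convex, and mixed classes described in the abstract.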
Abstract:
We propose a new measure to characterize the dimension of complex networks based on the ergodic theory of dynamical systems. This measure is derived from the correlation sum of a trajectory generated by a random walker navigating the network, and extends the classical Grassberger-Procaccia algorithm to the context of complex networks. The method is validated with reliable results for both synthetic networks and real-world networks such as the world air-transportation network or urban networks, and provides a computationally fast way of estimating the dimensionality of networks that relies only on the local information provided by the walkers.
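A hedged sketch of the idea as we read it (not the authors' code; the ring-lattice test graph, walk length, and radii are our choices): sample a random-walk trajectory on the network, then form the correlation sum C(r), the fraction of trajectory pairs whose shortest-path distance is at most r. The dimension estimate is then read off the scaling of C(r) with r, exactly as in the Grassberger-Procaccia algorithm.

```python
import random
from collections import deque

def bfs_distances(adj, src):
    """Shortest-path (hop) distances from src via breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def correlation_sum(adj, walk, r):
    """Fraction of trajectory pairs at shortest-path distance <= r."""
    dists = {u: bfs_distances(adj, u) for u in set(walk)}
    pairs = within = 0
    for i in range(len(walk)):
        for j in range(i + 1, len(walk)):
            pairs += 1
            if dists[walk[i]][walk[j]] <= r:
                within += 1
    return within / pairs

# Ring lattice with 100 nodes: a one-dimensional structure, so C(r) should
# grow roughly linearly in r once the walker has spread out.
n = 100
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rng = random.Random(1)
walk = [0]
for _ in range(1500):
    walk.append(rng.choice(adj[walk[-1]]))

c4, c8 = correlation_sum(adj, walk, 4), correlation_sum(adj, walk, 8)
print(c4, c8, c8 / c4)
```

On a D-dimensional structure C(r) grows like r^D, so the slope of log C(r) against log r gives the dimension; here only the local neighbor lists of visited nodes are needed, matching the abstract's point about local information.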
Abstract:
We show the existence of sets with n points (n ≥ 4) for which every convex decomposition contains more than (35/32)n − 3/2 polygons, which refutes the conjecture that for every set of n points there is a convex decomposition with at most n + C polygons. For sets having exactly three extreme points we show that more than n + √(2(n − 3)) − 4 polygons may be necessary to form a convex decomposition.
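Why the first bound refutes the conjecture can be made explicit (a short check of ours, using only the bounds quoted in the abstract): since 35/32 > 1, the lower bound eventually exceeds n + C for every constant C,

```latex
\[
  \frac{35}{32}\,n - \frac{3}{2} - (n + C)
  \;=\; \frac{3}{32}\,n - \frac{3}{2} - C
  \;\longrightarrow\; \infty
  \qquad (n \to \infty),
\]
```

so for n large enough every convex decomposition of such a set needs more than n + C polygons, whatever the constant C.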
Abstract:
This is an account of some aspects of the geometry of Kähler affine metrics, based on considering them as smooth metric measure spaces and applying the comparison geometry of Bakry-Émery Ricci tensors. Such techniques yield a version for Kähler affine metrics of Yau's Schwarz lemma for volume forms. By a theorem of Cheng and Yau, there is a canonical Kähler affine Einstein metric on a proper convex domain, and the Schwarz lemma gives a direct proof of its uniqueness up to homothety. The potential for this metric is a function canonically associated to the cone, characterized by the property that its level sets are hyperbolic affine spheres foliating the cone. It is shown that for an n-dimensional cone, a rescaling of the canonical potential is an n-normal barrier function in the sense of interior point methods for conic programming. It is also explained how to construct from the canonical potential Monge-Ampère metrics of both Riemannian and Lorentzian signatures, and a mean curvature zero conical Lagrangian submanifold of the flat para-Kähler space.
Abstract:
Taking advantage of economic opportunities has led to numerous conflicts between society and business in various geographies of the world. Companies have developed social responsibility programs to prevent and manage these types of problems. However, some authors note that these programs lack a strategic vision. Starting from the Working with People model, created in the field of rural development planning, this paper proposes a methodology for preventing the generation of social conflicts from within business strategy: the territorial dimension. The proposal emphasizes that support for local development prevents the generation of social conflicts. Finally, we analyze an experience in Peru, a country characterized in recent years by high economic growth and also by the presence of social conflicts that have halted business ventures.