13 results for Time equivalent approach

at Universidad Politécnica de Madrid


Relevance:

90.00%

Publisher:

Abstract:

The development of a global instability analysis code coupling a time-stepping approach, as applied to the solution of BiGlobal and TriGlobal instability analysis [1, 2], with finite-volume-based spatial discretization, as used in standard aerodynamics codes, is presented. The key advantage of the time-stepping method over matrix-formulation approaches is that the former avoids the computer-storage issues associated with the latter methodology. To date, both approaches have been used successfully to analyze instability in complex geometries, although their relative advantages have never been quantified. The ultimate goal of the present work is to address this issue in the context of spatial discretization schemes typically used in industry. The time-stepping approach of Chiba [3] has been implemented in conjunction with two direct numerical simulation algorithms: one based on the high-order methods typically used in this context and another based on low-order methods representative of those in common industrial use. The two codes have been validated against solutions of the BiGlobal eigenvalue problem (EVP), and it has been shown that small errors in the base flow do not significantly affect the results. As a result, a three-dimensional compressible unsteady second-order code for global linear stability analysis has been successfully developed, based on finite-volume spatial discretization and the time-stepping method, with the ability to study complex geometries by means of unstructured and hybrid meshes.
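The core of the time-stepping approach can be sketched with a matrix-free Arnoldi iteration: the eigensolver only ever sees the action of the flow propagator over a horizon T, supplied by a black-box time integrator. The 4×4 system and explicit Euler stepper below are stand-ins for the linearized equations and the DNS/CFD code, not the paper's solver.

```python
import numpy as np
from scipy.linalg import block_diag
from scipy.sparse.linalg import LinearOperator, eigs

# Stand-in for the linearized operator: in a real time-stepping stability
# code this Jacobian is never formed; only its action over a horizon T is
# available, through the CFD time integrator.
A = block_diag([[-0.1, 1.0], [-1.0, -0.1]],   # least-damped mode: -0.1 +/- 1i
               [[-2.0, 0.0], [0.0, -3.0]])    # strongly damped modes
T, n_steps = 1.0, 1000
dt = T / n_steps

def propagate(u):
    """Black-box time stepper u(0) -> u(T) (here: explicit Euler)."""
    for _ in range(n_steps):
        u = u + dt * (A @ u)
    return u

# Krylov subspace generated by repeated calls to the time stepper: ARPACK's
# Arnoldi iteration sees only matrix-vector products with exp(A*T).
op = LinearOperator(A.shape, matvec=propagate)
mu, _ = eigs(op, k=2, which="LM")   # mu = exp(lambda * T), largest magnitude
lam = np.log(mu) / T                # recover the EVP eigenvalues
print(np.round(lam, 3))             # close to -0.1 +/- 1i
```

With any industrial finite-volume code supplying `propagate`, the same few lines recover the leading global modes without ever storing the Jacobian, which is precisely the storage advantage claimed above.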

Relevance:

90.00%

Publisher:

Abstract:

Most flows of engineering relevance still remain unexplored in a global instability theory context, for two reasons: first, the difficulties associated with the analysis of turbulent flows and, second, the formidable computational resources required for the solution of the eigenvalue problem associated with the instability analysis of three-dimensional base flows, also known as the TriGlobal problem. In this thesis, the problem associated with three-dimensionality is addressed by developing a general approach to large-scale global linear instability analysis: time-stepping methods, developed in this work, are coupled with the second-order computational fluid dynamics codes commonly employed in industry. The methodology solves the eigenvalue problem associated with the instability analysis by projection onto Krylov subspaces, with the particularity that these subspaces are generated through the time integration of an initial vector using any computational fluid dynamics code. Three flows, challenging in terms of the required computational resources and their physical complexity, have been chosen to demonstrate the present methodology: (i) the flow inside a wall-bounded three-dimensional lid-driven cavity, (ii) the flow past a cylinder fitted with helical strakes along its span and (iii) the flow over an inhomogeneous three-dimensional open cavity. Results in excellent agreement with the literature have been obtained for the three-dimensional lid-driven cavity by coupling the time-stepping method developed here with the incompressible solver of the open-source toolbox OpenFOAM, which has served as validation. Moreover, the application of the present time-stepping methodology to the other two cases has provided, for the first time, physical insight into the three-dimensional transition of these open flows. In addition, the methodology has been adapted to solve adjoint TriGlobal problems, enabling flow control based on modifications of the global instabilities; validation and TriGlobal examples are presented. Finally, it has been demonstrated that the moderate amount of computational resources required for the solution of the TriGlobal eigenvalue problem using this method, together with its versatility in coupling to any aerodynamic code, enables the instability analysis and control of complex flows of industrial relevance.

Relevance:

90.00%

Publisher:

Abstract:

The combination of minimum-time control and a multiphase converter is a favorable option for dc-dc converters in applications where output voltage variation is required, such as RF amplifiers and dynamic voltage scaling in microprocessors, owing to its fast dynamic response. In this paper, an improved minimum-time control approach for multiphase buck converters, based on the charge balance technique and aimed at fast output voltage transitions, is presented. Compared with the traditional method, the proposed control takes into account the phase delay and the current ripple in each phase. By investigating the behavior of the multiphase converter during the voltage transition, it resolves the problem of current imbalance after the transient, which can lead to a long settling time of the output voltage. The restriction of this control is that the output voltages the converter can provide are tied to the number of phases, because only the duty cycles at which the multiphase converter achieves total ripple cancellation are used in this approach. The model of the proposed control is introduced, and the design constraints of the buck converter's filter for this control are discussed. To prove the concept, a four-phase buck converter was implemented, and experimental results validating the proposed control method are presented. The application of this control to RF envelope tracking is also presented.
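The ripple-cancellation restriction can be made concrete: in an N-phase interleaved buck the phase currents are shifted by 360/N degrees, and the summed ripple cancels exactly at duty cycles D = k/N, so the control can only target the corresponding discrete output voltages. A minimal sketch with illustrative values (not the paper's prototype):

```python
# Total ripple cancellation in an N-phase interleaved buck occurs only at
# duty cycles D = k/N, so the reachable outputs are V_out = (k/N) * V_in.
def ripple_free_outputs(v_in, n_phases):
    """Output voltages with total ripple cancellation, i.e. D = k/N."""
    return [k / n_phases * v_in for k in range(1, n_phases)]

# Four-phase converter from a 12 V input (illustrative values):
print(ripple_free_outputs(12.0, 4))   # [3.0, 6.0, 9.0]
```

This makes the stated trade-off visible: more phases give a finer grid of ripple-free output voltages.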

Relevance:

80.00%

Publisher:

Abstract:

This article presents a time-domain approach to the flutter analysis of a missile-type wing/body configuration with concentrated structural non-linearities. The missile wing is considered fully movable, and its rotation angle includes a structural freeplay-type non-linearity. Although a general formulation for flexible configurations is developed, only two rigid degrees of freedom are taken into account for the results: pitching of the whole wing/body configuration and the wing rotation angle around its hinge. An unsteady aerodynamic model based on the slender-body approach is used to calculate the generalized aerodynamic forces. Limit-cycle oscillations and chaotic motion below the flutter speed are observed in this study.
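The freeplay-type non-linearity in the wing rotation angle is a dead band in the restoring hinge moment; a minimal sketch of such a piecewise-linear moment, with hypothetical stiffness and gap values, is:

```python
def freeplay_moment(theta, k_theta, delta):
    """Restoring hinge moment with freeplay: zero stiffness inside the
    dead band |theta| <= delta, linear stiffness k_theta outside."""
    if theta > delta:
        return k_theta * (theta - delta)
    if theta < -delta:
        return k_theta * (theta + delta)
    return 0.0

# Inside the gap the wing rotates freely; outside, the spring engages.
print(freeplay_moment(0.01, 100.0, 0.02))   # 0.0 (inside the dead band)
```

Feeding this piecewise moment into the two-degree-of-freedom equations of motion is what produces the limit-cycle and chaotic behavior reported below the linear flutter speed.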

Relevance:

80.00%

Publisher:

Abstract:

Logic programming systems which exploit and-parallelism among non-deterministic goals rely on notions of independence among those goals in order to ensure certain efficiency properties. "Non-strict" independence (NSI) is a more relaxed notion than the traditional notion of "strict" independence (SI) which still ensures the relevant efficiency properties and can allow considerably more parallelism than SI. However, all compilation technology developed to date has been based on SI, because of the intrinsic complexity of exploiting NSI: unlike SI, NSI cannot be determined "a priori". This paper fills this gap by developing a technique for compile-time detection and annotation of NSI. It also proposes algorithms for combined compile-time/run-time detection, presenting novel run-time checks for this type of parallelism. In addition, a transformation procedure to eliminate shared variables among parallel goals is presented, aimed at performing as much work as possible at compile-time. The approach is based on knowledge of certain properties regarding the run-time instantiations of program variables (sharing and freeness) for which compile-time analysis technology is available, with new approaches currently being proposed. Thus, the paper does not deal with the analysis itself, but rather with how the analysis results can be used to parallelize programs.
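The distinction between the two notions can be illustrated with a toy check (a simplification for intuition, not the paper's annotation algorithm): SI forbids any shared unbound variable between goals to be run in parallel, while NSI tolerates sharing as long as at most one of the parallel goals can bind each shared variable.

```python
def strictly_independent(goal_vars):
    """SI check: goal_vars is one set of unbound variables per goal;
    the goals are strictly independent when no variable is shared."""
    seen = set()
    for vs in goal_vars:
        if seen & vs:
            return False
        seen |= vs
    return True

def non_strictly_independent(binding_goals):
    """NSI check: binding_goals maps each shared variable to the set of
    goals that may bind it; NSI holds when at most one goal can bind
    each shared variable (the others only read it)."""
    return all(len(goals) <= 1 for goals in binding_goals.values())

# p(X, Y) & q(Y): the goals share Y, so they are not SI; they are still
# NSI if sharing/freeness analysis shows that only q may bind Y.
print(strictly_independent([{"X", "Y"}, {"Y"}]))   # False
print(non_strictly_independent({"Y": {1}}))        # True
```

This is also why NSI needs sharing and freeness information: deciding which goal can bind a shared variable is exactly what the compile-time analysis supplies.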

Relevance:

80.00%

Publisher:

Abstract:

Ecology has not only revealed environmental problems; it has confirmed the need for a new harmony among human beings themselves, and between them and nature and all the beings that inhabit it. We need a new contract governing our relationship with Nature (Serres [1]) and a new ethics for our lives (Guattari [2]). Environmental ethics has given us a universal and supra-generational vision of the management of nature and, as a consequence, a new way to construct our 'second' nature, which is architecture. What, essentially, does this new ethics demand of architecture? This is a crucial moment to reconsider the aims of architecture, because the 'eco' is producing great changes. Does this post-ecological era imply a particular ethics, that is, one concerning its ends and means? Why, for what, for whom and how should we make the architecture of our time? It is time to approach the discourse of eco-architecture critically, and even to rethink the very limits of architecture. The current development of environmental knowledge is essentially technical and utilitarian, but is the challenge only technical? Is the sum of environmental, social, economic and cultural aspects enough to define it? Are there clues that can give us the ethical dimension of this technical-empirical approach? Do we know what we are doing when we apply this knowledge? And, above all, what is the meaning of what we are doing? The thesis proposed can be summarized as follows: in accordance with our current knowledge of Nature, the architecture of our time must reconsider its ends and means, since environmental ethics is defining new objectives. To support and deepen this claim, the thesis analyzes what the relationships between Ethics, Nature and Architecture (Fig. 1) are like today, which will provide the clues to the ethical criteria (as to ends and means) that must define architecture in the age of ecology.

Relevance:

40.00%

Publisher:

Abstract:

Although several profiling techniques for identifying performance bottlenecks in logic programs have been developed, they are generally not automatic, and in most cases they do not provide enough information for identifying the root causes of such bottlenecks. This complicates using their results to guide performance improvement. We present a profiling method and tool that provides such explanations. Our profiler associates cost centers with certain program elements and can measure different types of resource-related properties that affect performance, preserving the precedence of cost centers in the call graph. It includes an automatic method for detecting procedures that are performance bottlenecks. The profiling tool has been integrated in a previously developed run-time checking framework to allow verification of certain properties when they cannot be verified statically. The approach allows checking global computational properties which require complex instrumentation tracking information about previous execution states, such as, for example, that the execution time accumulated by a given procedure does not exceed a given bound. We have built a prototype implementation, integrated it in the Ciao/CiaoPP system and successfully applied it to performance improvement, automatic optimization (e.g., resource-aware specialization of programs), run-time checking, and debugging of global computational properties (e.g., resource usage) in Prolog programs.
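The idea of cost centers combined with a run-time check on a global computational property can be sketched in a few lines; the decorator below illustrates the concept only and is not the Ciao/CiaoPP interface.

```python
import time
from collections import defaultdict

# Accumulated resource usage per cost center (here: wall-clock time).
accumulated = defaultdict(float)

def cost_center(name, time_bound=float("inf")):
    """Attribute the wall-clock time of the decorated procedure to a named
    cost center and check, at run time, the global property that its
    accumulated time stays below a bound. Illustrative sketch only; note
    that nested (e.g. recursive) calls are attributed to the same center,
    so their times overlap."""
    def wrap(fn):
        def inner(*args, **kwargs):
            t0 = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                accumulated[name] += time.perf_counter() - t0
                assert accumulated[name] <= time_bound, \
                    f"cost center '{name}' exceeded its time bound"
        return inner
    return wrap

@cost_center("fib")
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(10)
print(accumulated["fib"] > 0.0)   # True: time was attributed to the center
```

A run-time framework of this kind is useful exactly where the abstract says static verification fails: the bound is enforced while the program runs.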

Relevance:

40.00%

Publisher:

Abstract:

We propose a method to measure the irreversibility of real-valued time series which combines two different tools: the horizontal visibility algorithm and the Kullback-Leibler divergence. The method maps a time series to a directed network according to a geometric criterion. The degree of irreversibility of the series is then estimated by the Kullback-Leibler divergence (i.e. the distinguishability) between the in- and out-degree distributions of the associated graph. The method is computationally efficient and does not require any ad hoc symbolization process. We find that the method correctly distinguishes between reversible and irreversible stationary time series, including analytical and numerical studies of its performance for: (i) reversible stochastic processes (uncorrelated and linearly correlated Gaussian processes), (ii) irreversible stochastic processes (a discrete flashing ratchet in an asymmetric potential), (iii) reversible (conservative) and irreversible (dissipative) chaotic maps, and (iv) dissipative chaotic maps in the presence of noise. Two alternative graph functionals, the degree and the degree-degree distributions, can be used as the argument of the Kullback-Leibler divergence. The former is simpler and more intuitive and can be used as a benchmark, but in the case of an irreversible process with null net current, the degree-degree distribution has to be considered to identify the irreversible nature of the series.
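The method is short enough to sketch directly: build the directed horizontal visibility graph and compare the in- and out-degree distributions with the Kullback-Leibler divergence. For a reversible series (here, uncorrelated noise) the divergence should be close to zero.

```python
import numpy as np

def dhvg_degrees(x):
    """Directed horizontal visibility graph of series x: node i is linked
    forward to j > i when every intermediate value lies below both x[i]
    and x[j]. Returns the out- and in-degree of each node."""
    n = len(x)
    k_out = np.zeros(n, dtype=int)
    k_in = np.zeros(n, dtype=int)
    for i in range(n):
        top = -np.inf                 # running max of intermediate values
        for j in range(i + 1, n):
            if top < x[i] and top < x[j]:
                k_out[i] += 1
                k_in[j] += 1
            top = max(top, x[j])
            if top >= x[i]:           # nothing beyond j is visible from i
                break
    return k_out, k_in

def kl_divergence(p_counts, q_counts, eps=1e-12):
    """KL divergence between two empirical degree histograms."""
    p = p_counts / p_counts.sum()
    q = q_counts / q_counts.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], eps))))

rng = np.random.default_rng(1)
x = rng.random(5000)                  # reversible: uncorrelated noise
k_out, k_in = dhvg_degrees(x)
kmax = int(max(k_out.max(), k_in.max()))
d = kl_divergence(np.bincount(k_out, minlength=kmax + 1).astype(float),
                  np.bincount(k_in, minlength=kmax + 1).astype(float))
print(d)   # close to zero for a statistically reversible series
```

For an irreversible series (e.g. the flashing ratchet mentioned above), the same estimator yields a divergence bounded away from zero as the series length grows.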

Relevance:

40.00%

Publisher:

Abstract:

Choosing an appropriate accounting system for manufacturing has always been a challenge for managers. In this article we compare three accounting systems designed since 1980 to address the problems of the traditional accounting system. First, we present a short overview of the background and definitions of the three systems: Activity-Based Costing (ABC), Time-Driven Activity-Based Costing (TD-ABC) and Lean Accounting. Comparisons are made based on the three basic roles of information generated by accounting systems: financial reporting, decision making, and operational control and improvement. The analysis in this paper reveals how decisions are made over the value stream in companies using Lean Accounting, while decisions under ABC are taken at the individual product level; finally, we show how TD-ABC covers both product and process levels for decision making. In addition, this paper shows the importance of nonfinancial measures for operational control and improvement under the Lean Accounting and TD-ABC methods, whereas ABC relies mostly on financial measures in this context.
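The contrast drawn above can be illustrated with the arithmetic at the heart of TD-ABC (all figures invented): a single capacity cost rate is multiplied by the unit times of each activity, which is what lets the method cost both products and processes.

```python
def tdabc_costs(total_cost, capacity_minutes, minutes_per_activity):
    """Time-Driven ABC sketch: one capacity cost rate (cost per minute)
    applied to the estimated unit time of each activity."""
    rate = total_cost / capacity_minutes
    return {activity: rate * t for activity, t in minutes_per_activity.items()}

# Invented department figures: 56,000 EUR per quarter over 700,000 usable
# minutes gives a rate of 0.08 EUR/min, then applied to two activities.
costs = tdabc_costs(56_000.0, 700_000.0,
                    {"process_order": 8.0, "handle_inquiry": 44.0})
print(costs)
```

Because the unit times are process parameters rather than product-level allocations, updating a process changes the cost model directly, which is the product-and-process coverage noted above.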

Relevance:

40.00%

Publisher:

Abstract:

The tools of chaos analysis have been conditioned by the type of signals obtained, which in almost every case have analogue characteristics. In certain cases, however, a chaotic digital signal is obtained, and these signals need a different approach from conventional analogue ones. The main objective of this paper is to present some possible approaches to the study of such signals and how information about their characteristics may be obtained in the most straightforward way possible. We have obtained digital chaotic signals from an Optical Logic Cell with feedback between the output and one of the possible control gates. This chaos has been reported in several papers, and its characteristics have been employed as a possible method for securing communications and as a way to perform encryption. In both cases, perturbations in the transmission medium caused problems both for the synchronization of the chaotic generators at emitter and receiver and for the recovery of the information data. A proposed way to analyze the presence of a perturbation is to study the noise content of the transmitted signal and to implement a way to eliminate it. In the present case, the digital signal is converted to a multilevel one by grouping bits into packets of 8 bits and applying conventional time-frequency analysis methods to them. The results give information about changes in the signal's characteristics and hence about the noise or perturbations present. Representations equivalent to the phase and Feigenbaum diagrams for digital signals are employed in this case.
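The pre-processing step described above, grouping the binary signal into 8-bit packets so that a 256-level sequence can be fed to conventional time-frequency tools, can be sketched as follows; the random bit stream stands in for the Optical Logic Cell output.

```python
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=8 * 1024, dtype=np.uint8)  # stand-in bit stream

# Group the digital signal into packets of 8 bits: each packet becomes one
# of 256 amplitude levels, i.e. a conventional multilevel signal.
levels = np.packbits(bits).astype(float)

# Standard time-frequency analysis now applies directly to the level sequence.
f, t, Sxx = spectrogram(levels, fs=1.0, nperseg=128)
print(levels.size, Sxx.shape[0])   # 1024 levels, 65 frequency bins
```

Changes in the noise or perturbation content of the transmitted signal then show up as changes in this time-frequency map, which is the diagnostic used in the paper.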

Relevance:

40.00%

Publisher:

Abstract:

As advanced Cloud services become mainstream, the contribution of data centers to the overall power consumption of modern cities is growing dramatically: the average consumption of a single data center is equivalent to the energy consumption of 25,000 households. Modeling the power consumption of these infrastructures is crucial to anticipate the effects of aggressive optimization policies, but accurate and fast power modeling is a complex challenge for high-end servers that analytical approaches do not yet satisfy. This work proposes an automatic method, based on Multi-Objective Particle Swarm Optimization, for the identification of power models of enterprise servers in Cloud data centers. Our approach, as opposed to previous procedures, does not only consider workload consolidation for deriving the power model, but also incorporates other non-traditional factors, such as the static power consumption and its dependence on temperature. Our experimental results show that we obtain slightly better models than classical approaches while simultaneously simplifying the power model structure, and thus the number of sensors needed, which is very promising for short-term energy prediction. This work, validated with real Cloud applications, broadens the possibilities of deriving efficient energy-saving techniques for Cloud facilities.
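A hedged sketch of the kind of server power model discussed above, with a temperature-dependent static (leakage) term plus a utilization-driven dynamic term; the coefficients here are invented, whereas in the paper they would be identified by the multi-objective particle swarm optimizer from measured data.

```python
# Illustrative server power model (all coefficients are invented):
# static power with a temperature-dependent leakage term, plus a term
# proportional to CPU utilization.
def server_power(u_cpu, temp_c, p0=80.0, alpha=1.2, beta=120.0):
    """Estimated power in watts for utilization u_cpu in [0, 1] and inlet
    temperature temp_c in Celsius, relative to a 25 C baseline."""
    static = p0 + alpha * (temp_c - 25.0)   # leakage grows with temperature
    dynamic = beta * u_cpu                  # workload-proportional term
    return static + dynamic

print(server_power(u_cpu=0.5, temp_c=45.0))   # about 164 W
```

Keeping the model this simple is the point made above: fewer terms mean fewer sensors, at a small cost in accuracy.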

Relevance:

40.00%

Publisher:

Abstract:

In order to reduce costs and time while improving quality, durability and sustainability in structural concrete construction, a widely used material nowadays, special care must be taken in some crucial phases of the project and its execution, including the structural design and calculation and the dosage, pouring and curing of the concrete; another important aspect is the proper design and execution of assembly drawings and construction details. The framework, the name designating the whole cage of reinforcement bars already assembled as shown in the drawings, can be made up of several components and implies a higher or lower degree of industrialization. The framework costs constitute about one third of the price per cubic meter of placed concrete. The best solutions, from all points of view, are clearly those involving easier processing to achieve the same goal, and consequently a high degree of industrialization, meaning quality and safety on site. This thesis provides an in-depth analysis of a relatively new type of plate anchorage known as headed reinforcement bars, which can potentially replace standard or L-shaped hooks, simplifying construction details and enabling a faster, more flexible, and therefore more economical assembly. A literature review on the topic and an overview of typical applications are provided, followed by some examples of specific applications in real projects. Since a rigorous theoretical formulation for the design plate dimensions has not yet been put forward, an equation is proposed for the side-face blowout strength of the anchorage, based on the capacity of concrete to carry concentrated loads in cases in which no transverse reinforcement is provided. The correlation of the calculated ultimate load with experimental results available in the literature is given. Moreover, the proposed formulation can be extended to cases in which a certain development length is available: using nonlinear finite element software oriented to the study of reinforced concrete, numerical tests on the bond-bearing interaction are performed. The thesis ends with tests of eight corner joints subjected to a closing moment, carried out in the Structures Laboratory of the Polytechnic University of Madrid, aiming to check whether the plate design as stated is adequate for these elements and whether an element with plate-anchored reinforcement is equivalent to one with a traditional construction detail.
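The proposed side-face blowout equation builds on the capacity of concrete to carry concentrated loads. The sketch below implements the general form of such bearing-capacity expressions (as in Eurocode 2's partially loaded areas rule), not the thesis's exact proposal; the plate and material values are invented.

```python
import math

def bearing_capacity(a_c0_mm2, a_c1_mm2, f_cd_mpa):
    """Concentrated-load resistance in N, in the general form of concrete
    bearing-capacity rules: spreading the load from the loaded area A_c0
    to a larger distribution area A_c1 enhances the capacity by
    sqrt(A_c1 / A_c0), capped at a factor of 3.0."""
    enhancement = math.sqrt(a_c1_mm2 / a_c0_mm2)
    return a_c0_mm2 * f_cd_mpa * min(enhancement, 3.0)

# 50 x 50 mm anchor plate, distribution area four times larger, f_cd = 20 MPa:
print(bearing_capacity(2500.0, 10000.0, 20.0) / 1e3)   # 100.0 kN
```

The thesis's contribution sits on top of this form: calibrating the expression for headed bars without transverse reinforcement and extending it when some development length is also available.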

Relevance:

40.00%

Publisher:

Abstract:

This paper addresses an uplink power control dynamic game in which each user's battery represents the system state, which changes over time following a discrete-time version of a differential game. To overcome the complexity of the dynamic game analysis, we focus on the concept of Dynamic Potential Games, showing that the game can be solved as an equivalent Multivariate Optimum Control Problem. The solution of this problem is quite interesting because the different users split their activity in time, avoiding higher interference and providing long-term fairness.
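The reduction can be illustrated with a toy two-user, two-slot version (utilities and constraints invented for illustration): instead of analyzing best responses, a single potential function is maximized over all schedules, and interference plus the battery constraint make the optimum split activity in time.

```python
from itertools import product

# Toy potential: transmitting yields a gain, simultaneous transmission in
# the same slot causes interference (all values invented).
def potential(s1, s2, gain=1.0, interference=2.0):
    return sum(gain * (a + b) - interference * a * b for a, b in zip(s1, s2))

slots = 2
# Battery state limits each user to a single transmission: a crude
# discrete-time analogue of the energy constraint in the dynamic game.
feasible = [s for s in product([0, 1], repeat=2 * slots)
            if sum(s[:slots]) <= 1 and sum(s[slots:]) <= 1]

# Equilibrium via the equivalent optimization problem: maximize the
# potential over the joint schedule of both users.
best = max(feasible, key=lambda s: potential(s[:slots], s[slots:]))
print(best[:slots], best[slots:])   # the users transmit in different slots
```

Even in this tiny instance the optimum places the two users in different slots, the time-splitting and fairness behavior reported in the paper.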