18 results for Analytical mechanics
Abstract:
In the present work a seismic retrofitting technique is proposed for masonry-infilled reinforced concrete frames, based on the replacement of the infill panels by K-bracing with a vertical shear link. The performance of this technique is evaluated through experimental tests. A simplified numerical model for structural damage evaluation is also formulated according to the notions and principles of continuum damage mechanics, and the model is calibrated with the experimental results. The tests show that the proposed technique provides excellent energy dissipation capacity, and the numerical predictions of the proposed model are in good agreement with the experimental results.
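As background only (the paper's specific damage model is not reproduced here), continuum damage mechanics typically describes stiffness degradation through a scalar damage variable d and the effective stress concept; a minimal uniaxial sketch is:

```latex
\sigma = (1-d)\,E\,\varepsilon, \qquad
\bar{\sigma} = \frac{\sigma}{1-d}, \qquad 0 \le d \le 1
```

Here d = 0 corresponds to the undamaged material and d → 1 to complete loss of load-carrying capacity; models of this family are calibrated by fitting the evolution law of d to test data, as done in the paper.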
Abstract:
We present analytical formulas to estimate the variation of achieved deflection for an Earth-impacting asteroid following a continuous tangential low-thrust deflection strategy. Relatively simple analytical expressions are obtained with the aid of asymptotic theory and the use of Peláez orbital elements set, an approach that is particularly suitable to the asteroid deflection problem and is not limited to small eccentricities. The accuracy of the proposed formulas is evaluated numerically showing negligible error for both early and late deflection campaigns. The results will be of aid in planning future low-thrust asteroid deflection missions
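For orientation (standard astrodynamics background, not the paper's Peláez-element formulas): a tangential acceleration f_t changes the semi-major axis at the rate given by Gauss's variational equation, and for a near-circular orbit the accumulated along-track drift grows quadratically with the lead time t:

```latex
\frac{da}{dt} = \frac{2\,a^{2} v}{\mu}\, f_t, \qquad
\Delta s \approx \tfrac{3}{2}\, f_t\, t^{2}
```

with v the orbital speed and μ the gravitational parameter; this quadratic scaling is what makes early deflection campaigns dramatically more effective than late ones.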
Abstract:
The use of modular or ‘micro’ maximum power point tracking (MPPT) converters at module level in series association, commercially known as “power optimizers”, allows the individual adaptation of each panel to the load, solving part of the problems related to partial shading and to different tilt and/or orientation angles of the photovoltaic (PV) modules. This is particularly relevant in building-integrated PV systems. This paper presents useful behavioural analytical studies of cascaded MPPT converters and evaluation test results of a prototype developed under a Spanish national research project. On the one hand, this work focuses on the development of new useful expressions which can be used to identify the behaviour of individual MPPT converters applied to each module and connected in series in a typical grid-connected PV system. On the other hand, a novel characterization method of MPPT converters is developed, and experimental results of the prototype are obtained when individual partial shading is applied and the converters are connected in a typical grid-connected PV array.
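A minimal numerical sketch (illustrative values, not from the paper) of why series-connected power optimizers decouple each module from partial shading: with ideal lossless converters the string current is common, and each converter rebalances its output voltage so that its module still delivers its full MPP power.

```python
# Hypothetical illustration: ideal, lossless module-level DC/DC converters
# in series. Each converter delivers its module's MPP power P_i; the series
# connection forces a common string current I, so each converter's output
# voltage adapts as V_i = P_i / I.

P_mpp = [300.0, 300.0, 120.0]   # W; third module partially shaded (assumed values)
V_bus = 400.0                   # V; DC bus voltage imposed by the inverter (assumed)

P_total = sum(P_mpp)            # ideal converters pass the full MPP power
I_string = P_total / V_bus      # common series current

for i, P in enumerate(P_mpp):
    V_out = P / I_string        # each converter rebalances its output voltage
    print(f"module {i}: P = {P:6.1f} W, converter output V = {V_out:6.1f} V")
```

The shaded module simply contributes a lower output voltage, instead of dragging down the current of the whole string as it would in a conventional series connection.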
Abstract:
The aim of this article is to propose an analytical approximate squeeze-film lubrication model of the human ankle joint for a quick assessment of the synovial pressure field and the load-carrying capacity due to the squeeze motion. The model starts from the theory of boosted lubrication for human articular joints (Walker et al., Rheum Dis 27:512–520, 1968; Maroudas, Lubrication and wear in joints. Sector, London, 1969) and takes into account the fluid transport across the articular cartilage, using Darcy's equation to describe the synovial fluid motion through a porous cartilage matrix. The human ankle joint is assumed to be cylindrical, enabling motion in the sagittal plane only. The proposed model is based on a modified Reynolds equation; its integration provides a quick assessment of the synovial pressure field, which shows good agreement with that obtained numerically (Hlavacek, J Biomech 33:1415–1422, 2000). The analytical integration allows a closed-form description of the synovial fluid film force and the calculation of the unsteady gap thickness.
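In schematic form (our sketch of the standard ingredients, not the paper's exact equation), a one-dimensional squeeze-film Reynolds equation augmented with a Darcy leakage term through a porous cartilage layer reads:

```latex
% h: film gap, p: synovial pressure, \mu: fluid viscosity,
% k: cartilage permeability, H: cartilage thickness
\frac{\partial}{\partial x}\!\left(\frac{h^{3}}{12\mu}\,
\frac{\partial p}{\partial x}\right)
= \frac{\partial h}{\partial t} + \frac{k}{\mu H}\, p
```

The leakage term on the right models the fluid lost across the porous matrix, which is the mechanism the boosted-lubrication theory adds to the classical squeeze-film problem.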
Abstract:
In the linear world, classical microwave circuit design relies on s-parameters, owing to their capability to successfully characterize the behavior of any linear circuit. The direct use of s-parameters in measurement systems and in linear simulation tools has facilitated their extensive use and success in the design and characterization of microwave circuits and subsystems. Nevertheless, despite the great success of s-parameters in the microwave community, the main drawback of this formulation is its limitation in predicting the behavior of real non-linear systems. Nowadays, the challenge for microwave designers is the development of an analogous framework that integrates non-linear modeling, large-signal measurement hardware and non-linear simulation environments, in order to extend the capabilities of s-parameters to the non-linear regime and thus provide the infrastructure for non-linear design and test in a reliable and efficient way. Recently, different attempts to provide this common platform have been introduced, such as the Cardiff approach and the Agilent X-parameters. Hence, this Thesis aims to demonstrate the capability of X-parameters to provide this non-linear design and test framework in a CAD-based oscillator context. Furthermore, the classical analysis and design of linear microwave transistor-based circuits is based on simple analytical approaches, involving the transistor s-parameters, that quickly provide an analytical solution for the input/output transistor loading conditions, as well as analytically determine fundamental parameters such as the stability factor, the power gain contours or the input/output match. The development of similar analytical design tools that extend the capabilities of s-parameters from small-signal design to non-linear applications is therefore a new challenge faced in the present work. Accordingly, the development of an analytical design framework based on load-independent X-parameters constitutes the core of this Thesis. These analytical non-linear design approaches would significantly improve current large-signal design procedures and dramatically decrease the required design time, yielding more efficient techniques.
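For orientation (the standard published form of the formalism, whose notation conventions may differ from those of the Thesis): while linear design rests on b = S a, the X-parameter model expresses each scattered harmonic wave B_pm (port p, harmonic m) as a describing function of the large-signal drive amplitude |A_11|:

```latex
B_{pm} = X^{(F)}_{pm}\big(|A_{11}|\big)\,P^{m}
 + \sum_{q,n} X^{(S)}_{pm,qn}\big(|A_{11}|\big)\,P^{m-n} A_{qn}
 + \sum_{q,n} X^{(T)}_{pm,qn}\big(|A_{11}|\big)\,P^{m+n} A^{*}_{qn}
```

where P = A_{11}/|A_{11}| carries the phase of the drive, and the X^{(S)} and X^{(T)} terms linearize the response around the large-signal operating point set by A_{11}, playing a role analogous to that of s-parameters around a DC bias point.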
Abstract:
A contribution is presented, intended to provide theoretical foundations for the ongoing efforts to employ global instability theory for the analysis of the classic boundary-layer flow, and to address the associated issue of appropriate inflow/outflow boundary conditions needed to close the PDE-based global eigenvalue problem in open flows. Starting from a theoretically clean and numerically simple application, in which results are also known analytically and thus serve as guidance for assessing the performance of the numerical methods employed herein, a sequence of issues is systematically built into the target application, until we arrive at one representative of the open systems whose instability is presently addressed by global linear theory applied to open flows; the latter application is neither tractable theoretically nor straightforward to solve by numerical means. Experience gained along the way is documented. It concerns the quantification of the departure of the numerical solution from the analytical one in the simple problem; the generation of numerical boundary layers at artificially truncated boundaries, no matter how far the latter are placed from the region of highest flow gradients; and, ultimately, the impractically large number of (direct and adjoint) modes necessary to project an arbitrary initial perturbation and follow its temporal evolution by a global analysis approach, a finding which may question the purported robustness, reported in the literature, of the recovery of optimal perturbations as part of global analyses yielding under-resolved eigenspectra.
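A minimal illustration (our own toy example, not the paper's flow problem) of the benchmarking strategy described above: discretize an operator whose spectrum is known in closed form and quantify the departure of the numerical eigenvalues from the analytical ones.

```python
# Toy benchmark (assumed setup): discretize the 1D diffusion operator u''
# on (0, 1) with homogeneous Dirichlet boundary conditions and compare the
# numerical spectrum with the analytical one, lambda_n = -(n*pi)^2.
import numpy as np

N = 200                                  # interior grid points
h = 1.0 / (N + 1)
main = -2.0 * np.ones(N)
off = np.ones(N - 1)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

lam = np.sort(np.linalg.eigvalsh(A))[::-1]   # least-damped modes first
for n in range(1, 4):
    exact = -(n * np.pi) ** 2
    print(f"n={n}: numerical {lam[n-1]:12.4f}, analytical {exact:12.4f}")
```

In the paper's spirit, this kind of controlled comparison isolates pure discretization error before boundary-condition and domain-truncation effects are layered on top.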
Abstract:
Earthquakes constitute one of the most important sources of dynamic loads acting on structures and foundations. When an earthquake occurs, the liberated energy generates seismic waves that can give rise to structural vibrations, settlements of building foundations, pressures on retaining walls, and possible sliding, uplifting or even overturning of structures. The soil can also liquefy, losing its capacity of support. The study of the effects of earthquakes on structures involves, owing to its soil-structure interaction nature, diverse disciplines such as Structural Analysis, Soil Mechanics and Earthquake Engineering. Aspects that have received limited research attention in relation to the behavior of structures subjected to earthquakes are the effects of non-linear soil behavior and geometric non-linearities such as sliding and uplifting of foundations. This Thesis starts with the study of the seismic pressures and potential displacements of retaining walls, comparing the predictions of two types of formulations and assessing their range of applicability and limitations: pseudo-static methods as proposed by Mononobe-Okabe (1929), with the contribution of Whitman-Liao (1985), and analytical formulations such as the one developed by Veletsos and Younan (1994) for rigid walls. The Thesis deals next with the effects of non-linear soil behavior on the dynamic stiffness of circular mat foundations, like those of the chimney of a Thermal Power Station or the reactor building of a Nuclear Power Plant, as a function of frequency and level of forces. Finally, the seismic response of these two structures, accounting for the potential sliding and uplifting of the foundation under a given earthquake, is studied, following an approach suggested by Wolf (1988). To carry out these studies, a number of special-purpose computer programs were developed (MUROSIS, VELETSOS, INTESES and SEPARSE); their listings and details are included in the appendices. The conclusions derived from these studies and recommendations for future work are presented in Chapter 6.
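A sketch of the classical Mononobe-Okabe coefficient in its standard textbook form (the well-known pseudo-static expression, not the thesis programs MUROSIS etc.):

```python
# Standard Mononobe-Okabe active seismic earth pressure coefficient.
import math

def k_ae(phi, delta, beta, i, kh, kv):
    """phi: soil friction angle, delta: wall friction angle, beta: wall
    inclination from vertical, i: backfill slope (all in radians);
    kh, kv: horizontal/vertical seismic coefficients."""
    theta = math.atan(kh / (1.0 - kv))          # equivalent seismic tilt
    num = math.cos(phi - theta - beta) ** 2
    root = math.sqrt(
        math.sin(phi + delta) * math.sin(phi - theta - i)
        / (math.cos(delta + beta + theta) * math.cos(i - beta))
    )
    den = (math.cos(theta) * math.cos(beta) ** 2
           * math.cos(delta + beta + theta) * (1.0 + root) ** 2)
    return num / den

# Example (assumed values): phi = 35 deg, delta = 17.5 deg,
# vertical wall, level backfill, kh = 0.15, kv = 0
print(k_ae(math.radians(35), math.radians(17.5), 0.0, 0.0, 0.15, 0.0))
```

Formulations like that of Veletsos and Younan replace this limit-equilibrium picture with an analytical elastodynamic solution, which is precisely the comparison the Thesis carries out.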
Abstract:
The twentieth century brought a new sensibility, characterized by the discredit of Cartesian rationality and the weakening of universal truths related to aesthetic values such as order, proportion and harmony. In the middle of the century, theorists such as Theodor Adorno, Rudolf Arnheim and Anton Ehrenzweig warned about the transformation under way in the artistic field. Contemporary aesthetics seemed to have a new goal: to deny the idea of art as an organized, finished and coherent structure. Order had lost its privileged position. Disorder, probability, arbitrariness, accidentality, randomness, chaos, fragmentation, indeterminacy... Gradually, new terms were coined by aesthetic criticism to explain what had been happening since the beginning of the century. The first essays on the matter sought to provide new interpretative models based on, among other arguments, the phenomenology of perception, the recent discoveries of quantum mechanics, the deeper layers of the psyche, or information theory. Overall, they were worthy attempts to give theoretical content to a situation as obvious as it was devoid of a founding charter. Finally, in 1962, Umberto Eco brought together all these efforts by proposing a single theoretical frame in his book Opera Aperta. According to his point of view, all the aesthetic production of the twentieth century had one characteristic in common: its capacity to express multiplicity. For this reason, he considered that the nature of contemporary art was, above all, ambiguous. The aim of this research is to clarify the consequences of the incorporation of ambiguity into architectural theoretical discourse. We should start by making an accurate analysis of this concept. However, this task is quite difficult, because ambiguity does not allow itself to be clearly defined. The concept has the disadvantage that its signifier is as imprecise as its signified. In addition, the negative connotations that ambiguity still has outside the aesthetic field stigmatize the term and make its use problematic. Another problem of ambiguity is that the contemporary subject is able to locate it in all situations: in addition to distinguishing ambiguity in contemporary productions, one can distinguish it in works belonging to remote ages and styles. For that reason, it could be said that everything is ambiguous; and that is correct, because in some way ambiguity is present in any creation of the imperfect human being. However, as Eco, Arnheim and Ehrenzweig pointed out, there are two major differences between current and past contexts, one affecting the subject and the other the object. First, it is the contemporary subject, and no other, who has acquired the ability to value and assimilate ambiguity. Secondly, ambiguity was an unexpected aesthetic result in former periods, while in the contemporary object it has been codified and is deliberately present. In any case, as Eco did, we consider the term ambiguity appropriate to refer to the contemporary aesthetic field. Any other term with a more specific meaning would only show partial and limited aspects of a situation that is complex and difficult to diagnose. Contrary to what might normally be expected, in this case ambiguity is the term that fits best, precisely because of its particular lack of specificity. In fact, this lack of specificity is what allows a dynamic condition to be assigned to the idea of ambiguity that in other terms would hardly be operative.
Thus, instead of trying to define the idea of ambiguity, we will analyze how it has evolved and what its consequences have been for the architectural discipline. Instead of trying to define what it is, we will examine what its presence has meant at each moment. We will deal with ambiguity as a constant presence that has always been latent in architectural production but whose nature has been modified over time. Eco, in the mid-twentieth century, discerned between classical ambiguity and contemporary ambiguity. Currently, half a century later, the challenge is to discern whether the idea of ambiguity has remained unchanged or has undergone a new transformation. What this research will demonstrate is that it is possible to detect a new transformation, one that has much to do with the cultural and aesthetic context of recent decades: the transition from modernism to postmodernism. This assumption leads us to establish two different levels of contemporary ambiguity, each related to one of these periods. The first level of ambiguity has been widely known for many years. Its main characteristics are a codified multiplicity, an interpretative freedom and an active subject who brings to conclusion an object that is incomplete or indefinite. This level of ambiguity is related to the idea of indeterminacy, a concept successfully introduced into contemporary aesthetic language. The second level of ambiguity has gone almost unnoticed by architectural criticism, although it has been identified and studied in other theoretical disciplines. Much of the work of Fredric Jameson and François Lyotard shows reasonable evidence that the aesthetic production of postmodernism has transcended modern ambiguity to reach a new level in which, despite the existence of multiplicity, the interpretative freedom and the active subject have been questioned, and at last denied. In this period ambiguity seems to have reached a level at which it is no longer possible to obtain a conclusive and complete interpretation of the object, because it has become an unreadable device. Postmodern production offers a kind of inaccessible multiplicity, and its nature is deeply contradictory. This hypothetical transformation of the idea of ambiguity has an outstanding analogy with the one shown in the poetic analysis made by William Empson, published in 1930 in his Seven Types of Ambiguity. Empson established different levels of ambiguity and classified them according to their poetic effect, in an arrangement with an ascending logic towards incoherence. At the seventh level, where ambiguity is highest, he located the contradiction between irreconcilable opposites. It could be said that contradiction, once it undermines the coherence of the object, was the best way that contemporary aesthetics found to confirm the Hegelian judgment according to which art would ultimately reject its capacity to express truth. Much of the transformation of architecture throughout the last century is related to the active involvement of ambiguity in its theoretical discourse. In modern architecture, ambiguity is present after the fact, in the critical review made by theoreticians like Colin Rowe, Manfredo Tafuri and Bruno Zevi. The publication of several studies on Mannerism in the forties and fifties rescued certain virtues of a historical style that had been undervalued owing to its deviation from the Renaissance canon. Rowe, Tafuri and Zevi, among others, pointed out the similarities between Mannerism and certain qualities of modern architecture, both devoted to breaking previous dogmas.
The recovery of Mannerism allowed ambiguity and modernity to be joined, for the first time, in the same sentence. In postmodernism, on the other hand, ambiguity is present ex professo, playing a prominent role in the theoretical discourse of the period. The distance between its analytical identification and its operational use quickly disappeared under the influence of structuralism, an analytical methodology with the aspiration of becoming a modus operandi. Under its influence, architecture began to be identified and studied as a language. Thus, the postmodern theoretical project discerned between the components of architectural language and developed them separately. Consequently, there is not one but three projects related to postmodern contradiction: the semantic project, the syntactic project and the pragmatic project. Leading these projects are those prominent architects whose work manifested a special interest in exploring and developing the potential of the use of contradiction in architecture. Thus, Robert Venturi, Peter Eisenman and Rem Koolhaas were the ones who established the main features through which architecture developed the dialectics of ambiguity, at its last and extreme level, as a theoretical project in each component of architectural language: Robert Venturi developed a new interpretation of architecture based on its semantic component, Peter Eisenman did the same with its syntactic component, and Rem Koolhaas with its pragmatic component. With this approach, this research aims to establish a new reflection on the architectural transformation from modernity to postmodernity. It may also serve to illuminate certain still-unnoticed aspects that have shaped the architectural heritage of recent decades, the consequence of a fruitful relationship between architecture and ambiguity and its provocative consummation in a contradictio in terminis. This research focuses fundamentally on the repercussions of the incorporation of ambiguity, in the form of contradiction, into postmodern architectural discourse, through each of its three theoretical projects. It is therefore structured around a main chapter entitled Dialectics of ambiguity as a postmodern theoretical project, which is broken down into three parts: Semantic project. Robert Venturi; Syntactic project. Peter Eisenman; and Pragmatic project. Rem Koolhaas. The central chapter is complemented by two others placed at the beginning. The first, entitled Dialectics of contemporary ambiguity. An approximation, carries out a chronological analysis of the evolution of the idea of ambiguity in twentieth-century aesthetic theory, without yet entering into architectural questions. The second, entitled Dialectics of ambiguity as a critique of the modern project, examines the gradual incorporation of ambiguity into the critical review of modernity, which would prove vital in enabling its later operational introduction into postmodernity. A final chapter, placed at the end of the text, proposes a series of Projections which, in light of what has been analyzed in the previous chapters, attempt a rereading of the current architectural context and its possible evolution, considering at all times that reflection on ambiguity still allows new discursive horizons to be glimpsed. Each double page of the Thesis synthesizes the tripartite structure of the central chapter and, broadly, the main methodological tool used in the research. In this way, the threefold semantic, syntactic and pragmatic character with which the postmodern theoretical project has been identified is reproduced here in a specific arrangement of images, footnotes and main text. In the left-hand column are placed the images accompanying the main text; their distribution follows aesthetic and compositional criteria, qualifying, as far as possible, their semantic condition. Next, to their right, are placed the footnotes, arranged in a column with each note at the same height as its corresponding call in the main text; their regulated distribution, their value as notation and their possible equation with a deep structure allude to their syntactic condition. Finally, the main body of the text completely occupies the right half of each double page; conceived as a continuous narrative, almost without interruptions, its role in satisfying the discursive demands of a doctoral investigation corresponds to its pragmatic condition.
Abstract:
This paper proposes a repairability index for damage assessment in reinforced concrete structural members. The procedure discussed in this paper differs from the standard methods in two aspects: the structural and damage analyses are coupled and it is based on the concepts of fracture and continuum damage mechanics. The relationship between the repairability index and the well-known Park and Ang index is shown in some particular cases.
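For context (standard background, not the paper's own derivation): the well-known Park and Ang index combines the normalized maximum deformation with the normalized hysteretic energy, usually written as

```latex
D_{PA} = \frac{\delta_{m}}{\delta_{u}}
       + \frac{\beta}{Q_{y}\,\delta_{u}} \int dE
```

where δ_m is the maximum deformation under the earthquake, δ_u the ultimate deformation under monotonic loading, Q_y the yield strength, ∫dE the absorbed hysteretic energy and β a calibration parameter; the paper relates its repairability index to this quantity in particular cases.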
Abstract:
An analytical solution of the two body problem perturbed by a constant tangential acceleration is derived with the aid of perturbation theory. The solution, which is valid for circular and elliptic orbits with generic eccentricity, describes the instantaneous time variation of all orbital elements. A comparison with high-accuracy numerical results shows that the analytical method can be effectively applied to multiple-revolution low-thrust orbit transfer around planets and in interplanetary space with negligible error.
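For reference (the standard Gauss variational equations, not the paper's final solution), a purely tangential perturbing acceleration f_t drives the semi-major axis and eccentricity at the rates

```latex
\frac{da}{dt} = \frac{2\,a^{2} v}{\mu}\, f_t, \qquad
\frac{de}{dt} = \frac{2\,(e + \cos\nu)}{v}\, f_t
```

with v the orbital speed, ν the true anomaly and μ the gravitational parameter; perturbation solutions of the kind described in the abstract are obtained by integrating such equations analytically over the orbit.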
Abstract:
The full text of this article is available in the PDF provided.
Abstract:
The analytical solution to the one-dimensional absorption–conduction heat transfer problem inside a single glass pane is presented, which correctly takes into account all the relevant physical phenomena: the appearance of multiple reflections, the spectral distribution of solar radiation, the spectral dependence of optical properties, the presence of possible coatings, the non-uniform nature of radiation absorption, and the diffusion of heat by conduction across the glass pane. In addition to the well-established direct absorptance αe, the derived solution introduces a new spectral quantity, the direct absorptance moment βe, which indicates where in the glass pane the absorption of radiation actually takes place. The theoretical and numerical comparison of the derived solution with existing approximate thermal models for the absorption–conduction problem reveals that the latter work best for low-absorbing uncoated single glass panes, a condition not necessarily fulfilled by modern glazings.
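A schematic of the governing balance (our sketch of the standard setup; the definitions below are plausible readings of the abstract, not the paper's exact notation): steady one-dimensional conduction with a distributed radiative source q(x), with αe and βe as the zeroth and first moments of the absorption profile:

```latex
-k\,\frac{d^{2}T}{dx^{2}} = q(x), \qquad
\alpha_e = \frac{1}{I_0}\int_{0}^{L} q(x)\,dx, \qquad
\beta_e = \frac{1}{I_0 L}\int_{0}^{L} x\, q(x)\,dx
```

where q(x) is the locally absorbed solar flux per unit thickness, I_0 the incident irradiance and L the pane thickness; a first moment of this kind is what encodes *where* in the pane the absorption occurs.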
Abstract:
An analytical method for evaluating the uncertainty of the performance of active antenna arrays over the whole spatial spectrum is presented. Since array processing algorithms based on a spatial reference are widely used to track moving targets, it is essential to be aware of the impact of the uncertainty sources on the antenna response. Furthermore, the estimation of the direction of arrival (DOA) depends on the array uncertainty. The aim of the uncertainty analysis is to provide an exhaustive characterization of the behavior of the active antenna array in terms of its main uncertainty sources. The result of this analysis helps to select the proper calibration technique to be implemented. An illustrative example for a triangular antenna array used for satellite tracking is presented, showing the suitability of the proposed method for carrying out an efficient characterization of an active antenna array.
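An illustrative Monte Carlo cross-check of the kind of question the paper addresses analytically (assumed error levels and array size, not the paper's method): the effect of random per-element amplitude and phase errors on the gain of a conjugate-matched array toward its steering direction, where the nominal normalized response is 1.

```python
# Monte Carlo sketch: per-element complex gain errors on an N-element array.
import numpy as np

rng = np.random.default_rng(0)
N = 8                                   # number of elements (assumed)
trials = 10000
sigma_a = 0.05                          # 5% amplitude error (assumed)
sigma_p = np.radians(2.0)               # 2 deg phase error (assumed)

a = 1.0 + sigma_a * rng.standard_normal((trials, N))
p = sigma_p * rng.standard_normal((trials, N))
g = a * np.exp(1j * p)                  # per-element complex gain error

# With conjugate-matched weights every nominal term is unity, so the
# perturbed normalized response toward the steering direction is |sum g|/N.
resp = np.abs(g.sum(axis=1)) / N
print(f"mean response {resp.mean():.4f}, std {resp.std():.4f}")
```

An analytical characterization, as in the paper, delivers such statistics in closed form over the whole spatial spectrum instead of by simulation.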
Abstract:
Corrosion of a reinforcement bar leads to expansive pressure on the surrounding concrete that provokes internal cracking and, eventually, spalling and delamination. Here, a 2D finite element with an embedded cohesive crack is applied to simulate the cracking process. In addition, four simplified analytical models are introduced for comparative purposes. Under some assumptions about rust properties, corrosion rate and, particularly, the accommodation of oxide products within the open cracks generated in the process, the proposed FE model is able to estimate the time to surface cracking quite accurately. Moreover, the emerging cracking patterns are in reasonably good agreement with expectations. As a practical case, a prototype application of the model to an actual bridge deck is reported.
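As a back-of-envelope companion to such models (assumed parameter values; the paper's FE model is far more detailed), the attack penetration from Faraday's law, with the commonly used conversion x = 0.0116 · i_corr · t (x in mm, i_corr in μA/cm², t in years), gives a first estimate of the time to surface cracking:

```python
# Crude estimate: uniform attack penetration vs. an assumed critical
# penetration needed to crack the concrete cover.
i_corr = 1.0          # uA/cm^2, assumed corrosion rate
x_crit = 0.05         # mm, assumed critical attack penetration for cover cracking

t_crack = x_crit / (0.0116 * i_corr)   # years to surface cracking
print(f"estimated time to surface cracking: {t_crack:.1f} years")
```

Models like the one in the paper refine exactly the weakest assumptions here: the rust expansion, its accommodation within open cracks, and the cover's cracking resistance.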
Abstract:
Time series are proficiently converted into graphs via the horizontal visibility (HV) algorithm, which prompts interest in its capability for capturing the nature of different classes of series in a network context. We have recently shown [B. Luque et al., PLoS ONE 6, 9 (2011)] that dynamical systems can be studied from a novel perspective via the use of this method. Specifically, the period-doubling and band-splitting attractor cascades that characterize unimodal maps transform into families of graphs that turn out to be independent of map nonlinearity or other particulars. Here, we provide an in-depth description of the HV treatment of the Feigenbaum scenario, together with analytical derivations relating to the degree distributions, mean distances, clustering coefficients, and other quantities associated with the bifurcation cascades and their accumulation points. We describe how the resulting families of graphs can be framed into a renormalization group scheme in which fixed-point graphs reveal their scaling properties. These fixed points are then re-derived from an entropy optimization process defined for the graph sets, confirming a suggested connection between renormalization group and entropy optimization. Finally, we provide analytical and numerical results for the graph entropy and show that it emulates the Lyapunov exponent of the map independently of its sign.
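A minimal sketch of the HV criterion itself (our illustration; the paper's analysis goes much further): two data points are linked when every intermediate point lies strictly below both of them.

```python
# Horizontal visibility (HV) graph: nodes i and j (i < j) are linked iff
# x[k] < min(x[i], x[j]) for every intermediate index i < k < j.
def horizontal_visibility_edges(x):
    edges = []
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

# Example: a short orbit of the fully chaotic logistic map x -> 4x(1-x)
x, series = 0.3, []
for _ in range(10):
    series.append(x)
    x = 4.0 * x * (1.0 - x)
print(horizontal_visibility_edges(series))
```

Consecutive points are always linked (the intermediate range is empty), so the HV graph is a connected "chain plus shortcuts" structure whose statistics encode the dynamics, as the paper exploits for the Feigenbaum cascades.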