881 results for Fixed Point Index
Abstract:
Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing a program as a function of its input data sizes, without actually having to execute the program. While a powerful resource analysis framework for object-oriented programs existed before this thesis, advanced aspects that improve the efficiency, the accuracy and the reliability of the analysis results still need to be investigated further. This thesis tackles this need from the following four perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses that keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. In this thesis we present two extensions to this approach: the first is to consider array accesses in addition to object fields; the second focuses on handling cases for which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to (re-)analyze fragments of code that are not affected by the changes. During software development, programs are permanently modified, yet most analyzers still read and analyze the entire program at once in a non-incremental way. This thesis presents an incremental resource usage analysis which, after a change in the program is made, is able to reconstruct the upper bounds of all affected methods incrementally. To this end, we propose (i) a multi-domain incremental fixed-point algorithm that can be used by all the global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of the cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or its results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we focus on developing a formal framework for verifying the resource guarantees obtained by the analyzers, instead of verifying the tools themselves. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code: COSTA derives upper bounds for Java programs, while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that this cooperation of tools can automatically produce verified resource guarantees.
(4) Distribution and concurrency are today mainstream. Concurrent objects form a well-established model for distributed concurrent systems. In this model, objects are the concurrency units, and they communicate via asynchronous method calls. Distribution suggests that the analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, can keep the cost of the diverse distributed components separate.
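As a rough illustration of contribution (2), the following minimal Python sketch shows the shape of a worklist-style incremental fixed-point computation over a call-dependency graph: after an edit invalidates some nodes, only components whose results actually move are re-analyzed. All names (deps, transfer, results) are hypothetical simplifications; the thesis's algorithm is multi-domain and considerably more elaborate.

```python
# Hedged sketch: incremental worklist fixed point over a dependency graph.
# Assumes a monotone `transfer` over a finite-height abstract domain.

def incremental_fixpoint(nodes, deps, transfer, results, changed):
    """Recompute analysis results after the nodes in `changed` were edited.

    nodes    -- iterable of analysis units (e.g. methods)
    deps     -- deps[n]: set of nodes whose results n reads (its callees)
    transfer -- transfer(n, results) -> new abstract result for n
    results  -- dict holding the previous (pre-change) results
    changed  -- set of nodes whose code was modified
    """
    # Reverse dependencies: who must be revisited when n's result moves.
    rdeps = {n: set() for n in nodes}
    for n in nodes:
        for d in deps[n]:
            rdeps[d].add(n)

    worklist = set(changed)            # start only from the edited nodes
    while worklist:
        n = worklist.pop()
        new = transfer(n, results)     # re-run the abstract semantics of n
        if new != results.get(n):      # result moved: propagate to callers
            results[n] = new
            worklist |= rdeps[n]
    return results
```

The point mirrored in the abstract is that unchanged components are never revisited unless a result they depend on actually changes.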
Abstract:
We study the renormalization group flow of the average action of the stochastic Navier-Stokes equation with power-law forcing. Using Galilean invariance, we introduce a nonperturbative approximation adapted to the zero-frequency sector of the theory in the parametric range of the Hölder exponent 4 − 2ε of the forcing where real-space local interactions are relevant. In any spatial dimension d, we observe the convergence of the resulting renormalization group flow to a unique fixed point which yields a kinetic energy spectrum scaling in agreement with canonical dimension analysis. Kolmogorov's −5/3 law is thus recovered for ε = 2, as also predicted by perturbative renormalization. At variance with the perturbative prediction, the −5/3 law emerges in the presence of a saturation in the ε dependence of the scaling dimension of the eddy diffusivity at ε = 3/2 when, according to perturbative renormalization, the velocity field becomes infrared relevant.
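For reference, the Kolmogorov −5/3 law that the abstract recovers at ε = 2 is the classical dimensional-analysis prediction for the kinetic energy spectrum in the inertial range (notation here is generic, not taken from the paper: C is the Kolmogorov constant, ε̄ the mean energy dissipation rate, L the integral scale and η the dissipative scale):

```latex
E(k) \;=\; C\,\bar{\varepsilon}^{\,2/3}\,k^{-5/3},
\qquad L^{-1} \ll k \ll \eta^{-1}.
```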
Abstract:
The development of new-generation intelligent vehicle technologies will lead to a better level of road safety and to CO2 emission reductions. However, the weak point of all these systems is their need for comprehensive and reliable data. For traffic data acquisition, two sources are currently available: 1) infrastructure sensors and 2) floating vehicles. The former consist of sets of fixed-point detectors installed in the roads; the latter consist of mobile probe vehicles used as mobile sensors. However, both systems still have deficiencies. Infrastructure sensors retrieve information from static points of the road which are spaced, in some cases, kilometers apart, so the picture they give of the actual traffic situation is not an accurate one. This deficiency is corrected by floating cars, which retrieve dynamic information on the traffic situation; unfortunately, the number of floating-data vehicles currently available is too small to give a complete picture of the road traffic. In this paper, we present a floating car data (FCD) augmentation system that combines information from floating-data vehicles and infrastructure sensors and that, by using neural networks, is capable of augmenting the amount of FCD with virtual information. This system has been implemented and tested on actual roads, and the results show little difference between the data supplied by the floating vehicles and by the virtual ones.
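A minimal sketch of the kind of learned mapping such an FCD augmentation system could use: a small neural network trained to predict the speed of a "virtual vehicle" on an unsensed stretch from nearby detector readings. The feature layout, placeholder data and the choice of scikit-learn's MLPRegressor are illustrative assumptions, not the paper's actual architecture.

```python
# Hedged sketch: predicting virtual FCD speeds from infrastructure sensors.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: each row holds upstream/downstream detector
# measurements (flow, occupancy, speed); the target is the speed a real
# probe vehicle reported on the stretch between the detectors.
X_train = rng.random((500, 6))        # placeholder detector features
y_train = 120.0 * rng.random(500)     # placeholder probe speeds (km/h)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                     random_state=0)
model.fit(X_train, y_train)

# A "virtual vehicle": estimate the speed where no probe is available.
x_now = rng.random((1, 6))
print(f"virtual FCD speed estimate: {model.predict(x_now)[0]:.1f} km/h")
```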
Abstract:
Time series are proficiently converted into graphs via the horizontal visibility (HV) algorithm, which prompts interest in its capability for capturing the nature of different classes of series in a network context. We have recently shown [B. Luque et al., PLoS ONE 6, 9 (2011)] that dynamical systems can be studied from a novel perspective via the use of this method. Specifically, the period-doubling and band-splitting attractor cascades that characterize unimodal maps transform into families of graphs that turn out to be independent of map nonlinearity or other particulars. Here, we provide an in-depth description of the HV treatment of the Feigenbaum scenario, together with analytical derivations that relate to the degree distributions, mean distances, clustering coefficients, etc., associated with the bifurcation cascades and their accumulation points. We describe how the resulting families of graphs can be framed into a renormalization group scheme in which fixed-point graphs reveal their scaling properties. These fixed points are then re-derived from an entropy optimization process defined for the graph sets, confirming a suggested connection between renormalization group and entropy optimization. Finally, we provide analytical and numerical results for the graph entropy and show that it emulates the Lyapunov exponent of the map independently of its sign.
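The horizontal visibility algorithm on which this line of work rests has a one-line definition: data points x_i and x_j (i < j) are linked iff every intermediate value lies strictly below both, i.e. x_n < min(x_i, x_j) for all i < n < j. A direct quadratic-time sketch (the naive version; faster stack-based constructions exist):

```python
# Horizontal visibility (HV) graph of a time series: nodes i and j are
# linked iff x_n < min(x_i, x_j) for every n strictly between them.
def hv_edges(x):
    edges = []
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            if all(x[n] < min(x[i], x[j]) for n in range(i + 1, j)):
                edges.append((i, j))
    return edges

# Example: a period-2 orbit of a unimodal map gives a simple repetitive
# HV graph; chaotic series give exponential degree distributions.
print(hv_edges([0.4, 0.9, 0.4, 0.9, 0.4, 0.9]))
```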
Abstract:
A hard-in-amplitude transition to chaos in a class of dissipative flows of broad applicability is presented. For positive values of a parameter F, no matter how small, a fully developed chaotic attractor exists within some domain of additional parameters, whereas no chaotic behavior exists for F < 0. As F is made positive, an unstable fixed point reaches an invariant plane to enter a phase half-space of physical solutions; the ghosts of a line of fixed points and a rich heteroclinic structure existing at F = 0 make the limits t → +∞, F → +0 non-commuting, and allow an exact description of the chaotic flow. The formal structure of flows that exhibit the transition is determined. A subclass of such flows (coupled oscillators in near-resonance at any 2:q frequency ratio, with F representing linear excitation of the first oscillator) is fully analysed.
Abstract:
Four-dimensional flow in the phase space of three amplitudes of circularly polarized Alfvén waves and one relative phase, resulting from a resonant three-wave truncation of the derivative nonlinear Schrödinger equation, has been analyzed; wave 1 is linearly unstable with growth rate γ1, and waves 2 and 3 are stable with dampings γ2 and γ3, respectively. The dependence of gross dynamical features on the damping model, as characterized by the relation between the damping and wave-vector ratios γ2/γ3 and k2/k3 and by the polarization of the waves, is discussed; two damping models, Landau (γ ∝ k) and resistive (γ ∝ k²), are studied in depth. Very complex dynamics, such as multiple blue-sky catastrophes and chaotic attractors arising from Feigenbaum sequences, and explosive bifurcations involving intermittency-I chaos, are shown to be associated with the existence and loss of stability of a certain fixed point P of the flow. Independently of the damping model, the conditions for P to exist are stronger than what flow contraction alone requires. In the case of right-hand (RH) polarization, point P may exist for all models other than Landau damping; for the resistive model, P may exist for RH polarization only under an additional condition on γ2 and γ3.
Abstract:
Nonlinearly coupled, damped oscillators at 1:1 frequency ratio, one oscillator being driven coherently for efficient excitation, are exemplified by a spherical swing with some phase mismatch between drive and response. For a certain damping range, excitation is found to succeed if it lags behind the response, but to produce a chaotic attractor if it leads it. Although a period-doubling sequence, as damping increases, leads to the attractor, this is actually born as a hard (as regards amplitude) bifurcation at a zero growth-rate parametric line; as damping decreases, an unstable fixed point crosses an invariant plane to enter, as a saddle-focus, a phase-space domain of physical solutions. A second hard bifurcation occurs at the zero-mismatch line, the saddle-focus leaving that domain. Times on the attractor diverge when approaching either line, leading to exactly one-dimensional and noninvertible limit maps, which are analytically determined.
Abstract:
The coherent three-wave interaction, with linear growth in the higher-frequency wave and damping in the two other waves, is reconsidered; for equal dampings, the resulting three-dimensional (3-D) flow of a relative phase and just two amplitudes behaved chaotically, no matter how small the growth of the unstable wave. The general case of different dampings is studied here to test whether, and how, that hard scenario for chaos is preserved in passing from 3-D to four-dimensional flows. It is found that the wave with higher damping is partially slaved to the other damped wave; this retains a feature of the original problem (an invariant surface that meets an unstable fixed point at zero growth rate) that gave rise to the chaotic attractor and determined its structure, and suggests that the sudden transition to chaos should appear in more complex wave interactions.
Abstract:
In this paper we develop new techniques for revealing geometrical structures in phase space that are valid for aperiodically time-dependent dynamical systems, which we refer to as Lagrangian descriptors. These quantities are based on the integration, for a finite time, along trajectories of an intrinsic bounded, positive geometrical and/or physical property of the trajectory itself. We discuss a general methodology for constructing Lagrangian descriptors, and a “heuristic argument” that explains why this method succeeds in revealing geometrical structures in the phase space of a dynamical system. We support this argument by explicit calculations on a benchmark problem having a hyperbolic fixed point with stable and unstable manifolds that are known analytically. Several other benchmark examples are considered that allow us to assess the performance of Lagrangian descriptors in revealing invariant tori and regions of shear. Throughout the paper, “side-by-side” comparisons of the performance of Lagrangian descriptors with both finite-time Lyapunov exponents (FTLEs) and finite-time averages of certain components of the vector field (“time averages”) are carried out and discussed. In all cases Lagrangian descriptors are shown to be both more accurate and computationally efficient than these methods. We also perform computations for an explicitly three-dimensional, aperiodically time-dependent vector field and for an aperiodically time-dependent vector field defined as a data set. Comparisons with FTLEs and time averages for these examples are also carried out, with similar conclusions as for the benchmark examples.
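A minimal sketch of one common choice of Lagrangian descriptor, the arclength-type M function: M(x0, t0) integrates the speed ‖v(x(t), t)‖ along the trajectory through x0 over [t0 − τ, t0 + τ], and sharp features of M over a grid of initial conditions delineate the stable and unstable manifolds. The pendulum field below is a stand-in benchmark with a hyperbolic fixed point, not necessarily the paper's example.

```python
# Hedged sketch: arclength Lagrangian descriptor M for a 2-D vector field.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def v(t, z):                 # pendulum; hyperbolic fixed point at (pi, 0)
    x, p = z
    return [p, -np.sin(x)]

def M(z0, tau=10.0):
    """Integrate the speed along the trajectory through z0, in both
    forward and backward time, and return the total arclength."""
    total = 0.0
    for span in [(0.0, tau), (0.0, -tau)]:
        sol = solve_ivp(v, span, z0, dense_output=True, rtol=1e-8)
        ts = np.linspace(*span, 2000)
        speeds = np.linalg.norm(v(0.0, sol.sol(ts)), axis=0)
        total += trapezoid(speeds, np.abs(ts))
    return total

# Evaluating M over a grid of initial conditions and plotting it reveals
# the manifolds of the hyperbolic fixed point as ridges/valleys.
print(M([3.0, 0.1]))
```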
Abstract:
The type-I intermittency route to (or out of) chaos is investigated within horizontal visibility (HV) graph theory. For that purpose, we address the trajectories generated by unimodal maps close to an inverse tangent bifurcation and construct their associated HV graphs. We show how the alternation of laminar episodes and chaotic bursts imprints a fingerprint in the resulting graph structure. Accordingly, we derive a phenomenological theory that predicts quantitative values for several network parameters. In particular, we predict that the characteristic power-law scaling of the mean length of laminar trend sizes is fully inherited by the variance of the graph degree distribution, in good agreement with the numerics. We also report numerical evidence of how the characteristic power-law scaling of the Lyapunov exponent as a function of the distance to the tangent bifurcation is inherited in the graph by an analogous scaling of block entropy functionals defined on the graph. Furthermore, we are able to recast the full set of HV graphs generated by intermittent dynamics into a renormalization-group framework, in which the fixed points of its graph-theoretical renormalization-group flow account for the different types of dynamics. We also establish that the nontrivial fixed point of this flow coincides with the tangency condition and that the corresponding invariant graph exhibits extremal entropic properties.
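For concreteness, the trajectories the abstract analyzes can be generated by iterating a unimodal map just below a tangent (saddle-node) bifurcation; for the logistic map this happens at r = 1 + √8 ≈ 3.8284, where laminar period-3 episodes alternate with chaotic bursts. A sketch with textbook parameter values (not necessarily those used in the paper), whose output can be fed to the hv_edges construction sketched earlier:

```python
# Hedged sketch: type-I intermittent series from the logistic map just
# below the period-3 tangent bifurcation at r = 1 + sqrt(8).
import math

r = 1 + math.sqrt(8) - 1e-4       # slightly below tangency: intermittency
x, series = 0.5, []
for _ in range(5000):
    x = r * x * (1 - x)
    series.append(x)
# Building the HV graph of `series` exposes the laminar/burst alternation
# in the degree statistics, as described in the abstract.
```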
Abstract:
We have studied the isomerization reaction LiNC/LiCN, both isolated and perturbed by a laser pulse, using transition state theory (TST). The basis of this theory is that knowing the dynamics in a neighbourhood of a saddle point of the potential energy surface suffices to determine the kinetic magnitudes of the reaction under study. TST was first formulated in the 1930s from two points of view: a thermodynamical one by Eyring (Eyr38) and a dynamical one by Wigner (Wig38). The latter has recently undergone extensive development, in parallel with advances in dynamical systems, leading to a geometrical formulation in phase space that underpins the work in this thesis. Since TST has one main handicap, its high computational cost, the first goal of this work has been to lay the theoretical and computational foundations of an efficient algorithm for obtaining the fundamental TST magnitudes. We have adapted a computational algorithm developed in the field of celestial mechanics (Jor99), obtaining a fast, efficient and accurate method for computing the geometric objects that govern the dynamics in phase space. The flux across the dividing surface, the density of states of reactants and products and, ultimately, the reaction rate coefficient have been calculated and compared with previous statistical results (Mül07), demonstrating the accuracy of the method. The second goal of this thesis has been to evaluate the influence of the parameters of an electromagnetic pulse on the reaction dynamics. To this end, the methodology for obtaining the normal form of the Hamiltonian has been generalized to the case in which the chemical system is altered by a time-periodic perturbation. In that case, the unstable fixed point in whose neighbourhood the geometric objects of TST are computed becomes a periodic orbit with the same period as the perturbation. This has allowed us to simulate the isomerization reaction in the presence of a laser pulse; knowing the effect of that perturbation makes it possible to control the chemical reactivity. We have also studied the effect of the pulse parameters on the global phase-space dynamics and on the reactive flux across the dividing surface separating reactants from products. The pulse amplitude has proven to be the most influential parameter on the reaction dynamics: increasing the amplitude leads to greater fluxes, and to nonzero reactive flux at energies where the unperturbed system shows none.
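As a reminder of the quantities named here, in one standard classical formulation of phase-space TST the microcanonical rate constant is the directional flux across the dividing surface divided by the reactant density of states (generic notation, not taken from the thesis):

```latex
k(E) \;=\; \frac{f(E)}{\rho_r(E)}, \qquad
\rho_r(E) \;=\; \int_{\text{reactants}} \delta\big(E - H(q,p)\big)\,\mathrm{d}q\,\mathrm{d}p,
```

where f(E) is the phase-space flux through the dividing surface at energy E and H the Hamiltonian.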
Abstract:
A novel class of graphs, here named quasiperiodic, is constructed via application of the horizontal visibility algorithm to the time series generated along the quasiperiodic route to chaos. We show how the hierarchy of mode-locked regions represented by the Farey tree is inherited by their associated graphs. We are able to establish, via renormalization group (RG) theory, the architecture of the quasiperiodic graphs produced by irrational winding numbers with purely periodic continued-fraction expansions. Finally, we demonstrate that the RG fixed-point degree distributions are recovered via optimization of a suitably defined graph entropy.
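The time series along the quasiperiodic route are conventionally generated by a circle map; at zero nonlinearity this is a pure rotation, and the inverse golden mean, whose continued-fraction expansion [1, 1, 1, ...] is purely periodic, is the canonical winding number. A sketch (illustrative, not the paper's exact setup) whose output can again be fed to the hv_edges construction above:

```python
# Hedged sketch: quasiperiodic series from a pure rotation with the
# inverse golden-mean winding number.
import math

omega = (math.sqrt(5) - 1) / 2        # continued fraction [1, 1, 1, ...]
theta, series = 0.0, []
for _ in range(200):
    theta = (theta + omega) % 1.0     # circle map with zero nonlinearity
    series.append(theta)
# The HV graph of `series` is a quasiperiodic graph whose structure
# mirrors the Farey organization of the mode-locked regions.
```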
Abstract:
Over the years, reinforced concrete structures have been increasing their market share, replacing masonry structures of stone or brick and also taking over from steel structures. One of the first problems encountered in executing reinforced concrete structures was how to connect one phase of such a structure to a later phase or a subsequent modification. Until the 1980s and 1990s, the connections between one phase of a reinforced concrete structure and a later one were made by leaving, in the first phase, steel plates with hooked bars embedded in the fresh concrete, or bent-back bars coated with expanded polystyrene. Once the concrete had hardened, new bars could be connected for the next stage by welding them to the surface plate or by straightening the bent bars to embed them in the fresh concrete of the next phase. These systems required previous knowledge of the existence and scope of the subsequent phase before concreting the previous one. They also required very precise and complex setting out of the connecting elements. Another existing problem in concrete structures was the adhesion of fresh concrete to previously hardened concrete, since the contact surface between both concretes led to a weak point with low adherence. From the 1980s, the construction chemicals industry experienced a breakthrough in the development of products that generate a good bond to hardened concrete. This technological advance had its application both in the bond of fresh concrete to hardened concrete and in the bond of bars post-installed in holes drilled in hardened concrete. This system was termed adhesive anchoring of steel bars in hardened concrete. The generic way of executing it is to drill a cylindrical hole in the concrete support using a specific tool such as a drill, clean the bore, fill it with bonding material and, lastly, introduce the steel bar. These adhesive anchors are divided into cementitious and chemical anchors, the latter being the most common, reliable, durable and easy to execute. The use of adhesive anchors of steel bars in hardened concrete has spread across the production spectrum, becoming a very common solution both in reinforced concrete civil engineering and building construction and in industrial works, installations and the fixing of elements. The execution of an anchor of a steel bar in hardened concrete depends on numerous variables which, together or in isolation, may significantly affect the strength of the anchor. We are referring to variables of anchors which are often not considered, such as the diameter difference between the rod and the bore, the drilling machine and bit used, the cleaning of the drill hole, the type of anchor material, the moisture of the substrate, the direction of the drilling, the height of the drill hole, etc. In recent years, the emergence of self-compacting concrete has added a further variable which has hardly been studied so far. In line with the above, the main objective of this doctoral thesis is to study the influence of the execution conditions on the strength of anchors in conventional and self-compacting concrete. This research is primarily focused on evaluating the influence of several variables on the strength of the anchoring, both in conventional concrete and in self-compacting concrete. To complete this study, the manufacture of two concrete supports on which to develop the tests was required. One of the blocks was manufactured with conventional concrete and the other with self-compacting concrete.
A total of 174 steel bar anchors were made in each of the concrete pieces, varying the parameters under study in order to obtain results for all the variables considered. The tests performed on both blocks are exactly the same, so as to compare the difference between an anchor in a support of conventional vibrated concrete (HVC) and one of self-compacting concrete (SCC). Each type of test was repeated twice in the same piece. The pull-out test of the bars was made with a hollow hydraulic jack and an instrumentation system for reading and recording data in real time. The analysis of the results, carried out with a powerful statistical tool, made it possible to numerically determine and evaluate the influence of the variables considered on the resistance of the anchors. It likewise made it possible to differentiate the results obtained in conventional and self-compacting concretes, both from the standpoint of mechanical strength and of the displacements undergone during pull-out. The mechanical strength of an anchor is defined as the force developed along the direction of the bar to pull it out of the support. Likewise, the displacement is defined as the separation, in the direction of the bar, between a fixed point on the bar and another on the support. These points are determined once the anchor is finished, at the intersection of the bar with the flat surface of the support. The conclusions obtained have established which variables affect the execution of the anchors and to what extent, and have made it possible to determine the difference between anchors in conventional vibrated concrete and in self-compacting concrete, with very interesting results that allow the influence of these variables to be assessed. The conclusions fall into three groups, which we call high influence, low influence and no influence; in every case the study must be made in terms of both load and displacement. In terms of load, the drilling machine and the anchoring material can be considered of high influence. In terms of displacement, besides the drilling machine and the anchoring material, the drilling diameter and the cleaning and moisture of the support can be considered of high influence. In terms of load, the type of concrete, the drilling direction and the cleaning and moisture of the support are of low influence; in terms of displacement, the type of concrete and the drilling direction. In terms of load, the drilling diameter and the height of the drill hole are of no influence; in terms of displacement, the height of the drill hole. We can affirm that the differences between the load values increase very significantly in terms of displacement.
Abstract:
In recent years a great number of high-speed railway bridges have been constructed within the Spanish borders. Due to the demanding geometric requirements of high-speed train routes, these bridges frequently have remarkable lengths, which is the main reason why railway bridges are overall longer than roadway bridges. In the same line, it is also worth highlighting the importance of high-speed train braking forces compared with those of road vehicles. While vehicle braking forces can be tackled easily, railway braking forces demand the existence of a fixed point. It is generally located at an abutment, where the no-displacement requirement can be more easily achieved; in some other cases the fixed point is placed at one of the interior columns. As a consequence of these bridges' length and the need for a fixed point, temperature, creep and shrinkage strains lead to fairly significant deck displacements, which become greater with the distance to the fixed point. These displacements need to be accommodated by the deformation of the piers and bearings. Regular elastomeric bearings are not able to allow such displacements and are therefore not suitable for this task. For this reason, the use of sliding PTFE POT bearings has been extensive practice, mainly because they permit sliding with low friction. This is not the only reason for the extensive use of these bearings in high-speed railway bridges: the vertical load at each bent is significantly higher than in roadway bridges, mainly because live loads due to train traffic are much greater than those of road vehicles, and the gravel track bed represents a far from negligible permanent load. All this together increases the vertical loads to be withstood. This high vertical load demand rules out conventional bearings on account of excessive compression, whereas the higher technology of PTFE POT bearings allows this level of compression to be accommodated thanks to their design. The bridge configuration explained above leads to a key fact regarding longitudinal horizontal loads (such as braking forces): these loads are transmitted entirely to the fixed point alone. The piers do not receive longitudinal horizontal loads, since the PTFE POT bearings installed are longitudinally free-sliding. This means that longitudinal horizontal actions on top of the piers will not be forces but imposed displacements. This feature makes it necessary to approach the design of these piers differently from the case in which piers are elastically linked to the superstructure, as with elastomeric bearings. In response to the above, the main goal of this Thesis is to present a design method for columns carrying either longitudinally fixed POT bearings or longitudinally free PTFE POT bearings within bridges with a fixed-point deck configuration, applicable to railway and road bridges. The method was developed with the intention of accounting for all major parameters that play a role in the behavior of these columns. The long process that finally led to the method's formulation is rooted in the understanding of these columns' behavior. All the assumptions made in elaborating the formulations contained in this method were made in favor of conservative results. The singularity of the analysis of columns with this configuration is due to a combination of different aspects. One of the first steps of this work was to study these design aspects and understand the role each plays in the column's response.
Among these aspects, special attention was dedicated to the column's own creep under permanent actions such as rheological deck displacements, and also to the implications of the longitudinally guided PTFE POT bearings for the design of the column. The result of this study is the design method presented in this Thesis, which yields a compliant vertical reinforcement distribution along the column. The design of horizontal reinforcement for shear forces is not addressed in this Thesis. The method's formulations are meant to be applicable to the greatest number of cases, leaving many parameter values to the engineer's judgement; in this regard, the method is a helpful tool for a wide range of cases. The widespread use of European standards in recent years, in particular the so-called Eurocodes, is one of the reasons why this Thesis has been developed in accordance with the Eurocodes. The same applies to the bearing design implications, which are covered by the rather recent European code EN 1337. One of the most relevant aspects that this work has taken from the Eurocodes is the security format for non-linear calculations. The simplified biaxial bending approach used in the design method presented in this work also rests on Eurocode recommendations. The columns under analysis are governed by a set of dimensionless parameters that are presented in this work. The identification of these parameters is helpful for design purposes, since two columns with identical dimensionless parameters may be designed together. The first group of these parameters has to do with the cross-sectional behavior, represented in the bending-curvature diagrams; a second group of parameters defines the column's response. Thanks to this identification of the governing dimensionless parameters, it has been possible to construct what have been named Dimensionless Design Curves, which allow a preliminary vertical reinforcement distribution for the column to be obtained in a reduced time. These curves are of limited use nowadays, firstly because each family of curves refers to specific values of many different parameters, and secondly because the use of computers allows for extremely quick and accurate calculations.
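To make the force-versus-imposed-displacement distinction concrete: for an idealized elastic cantilever pier of height L and flexural stiffness EI, fixed at the base, whose head is forced to follow a deck displacement δ, first-order analysis gives (a textbook result, not a formula from the Thesis):

```latex
V \;=\; \frac{3EI}{L^{3}}\,\delta, \qquad
M_{\mathrm{base}} \;=\; V\,L \;=\; \frac{3EI}{L^{2}}\,\delta.
```

The pier's internal forces thus scale with its own stiffness, so cracking and creep, by reducing the effective EI, relax these imposed-deformation effects; this is one reason such columns cannot be designed as if they simply carried applied forces.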
Abstract:
Leisure noise is one of the most important environmental pollutants nowadays. This noise is present not only near the leisure venues where people go at night, but also around leisure events such as popular festivities or concerts held in urban areas. There are few studies and actions addressing leisure noise from the environmental-noise point of view and, consequently, little information about its main characteristics, the most appropriate measurement methods or the most suitable parameters for evaluating this kind of noise. These, then, are the aims of this PhD thesis. For the noise around nightlife venues, a measurement method has been developed and evaluated, based on binaural measurements taken along a walking route (the soundwalk technique) together with long-duration measurements at fixed points in the different leisure areas of Madrid and Cuenca. With the results of these measurements, an acoustic characterization of leisure noise has been carried out, an action procedure including a prediction model has been defined, and a classification model capable of differentiating leisure noise from road traffic noise has been developed. For leisure events, a measurement and assessment method adapted to their characteristics has also been developed, and with it the most important events taking place over one year in Madrid and Cuenca have been measured; the analysis of these measurements has determined which events are the noisiest, as well as their main characteristics and the differences between them. This study aims to support the management of environmental noise from leisure activities, providing qualitative and quantitative data on this type of noise in its different facets and contributing new tools to facilitate its management.
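The long-duration fixed-point measurements mentioned here are conventionally summarized by the A-weighted equivalent continuous sound level; as a reminder, its standard definition (not specific to this thesis) is:

```latex
L_{Aeq,T} \;=\; 10\,\log_{10}\!\left(\frac{1}{T}\int_{0}^{T}
\frac{p_A^{2}(t)}{p_0^{2}}\,\mathrm{d}t\right)\ \mathrm{dB},
\qquad p_0 = 20\ \mu\mathrm{Pa},
```

where p_A(t) is the A-weighted sound pressure and T the measurement interval.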