166 results for Jose Luis Rodríguez Zapatero
Abstract:
These are application exercises for the Basic Stiffness Method, intended primarily for the course ESTRUCTURAS I at the ETS de Arquitectura de Madrid, UPM.
Abstract:
From the creation of the Viceroyalty of Peru in the sixteenth century, arches, vaults and domes were customarily built in stone and masonry. These lands, however, were periodically shaken by earthquakes that brought down most of these buildings. By the seventeenth century the master builders (alarifes) had already tried several ways of erecting vaults without finding a satisfactory answer in terms of time, economy and stability against earthquakes. In this context the timber-framed vaults (bóvedas encamonadas) were introduced around the middle of the seventeenth century, and the system became consolidated over the rest of the century to the point of turning into a traditional and highly regarded resource of Peruvian viceregal architecture. These vaults were built with timber planks (camones) that overlapped one another to form arches (cerchas), which defined the shape of the vault and were braced laterally with purlins. Over the arches and purlins a closing layer was laid, which could be boarding, wooden strips or simply a bed of canes. In most cases the vault was finished with an insulating mud render on the extrados and a decorative gypsum plaster on the intrados. These vaults are precisely the subject of the present thesis, specifically their historical development between the seventeenth and eighteenth centuries within the territory of the Viceroyalty of Peru, starting from the examination of contemporary architectural treatises and the study of timber vaults in Spain, and concluding with the analysis of the geometric and constructive features that the Peruvian builders came to define for them.
Abstract:
The text describes how to assemble the equilibrium matrix of 3D structures made of bars pin-jointed at the nodes. The theoretical concepts are explained first, and the procedure is then implemented in the commercial software Maple.
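To make the assembly procedure concrete, the sketch below shows a minimal Python/NumPy version of the same idea (not the Maple implementation described above; the node coordinates, member list and all names are illustrative assumptions). Each bar contributes its unit direction vector to the rows of its two end nodes, so that B·N = F relates the axial forces N to the nodal loads F.

```python
import numpy as np

def equilibrium_matrix(nodes, bars):
    """Assemble the 3D equilibrium matrix B of a pin-jointed bar structure.

    nodes : (n, 3) array-like of nodal coordinates
    bars  : list of (i, j) node-index pairs, one per bar
    Returns B with shape (3*n, n_bars) such that B @ N = F, where N are the
    axial forces (tension positive) and F the nodal loads.
    """
    nodes = np.asarray(nodes, dtype=float)
    B = np.zeros((3 * len(nodes), len(bars)))
    for k, (i, j) in enumerate(bars):
        d = nodes[j] - nodes[i]        # bar vector from node i to node j
        u = d / np.linalg.norm(d)      # unit direction vector of the bar
        B[3*i:3*i+3, k] = u            # a tensile force pulls node i toward j
        B[3*j:3*j+3, k] = -u           # ... and node j toward i
    return B

# Tiny illustrative example: a tripod of three bars meeting at an apex node.
nodes = [(1.0, 0.0, 0.0), (-0.5, 0.87, 0.0), (-0.5, -0.87, 0.0), (0.0, 0.0, 1.0)]
bars = [(0, 3), (1, 3), (2, 3)]
print(equilibrium_matrix(nodes, bars).shape)   # (12, 3)
```

Rows corresponding to supported (constrained) degrees of freedom would simply be removed before using B.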
Abstract:
The research carried out in this thesis has focused on the characterization and optimization of the Fe/Gd/Fe system and on the study of its effect on spin-dependent transport and on spin-transfer torque (STT) in magnetic devices. STT, one of the major discoveries of spintronics, is based on the transfer of angular momentum from a spin-polarized current to the local magnetization of a magnetic material. As a result, a spin-polarized current can change the magnetization of the material without any applied magnetic field. The phenomenon requires a very high current density, so its effects are only observed in nanometre-sized devices above the so-called critical current density. STT has great technological potential for applications such as radio-frequency emitters for on-chip communication or alternative magnetic memories, in which information could be read and written using current alone, with no applied magnetic field and no detection coils. For this kind of application there is strong interest in lowering the critical current density at which the effect begins to appear. In other devices, however, STT is a problem or a limiting factor. This is the case of hard-disk read heads, in which, above the critical current density, STT induces additional noise and instability in the signal, limiting their sensitivity. For this type of application the critical current at which additional noise and instability appear should therefore be as large as possible. Gd (and especially the Fe/Gd/Fe system) has very particular characteristics with the potential to affect several properties related to the critical current density of an STT device. It is therefore interesting to introduce the Fe/Gd/Fe trilayer into the free layer of such devices and study how it affects their stability. To this end, a first part of the work focused on the exhaustive characterization of the Fe/Gd/Fe system and the optimization of its properties with a view to its introduction into the free layer of STT devices. On the other hand, the ultimate aim is to alter or control the spin-transfer effect in a device while affecting as little as possible the other properties intrinsic to its operation (for example, the value of its magnetoresistance). It was therefore necessary to study the effects of the Fe/Gd/Fe system on spin transport and to determine how to introduce the trilayer into the device while optimizing, or minimally affecting, the rest of its properties. Finally, we introduced the Fe/Gd/Fe system into the free layer of nanodevices and studied its effect on the critical current for STT-induced instability. The results show that these Fe/Gd/Fe trilayers can be a potential solution to the stability problems of some magnetic nanodevices such as magnetic read heads.
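For orientation only, the order of magnitude of the critical current density in such devices is often estimated with the standard Slonczewski-type expression for an in-plane magnetized free layer; this is generic textbook background quoted as an assumption, not a result of the thesis:

```latex
% Commonly quoted estimate (CGS units) of the critical current density for an
% in-plane magnetized free layer; generic background, not a result of the thesis.
J_{c0} \simeq \frac{2e}{\hbar}\,
       \frac{\alpha\, M_\mathrm{s}\, t_\mathrm{F}
             \left(H + H_\mathrm{k} + 2\pi M_\mathrm{s}\right)}{\eta}
```

Here α is the Gilbert damping, M_s and t_F are the saturation magnetization and thickness of the free layer, H_k its anisotropy field and η the spin-polarization efficiency; a trilayer such as Fe/Gd/Fe that modifies the effective M_s and damping of the free layer can therefore shift the current at which STT-induced instability sets in.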
Abstract:
In this work we propose a method for cleaving silicon-based photonic chips with a laser micromachining system consisting of an Nd:YVO4 laser emitting at 355 nm in the nanosecond pulse regime and a micropositioning system. The laser makes grooved marks at the desired locations and along the directions where cleaves have to be initiated and, after several processing steps, a crack appears and propagates along the crystallographic planes of the silicon wafer. This allows the chips to be cleaved automatically and with high positioning accuracy, and provides polished vertical facets of better quality than those obtained with other cleaving processes, which eases the optical characterization of photonic devices. The method has been found to be particularly useful for cleaving small chips, where manual cleaving is hard to perform, and for polymeric waveguides, whose facets are damaged or even destroyed by polishing or manual cleaving. The influence of the length of the grooved line and of the processing speed is studied for a variety of silicon chips. An application to cleaving and characterizing sol–gel waveguides is presented. The total amount of light coupled is higher than with any other procedure.
Abstract:
Fastener holes in aeronautical structures are typical sources of fatigue cracks because of the local stress concentration they induce. A very efficient solution to this problem is to establish compressive residual stresses around the fastener holes, which retard fatigue crack nucleation and its subsequent local propagation. Previous work on the application of LSP treatment to thin, open-hole specimens [1] has shown that the effect of LSP on the fatigue life of treated specimens can be detrimental if the process is not properly optimized. In fact, it was shown that the capability of LSP to introduce compressive residual stresses around fastener holes in thin-walled structures representative of typical aircraft construction was not superior to the performance of conventional techniques such as cold-working.
Abstract:
Continuous and long-pulse lasers have been used for forming metal sheets in macroscopic mechanical applications. For the manufacturing of micro-electromechanical systems (MEMS), however, the use of ns laser pulses provides suitable parameter matching over an important range of sheet components: it preserves the short interaction time scale required for the predominantly mechanical (shock) induction of deformation residual stresses and thus allows components in a medium range of miniaturization to be processed successfully without appreciable thermal deformation. In the present paper, the physics of laser shock microforming and the influence of the different experimental parameters on the net bending angle are presented.
Abstract:
Laser Welding (LW) is increasingly used in manufacturing because of its advantages, such as accurate control, good repeatability, low heat input, the possibility of joining special materials, high speed, the capability to join small parts, etc. LW lends itself to robotized manufacturing, and fabrication cells offer various levels of flexibility, from specialized robots to very flexible setups. This paper presents several LW applications using two industrial-scale manufacturing cells at the UPM Laser Centre (CLUPM) of the Universidad Politécnica de Madrid. The first, dedicated to Remote Laser Welding (RLW) of thin sheets for the automotive and other sectors, uses a 3500 W CO2 laser. The second, highly flexible and based on a 6-axis ABB robot and a 3300 W Nd:YAG laser, is meant for various laser processing methods, including welding. After a short description of each cell, several LW applications tested at CLUPM and recently implemented in industry are briefly presented: RLW of coated automotive sheets, LW of high-strength automotive sheets, LW vs. laser hybrid welding (LHW) of Double Phase steel thin sheets, and LHW of thin sheets of stainless steel and carbon steel (dissimilar joints). The main technological issues overcome and the critical process parameters are pointed out. Conclusions about achievements and trends are provided.
Abstract:
The paper presents a consistent set of results showing the ability of Laser Shock Processing (LSP) to modify the overall properties of Friction Stir Welded (FSW) joints made of AA 2024-T351. Based on laser beam intensities above 10^9 W/cm2, with pulse energies of several joules and pulse durations of nanoseconds, LSP is able to induce a compressive residual stress field, improving wear and fatigue resistance by slowing crack propagation and stress corrosion cracking, and improving the overall behaviour of the structure. After the FSW and LSP procedures are briefly presented, the results of micro-hardness measurements and transverse tensile tests, together with the corrosion resistance of the native (untreated) joints vs. the LSP-treated ones, are discussed. The ability of LSP to generate compressive residual stresses and to improve the behaviour of FSW joints is underscored.
Abstract:
Software architectural evaluation is a key discipline used to identify, at early stages of real-time system (RTS) development, the problems that may arise during its operation. Typical mechanisms supporting concurrency, such as semaphores, mutexes or monitors, often lead to run-time concurrency problems that are difficult to identify, reproduce and solve. For this reason, it is crucial to understand the root causes of these problems and to provide support to identify and mitigate them at early stages of the system lifecycle. This paper presents the results of research work oriented to the development of a tool called ‘Deadlock Risk Evaluation of Architectural Models’ (DREAM) to assess deadlock risk in architectural models of an RTS. A particular architectural style, Pipelines of Processes in Object-Oriented Architectures–UML (PPOOA), was used to represent platform-independent models of an RTS architecture, supported by the PPOOA-Visio tool. We validated the technique presented here on several case studies related to RTS development, comparing our results with those of other deadlock detection approaches supported by different tools. Here we present two of these case studies, one related to avionics and the other to planetary exploration robotics.
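As a minimal illustration of the kind of defect DREAM is meant to flag at the model level (a generic textbook deadlock in Python, not code taken from the paper or its case studies), two tasks that acquire the same pair of mutexes in opposite order may enter a circular wait depending on timing:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def task_1():
    with lock_a:              # task 1 takes A first ...
        with lock_b:          # ... then waits for B
            print("task 1 finished")

def task_2():
    with lock_b:              # task 2 takes B first ...
        with lock_a:          # ... then waits for A -> possible circular wait
            print("task 2 finished")

# Depending on the interleaving, the two tasks may deadlock, which is exactly
# why such defects are hard to reproduce at run time and worth detecting on
# the architectural model instead. Daemon threads and join timeouts are used
# here only so that the demo always terminates.
t1 = threading.Thread(target=task_1, daemon=True)
t2 = threading.Thread(target=task_2, daemon=True)
t1.start(); t2.start()
t1.join(timeout=1.0); t2.join(timeout=1.0)
print("deadlocked" if t1.is_alive() or t2.is_alive() else "completed")
```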
Abstract:
We determined the distribution of lipids (n-alkanes and n-alkan-2-ones) in present-day peat-forming plants in the Roñanzas Bog in northern Spain. Consistent with the observations of others, the alkanes of most Sphagnum (moss) species maximized at C23, whereas the other plants maximized at higher molecular weight (C27 to C31). We show for the first time that plants other than seagrass and Sphagnum moss contain n-alkan-2-ones. Almost all the species analysed showed an n-alkan-2-one distribution between C21 and C31 with an odd/even predominance, maximizing at C27 or C29, except ferns, which maximized at lower molecular weight (C21–C23). We also observed that microbial degradation can be a major contributor to the n-alkan-2-one distribution in sediments, as opposed to a direct input of ketones from plants.
Abstract:
In this paper we generalize the Continuous Adversarial Queuing Theory (CAQT) model (Blesa et al. in MFCS, Lecture Notes in Computer Science, vol. 3618, pp. 144–155, 2005) by considering the possibility that the router clocks in the network are not synchronized. We name the new model Non Synchronized CAQT (NSCAQT). Clearly, this extension of the model only affects those scheduling policies that use some form of timing. In a first approach we consider the case in which, although not synchronized, all clocks run at the same speed, maintaining constant differences. In this case we show that all universally stable policies in CAQT that use the injection time and the remaining path to schedule packets remain universally stable. These policies include, for instance, Shortest in System (SIS) and Longest in System (LIS). We then study the case in which clock differences can vary over time but the maximum difference is bounded. In this model we show the universal stability of two families of policies related to SIS and LIS, respectively (the priority of a packet under these policies depends on its arrival time and a function of the path traversed). The bounds we obtain in this case depend on the maximum difference between clocks. This is a necessary requirement, since we also show that LIS is not universally stable in systems without bounded clock difference. We then present a new policy, which we call Longest in Queues (LIQ), that gives priority to the packet that has been waiting the longest in edge queues. This policy is universally stable and, if clocks maintain constant differences, the bounds we prove do not depend on them. Finally, we provide simulation results that compare the behavior of some of these policies in a network with stochastic packet injection.
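As a rough sketch of how the timing-based policies discussed above rank packets (illustrative only; the fields and the way waiting time is accumulated are assumptions, not the formal NSCAQT definitions), the priority keys could be written as follows. Note that LIQ orders packets by locally measured queueing time, so it never compares timestamps taken by different, possibly unsynchronized clocks:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    injection_time: float   # timestamp written by the injecting router's clock
    queue_wait: float       # time spent waiting in edge queues so far

def lis_key(p: Packet) -> float:
    # Longest in System: the oldest injection timestamp goes first.
    return p.injection_time

def sis_key(p: Packet) -> float:
    # Shortest in System: the newest injection timestamp goes first.
    return -p.injection_time

def liq_key(p: Packet) -> float:
    # Longest in Queues: the longest accumulated queueing time goes first.
    return -p.queue_wait

queue = [Packet(injection_time=10.0, queue_wait=0.4),
         Packet(injection_time=12.5, queue_wait=1.1)]
next_under_lis = min(queue, key=lis_key)   # the packet injected at t = 10.0
next_under_liq = min(queue, key=liq_key)   # the packet that has waited 1.1
```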
Abstract:
The problem of fairly distributing the capacity of a network among a set of sessions has been widely studied. In this problem each session connects a source and a destination via a single path, and its goal is to maximize its assigned transmission rate (i.e., its throughput). Since the links of the network have limited bandwidths, some criterion has to be defined to distribute their capacity fairly among the sessions. A popular criterion is max-min fairness, which, in short, guarantees that each session i gets a rate λi such that no session s can increase λs without causing another session s' to end up with a rate λs' < λs. Many max-min fair algorithms have been proposed, both centralized and distributed. However, to our knowledge, all proposed distributed algorithms require control data to be transmitted continuously in order to recompute the max-min fair rates when needed (because none of them has mechanisms to detect convergence to the max-min fair rates). In this paper we propose B-Neck, a distributed max-min fair algorithm that is also quiescent. This means that, in the absence of changes (i.e., session arrivals or departures), once the max-min rates have been computed, B-Neck stops generating network traffic. Quiescence is a key design concept of B-Neck, because B-Neck routers are capable of detecting and notifying changes in the convergence conditions of the max-min fair rates. As far as we know, B-Neck is the first distributed max-min fair algorithm that does not require a continuous injection of control traffic to compute the rates. The correctness of B-Neck is formally proved, and extensive simulations are conducted; they show that B-Neck converges relatively fast and behaves well in the presence of sessions arriving and departing.
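For reference, the allocation that B-Neck computes in a distributed and quiescent way is the one produced by the classical centralized water-filling procedure sketched below (a baseline sketch under assumed data structures, not the B-Neck protocol itself): repeatedly find the most constrained link, give its fair share to every session crossing it, and remove those sessions.

```python
def max_min_fair(link_capacity, sessions):
    """Centralized water-filling computation of max-min fair rates.

    link_capacity : dict mapping link -> capacity
    sessions      : dict mapping session -> set of links on its path
    Returns a dict mapping session -> max-min fair rate.
    """
    capacity = dict(link_capacity)
    remaining = {s: set(path) for s, path in sessions.items()}
    rates = {}
    while remaining:
        # Fair share each still-used link could give to its unsaturated sessions.
        users = {l: [s for s, p in remaining.items() if l in p] for l in capacity}
        share = {l: capacity[l] / len(ss) for l, ss in users.items() if ss}
        bottleneck = min(share, key=share.get)
        r = share[bottleneck]
        for s in users[bottleneck]:
            rates[s] = r                  # sessions crossing the bottleneck saturate at r
            for l in remaining.pop(s):
                capacity[l] -= r          # consume their share on every link they cross
    return rates

# Two sessions sharing link 'a' (capacity 1) while session 2 also crosses 'b'.
print(max_min_fair({'a': 1.0, 'b': 10.0},
                   {1: {'a'}, 2: {'a', 'b'}}))   # -> {1: 0.5, 2: 0.5}
```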
Abstract:
Official reports from developing countries generally point to significant deficiencies in how information is handled in small and medium-sized enterprises (SMEs). Having automated information systems (IS) is unavoidable, but it is more important that they be successful, and end-user satisfaction is the key factor for obtaining the expected benefits. Management and IT professionals must be familiar with the main related factors in order to ensure they are properly addressed. This study evaluated end-user satisfaction and several related critical success factors in a sample of medium-sized industrial enterprises. One of the success models most widely recognized by the research community in this area was used. After carrying out the quantitative/qualitative analyses and comparing the results, we conclude that the main factor related to end-user satisfaction is information quality, which may by itself be enough to consider an IS successful; the remaining factors come second. The practical benefit of this research is to encourage reflection on these factors, to help strengthen the effectiveness and quality of IS development or acquisition processes, and to reduce their failure rate.
Abstract:
The exchange interaction of Gd adjacent to Fe has been characterized by transport measurements on a double spin valve with an Fe/Gd/Fe trilayer as the middle layer. Our measurements show that the ferromagnetism of the Gd is enhanced by the presence of the Fe, and that it remains ferromagnetic above its Curie temperature up to a thickness of no less than 1 nm adjacent to the Fe. This thickness is more than double what has been reported before. Additionally, the saturation magnetization of the thin Gd layer sandwiched in Fe was found to be half its bulk value. This reduced magnetization does not seem to be related to the proximity of the Fe but rather to the incomplete saturation of the Gd even at very high fields.