401 results for Brake Fade.
Abstract:
Egg production declines as laying hens age. One way to counteract, at least partially, this natural evolution of productive performance is induced molting. The post-molt productive performance of hens results from a physiological rejuvenation of the birds, related to the regression of the ovary and oviduct during the molt, with body-weight loss being decisive for the regression of these organs (Brake and Thaxton, 1979). In this work we studied the effects of three different diets used to induce molting (wheat bran, barley, and commercial feed supplied in restricted amounts) on live-weight loss, on the regression of the ovary and oviduct, and on subsequent productive performance, in laying hens of two commercial strains housed at two different densities (4 and 6 hens per cage). We worked with 120 hens of each strain, slaughtering 36 birds (18+18) in order to evaluate ovary-oviduct regression. The smallest weight loss occurred with wheat bran and with barley, although laying intensity (LI) in the first six weeks post-molt did not differ between treatments, except in light hens molted with wheat bran, which reached a significantly lower LI. The number of hens per cage had no significant effect on LI or on weight loss.
Abstract:
Recent years have seen a sustained tendency towards growth in the dimensions of large container ships. This has meant that port and other infrastructure used for container traffic has had to be adapted in order to provide the required services and to maintain a competitive position, so as not to lose market share. This situation implies the need for major investments in modifications to the container transport system, on account of the large volume of traffic to be handled in a short period of time. This in turn has generated a need to make provision for the probable future evolution of the ultimate dimensions that will be reached by large container ships.
Such considerations give rise to the question of what the future determinants of the growth of large container ships are, requiring an overall vision of all the factors that will apply in future years, whether as a brake on or an incentive to the growth tendency which has been seen in the past and present. In view of the fact that the theme to be dealt with and resolved relates to the future, with a forecasting horizon of some 20 years, a foresight methodology has been designed and applied so as to enable conclusions about probable future scenarios to be reached with a greater degree of objectivity. The designed methodology combines qualitative, semi-quantitative and quantitative methodological tools, which validate one another. On the basis of past and present observations, the quantitative elements enable relationships to be established and forecasts to be made. Nevertheless, such an approach loses validity more than three or four years into the future, on account of the very rapid and dynamic changes which may be seen at present in the political, social and economic spheres. The semi-quantitative and qualitative methodologies are used coherently together and allow the analysis of past and present conditions, thus obtaining quantitative results that serve for short-term projections, which, when integrated with the qualitative studies, provide results for the long term, facilitating the consideration of qualitative variables such as the increasing importance of environmental protection and the impact of piracy. The principal objective of the present thesis is "to identify the future conditions affecting the growth of large container ships and to determine possible scenarios". The thesis is structured in consecutive and related phases. The first three phases focus on the past and present in order to determine the problem to be resolved.
The background is studied in order to establish the state of knowledge about the factors and circumstances which have motivated and facilitated the growth tendency for large container ships and the methodologies that have been used. On this basis a specific foresight methodology is designed. The fourth phase, Results, is developed in distinct stages based on the previous phases, so as to resolve the problem posed and respond to the questions that arise; in this way the stated objective is reached. The fourth phase sees the application of the methodology that has been designed in order to predict possible futures. This includes analysis of the past and present factors which have caused the growth in the dimensions of large container ships up to the present. These provide the basis on which to apply the foresight methods which enable the factors that will condition the future development of such large container ships to be identified. The probable future scenarios are made up of the factors identified by expert judgement (using the Delphi technique) and validated by means of a dynamic quantitative model. This model both identifies the probable future scenario based on past and present factor trends and enables different future scenarios to be analysed as a function of future changes in the conditioning factors. Analysis of the past shows that the growth tendency up to the present for large container ships has been motivated by the growth of the world economy and the consequent increase in international trade, especially between the countries of Asia and Europe and the United States. This tendency has been favoured by the trend towards globalization and by the rapid technical evolution in ship design, which has allowed the obstacles encountered to be overcome.
It should be noted that even in periods of economic crisis, with an expectation of reduced trade, as experienced in recent years, the tendency towards increased ship dimensions has continued in search of economies of scale for the maritime transport of containers on transoceanic routes. The present investigation of the future evolution of large container ships has been carried out using a foresight methodology in which expert judgement is validated by a dynamic quantitative methodology, founded on a firm pre-foresight analysis. The methodology that has been designed permits the evaluation, with a high degree of objectivity, of the future factors that will affect the growth of large container ships for the most probable scenario expected in the next 20 years (up to 2032). The evaluation applies also to other scenarios which may arise, in the event that their component factors are modified or indeed in the light of random events. In summary, the conclusion is that the tendency for growth in large container ships over the next 20 years will be determined by: factors related to supply, which slow or halt the tendency; factors related to demand, which encourage the tendency; and, finally, external factors, which disturb the equilibrium between supply and demand. The tendency for increasing growth in large container ships will be limited or even halted by factors related to infrastructure, including the natural and man-made straits and canals used by maritime transport. In addition, the infrastructure required to serve such vessels both in port (including cranes and other equipment) and in related transport will tend to slow the growth tendency. The factors which will continue to encourage the tendency towards the growth of large container ships include world economic development, which stimulates international trade, and an increasing emphasis on environmental aspects.
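As a toy illustration of the kind of quantitative baseline such a foresight exercise starts from, the sketch below fits a linear trend to hypothetical maximum-ship-capacity figures; the years, TEU values and the `project` helper are all invented for illustration, not data from the thesis:

```python
import numpy as np

# Hypothetical (illustrative, not from the thesis) capacities of the
# largest container ships in service, in TEU, by year.
years = np.array([1995, 2000, 2005, 2010, 2015, 2020])
teu = np.array([6000, 8000, 9200, 14000, 19000, 24000])

# Fit a simple linear trend (TEU per year) as the quantitative baseline.
slope, intercept = np.polyfit(years, teu, 1)

def project(year):
    """Naive trend projection; a real foresight study would cap this with
    infrastructure constraints (canal beam/draught limits, crane outreach)."""
    return slope * year + intercept

print(round(project(2032)))
```

A pure extrapolation like this is exactly what the abstract says loses validity a few years out, which is why the thesis validates it against expert judgement and scenario analysis.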
Abstract:
The aim of this work was twofold: on the one hand, to describe a comparative study of two intelligent control techniques (fuzzy and intelligent proportional-integral (PI) control), and on the other, to try to provide an answer to an as yet unsolved topic in the automotive sector: stop-and-go control in urban environments at very low speeds. Commercial vehicles exhibit nonlinear behavior and therefore constitute an excellent platform on which to check the controllers. This paper describes the design, tuning, and evaluation of the controllers, which act on the longitudinal control of a car (the throttle and brake pedals) to accomplish stop-and-go manoeuvres. They are tested in two steps. First, a simulation model is used to design and tune the controllers; second, these controllers are implemented in the commercial vehicle, which has automatic driving capabilities, to check their behavior. A stop-and-go manoeuvre is implemented with the two control techniques using two cooperating vehicles.
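A minimal sketch of the PI side of such a comparison, assuming a generic discrete PI speed loop whose positive output drives the throttle and negative output the brake; the class name, gains and limits are illustrative assumptions, not the authors' tuning:

```python
class PIController:
    """Discrete PI speed controller (a generic sketch, not the paper's tuning).

    Positive output -> throttle command, negative output -> brake command,
    mirroring the split longitudinal actuation described in the abstract.
    """

    def __init__(self, kp, ki, dt, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, target_speed, current_speed):
        error = target_speed - current_speed
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        # Clamp and apply simple anti-windup so the integral term does not
        # keep growing while an actuator (pedal) is saturated.
        if u > self.out_max:
            u = self.out_max
            self.integral -= error * self.dt
        elif u < self.out_min:
            u = self.out_min
            self.integral -= error * self.dt
        return u

# Usage: follow a 10 km/h stop-and-go target from standstill at 10 Hz.
ctrl = PIController(kp=0.3, ki=0.05, dt=0.1)
u = ctrl.step(target_speed=10.0, current_speed=0.0)  # positive -> throttle
```

The fuzzy alternative compared in the paper replaces this fixed linear law with rules derived from human driving experience.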
Abstract:
There is clear evidence that investment in intelligent transportation system technologies brings major social and economic benefits. Technological advances in the area of automatic systems in particular are becoming vital for the reduction of road deaths. We here describe our approach to the automation of one of the riskiest autonomous manoeuvres involving vehicles: overtaking. The approach is based on a stereo vision system responsible for detecting any preceding vehicle and triggering the autonomous overtaking manoeuvre. To this end, a fuzzy-logic based controller was developed to emulate how humans overtake. Its input is information from the vision system and from a positioning-based system consisting of a differential global positioning system (DGPS) and an inertial measurement unit (IMU). Its output is the generation of action on the vehicle's actuators, i.e., the steering wheel and the throttle and brake pedals. The system has been incorporated into a commercial Citroën car and tested on the private driving circuit at the facilities of our research center, CAR, with different preceding vehicles (a motorbike, a car, and a truck), with encouraging results.
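A toy version of such a fuzzy controller, with triangular membership functions and weighted-average defuzzification; the variable ranges, rule set and output gains below are invented for illustration and are not those of the actual system:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(lateral_error_m):
    """Toy Mamdani-style rule base mapping lateral error (m) to a corrective
    steering command in arbitrary normalized units (+ = steer right).
    Sets and singleton outputs are illustrative assumptions only.
    """
    mu_left = tri(lateral_error_m, -4.0, -2.0, 0.0)    # vehicle left of target
    mu_center = tri(lateral_error_m, -2.0, 0.0, 2.0)   # on target
    mu_right = tri(lateral_error_m, 0.0, 2.0, 4.0)     # vehicle right of target
    weights = [mu_left, mu_center, mu_right]
    outputs = [0.5, 0.0, -0.5]  # corrective command opposes the error
    total = sum(weights)
    # Weighted average of singletons (a common defuzzification choice).
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0
```

The real controller fuses stereo-vision, DGPS and IMU inputs and also commands throttle and brake; this sketch only shows the fuzzification/defuzzification mechanics on a single variable.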
Abstract:
Four longitudinal control techniques are compared: a classical proportional-integral (PI) control; an advanced technique, called the i-PI, that adds an intelligent component to the PI; a fuzzy controller based on human experience; and an adaptive-network-based fuzzy inference system. The controllers were designed to tackle one of the challenging topics as yet unsolved by the automotive sector: managing a gasoline-propelled vehicle autonomously at very low speeds. The dynamics involved are highly nonlinear and constitute an excellent test-bed for newly designed controllers. A Citroën C3 Pluriel car was modified to permit autonomous action on the accelerator and the brake pedals, i.e., longitudinal control. The controllers were tested in two stages. First, the vehicle was modeled to check the controllers' feasibility. Second, the controllers were implemented in the Citroën, and their behavior under the same conditions on an identical real circuit was compared.
Abstract:
In recent years, the development of advanced driver assistance systems (ADAS), mainly based on lidar and cameras, has considerably improved the safety of driving in urban environments. These systems provide warning signals for the driver in case any unexpected traffic circumstance is detected. The next step is to develop systems capable not only of warning the driver but also of taking over control of the car to avoid a potential collision. In the present communication, a system capable of autonomously avoiding collisions in traffic-jam situations is presented. First, a perception system was developed for urban situations, in which not only vehicles but also pedestrians and other non-motor vehicles (NMVs) have to be considered. It comprises a differential global positioning system (DGPS) and wireless communication for vehicle detection, and an ultrasound sensor for NMV detection. Then, the vehicle's actuators (brake and throttle pedals) were modified to permit autonomous control. Finally, a fuzzy logic controller was implemented, capable of analyzing the information provided by the perception system and of sending control commands to the vehicle's actuators so as to avoid accidents. The feasibility of the integrated system was tested by mounting it in a commercial vehicle, with encouraging results.
Abstract:
Networks need to provide higher speeds than those offered today. To that end, given that in radio technologies spectrum is the scarcest resource, maximizing the throughput in bits per hertz transmitted is essential both in the evolution of these technologies and in new developments. Long Term Evolution (LTE) optimizes spectral efficiency with new modulations on the air interface and more advanced radio algorithms. Added to these capabilities is the fact that LTE is an end-to-end IP-based technology, which makes it possible to offer high per-user transmission rates and very low latency, i.e. network response delays of only around 10 milliseconds, so that any real-time application can be offered. LTE is the latest standard in mobile network technology and will ensure the competitiveness of 3GPP in the future; it may be considered a bridge technology between current 3G and 3.5G networks and future 4G networks, which are expected to reach speeds of up to 1 Gbit/s. LTE will provide operators with a simplified yet robust architecture, supporting services over IP technology. The objectives pursued with its deployment are ambitious: on the one hand, users will have a wide range of added services with capabilities similar to those they currently enjoy with residential broadband access, at competitive prices, while the operator will have a network based on an entirely IP environment, reducing its complexity and cost, which will give operators the opportunity to migrate to LTE directly. A major advantage of LTE is its ability to merge with existing networks, ensuring interconnection with them, increasing their current coverage and allowing a data connection established by a user in the LTE environment to continue when LTE coverage fades. Moreover, the operator has the advantage of being able to deploy the LTE network gradually, starting with the areas of high demand for broadband services and expanding it progressively in line with that demand.
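The "bits per hertz" figure that these air-interface improvements pursue is bounded by Shannon capacity; the snippet below computes that generic bound (an information-theory illustration, not an LTE-specific specification):

```python
import math

def shannon_bits_per_hz(snr_db):
    """Shannon capacity bound, log2(1 + SNR), in bit/s/Hz."""
    snr_linear = 10 ** (snr_db / 10)
    return math.log2(1 + snr_linear)

# At roughly 20 dB SNR (64-QAM territory) the ceiling is about 6.7 bit/s/Hz,
# which is why higher-order modulation only pays off at good SNR.
print(round(shannon_bits_per_hz(20.0), 2))
```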
Abstract:
Tropospheric phenomena, such as clouds and especially rain, cause higher attenuation at Ka-band than at lower frequencies. In this collaborative paper, the main results of four long-term Ka-band propagation campaigns are presented. The experiments were carried out in Ottawa, Canada (satellite Anik F2), and in Aveiro, Portugal; Madrid, Spain; and Toulouse, France (satellite HotBird 6 in the last three cases), and have been running since 2004 in Aveiro, 2006 in Ottawa and Madrid, and 2008 in Toulouse. After a brief introduction to the experiments, rain rate and excess attenuation results are discussed, first for a common two-year measurement period and then for the whole available database. Seasonal attenuation statistics for Madrid, Ottawa and Aveiro are compared. Finally, fade duration and fade slope statistics derived at three locations are presented and discussed.
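The fade-duration statistic mentioned above can be sketched as counting maximal runs of samples whose excess attenuation stays above a threshold; this is a simplified illustration, and the campaign's actual processing (e.g. following ITU-R practice) handles sampling and event filtering in more detail:

```python
def fade_durations(attenuation_db, threshold_db, sample_period_s=1.0):
    """Durations (s) of fade events: maximal runs of consecutive samples
    whose excess attenuation exceeds the threshold."""
    durations, run = [], 0
    for a in attenuation_db:
        if a > threshold_db:
            run += 1
        elif run:
            durations.append(run * sample_period_s)
            run = 0
    if run:  # close an event still open at the end of the series
        durations.append(run * sample_period_s)
    return durations

# Illustrative 1 Hz attenuation series (dB), not campaign data.
series = [0.5, 3.2, 4.1, 2.9, 0.8, 5.0, 6.2, 0.4]
print(fade_durations(series, threshold_db=3.0))  # → [2.0, 2.0]
```

Fade slope statistics are computed analogously from the time derivative of the same attenuation series.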
Abstract:
The proliferation of automotive electronics has greatly improved driving safety and comfort. Since the beginning of the 20th century, research into active safety systems has resulted in the development of technologies such as ABS (Antilock Brake System), TCS (Traction Control System) and ESP (Electronic Stability Program). The deployment cost of these systems is critical: historically, they have been widely adopted only when the price of the sensors and electronics needed to build them has fallen to a marginal value. Nowadays, motor vehicles include a wide range of sensors to implement the safety functions. The incorporation of systems capable of detecting water, ice or snow on the road is an additional factor that could help avoid risky situations. There are some practical implementations capable of detecting wet, icy and snowy roads, although with important limitations. In this PhD thesis, a novel approach is proposed, based on the analysis of the tyre/road noise radiated during driving. The tyre/road noise is captured and pre-processed. It is then analysed using a Support Vector Machine (SVM) based classifier to output an estimation of the road status. All these operations are performed on board. The proposed system was developed and evaluated using Matlab, showing success rates greater than 90%. A real-time implementation was carried out using a DSP-based prototype. Several optimizations were then introduced, enabling the system to work on a low-cost general-purpose microcontroller. Finally, a microcontroller-based hardware implementation was developed. This implementation is tightly integrated with the vehicle ECUs, allowing it to obtain data captured by the vehicle's sensors and to send the road status estimations. The resulting system has been patented, and is notable for its high hit rate together with its small size, low power consumption and low cost.
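A minimal sketch of the SVM classification stage using scikit-learn; the two-dimensional feature values, class set and clean separability below are synthetic stand-ins, since the thesis' real features come from the on-board pre-processing of tyre/road noise:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for tyre/road noise features (e.g. band energies):
# two well-separated clusters playing the roles of 'dry' and 'wet' roads.
dry = rng.normal(loc=[1.0, 0.2], scale=0.1, size=(50, 2))
wet = rng.normal(loc=[0.3, 0.9], scale=0.1, size=(50, 2))
X = np.vstack([dry, wet])
y = np.array([0] * 50 + [1] * 50)  # 0 = dry, 1 = wet

# Train an RBF-kernel SVM and classify two new feature vectors.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
pred = clf.predict([[0.95, 0.25], [0.35, 0.85]])
```

In the deployed system this classifier would run on the embedded target; the point here is only the shape of the train/predict pipeline.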
Abstract:
Although previous studies report on the effect of street washing on ambient particulate matter levels, there is a lack of studies investigating the effect of street washing on the emission strength of road dust. A sampling campaign was conducted in the Madrid urban area during July 2009, in which road dust samples were collected at two sites: a Reference site (where the road surface was not washed) and the Pelayo site (where street washing was performed daily during the night). Following the chemical characterization of the road dust particles, the emission sources were resolved by means of Positive Matrix Factorization, PMF (Multilinear Engine scripting), and the mass contribution of each source was calculated for the two sites. Mineral dust, brake wear, tire wear, carbonaceous emissions and construction dust were the main sources of road dust, with mineral and construction dust being the major contributors to the inhalable road dust load. To evaluate the effectiveness of street washing on the emission sources, the source mass contributions at the two sites were compared. Although brake wear and tire wear had lower concentrations at the site where street washing was performed, these mass differences were not statistically significant, and the temporal variation did not show the expected build-up after dust removal. It was concluded that the washing activities resulted merely in a moistening of the road dust, without effective removal, and that mobilization of particles took place in the few hours between washing and sampling. The results also indicate that it is worth paying attention to the dust dispersed from construction sites, as it affects the emission strength in nearby streets.
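The source-apportionment step can be illustrated with a non-negative factorization; the sketch below uses scikit-learn's NMF as a freely available analogue of PMF (PMF additionally weights each measurement by its uncertainty) on a synthetic species-by-sample matrix built from two invented source profiles:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Two hypothetical source chemical profiles (rows = sources, cols = species),
# loosely playing the roles of e.g. mineral dust vs. brake wear.
profiles = np.array([[5.0, 1.0, 0.2, 0.1],
                     [0.2, 0.1, 3.0, 2.0]])

# 30 samples mixing the two sources in varying proportions, plus small noise.
contrib = rng.uniform(0.5, 2.0, size=(30, 2))
X = contrib @ profiles + rng.uniform(0, 0.05, size=(30, 4))

# Plain NMF is used here only as a non-negative analogue of PMF.
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)  # source contributions per sample
H = model.components_       # recovered source chemical profiles
```

In a real PMF study the recovered profiles are then matched to known source signatures (brake wear, tire wear, mineral dust, etc.) before comparing mass contributions between sites.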
Abstract:
La Tesis decodifica una selección de veinte proyectos representativos de Sejima-SANAA, desde su primer proyecto construido, la Casa Platform I, 1987, hasta el Centro Rolex, 2010, año en que Sejima y Nishizawa –SANAA- reciben el Premio Pritzker. De los veinte proyectos once son de Sejima: Casa Platform I, Casa Platform II, Residencia de Mujeres, Casa N, Pachinco Parlor I, Villa en el Bosque, Comisaría en Chofu, Casa Y, Apartamentos en Gifu, Edificio de equipamientos en la Expo Tokio 96, Pachinko Parlor III; y nueve de SANAA: edificio Multimedia en Oogaki, estudio de viviendas metropolitanas,Park Café en Koga, De Kunstlinie en Almere, Museo de Kanazawa, Pabellón de Toledo, Escuela de Zollverein, Casa Flor y Centro Rolex. La decodificación lee la obra de Sejima-SANAA a la inversa para ‘reconstruir’, en un ejercicio de simulación ficticia, una versión verosímil y coherente de los que podrían haber sido sus procesos proyectuales; podrían, porque los verdaderos son imposibles de dilucidar. Los que se proponen se pretenden exclusivamente verosímiles y plausibles. Con ello se pretende contribuir al entendimiento y comprensión de la arquitectura de Sejima-SANAA y, tangencialmente y en menor medida, a la teoría sobre el ejercicio proyectual arquitectónico. La decodificación se centra en dos aspectos concretos: la forma arquitectónica y el papel proyectual de la estructura portante. Ambas decodificaciones se extienden inevitablemente a otros aspectos relacionados, como, por ejemplo, la naturaleza del espacio arquitectónico. El procedimiento de investigación partió de una descripción objetiva y pormenorizada de los significantes formales y estructurales de cada proyecto desde su propia configuración física y geométrica. Esa descripción ‘objetiva’, llevada al límite, permitió que afloraran estructuras conceptuales y lógicas subyacentes de cada proyecto. 
Unida a interpretación crítica, –mediante su relación y confrontación con otras arquitecturas y otros modos de hacer conocidos- permitió trazar la reconstitución ficticia que persigue la decodificación. Ese trabajo se materializó en veinte ensayos críticos y se acompañó de un conjunto de otros textos sobre temas sugeridos o reclamados por el proceso de investigación. El conjunto de todos esos textos constituye el material de trabajo de la tesis. A partir de ahí, con una visión de conjunto, la tesis identifica una trayectoria de estrategias formales y una trayectoria de estrategias proyectuales relacionadas con lo portante. Juntas conforman el grueso de la tesis que se expone en los cuatro capítulos centrales. Los precede un capítulo introductorio que expone el recorrido biográfico de K. Sejima y la trayectoria profesional de Sejima-SANAA; y los siguen de unos textos transversales sobre forma, lugar y espacio. La tesis termina con una síntesis de sus conclusiones. Las estrategias formales se exponen en tres capítulos. El primero, ‘Primeras estrategias formales’ agrupa proyectos de la primera etapa de Sejima. El segundo capítulo está dedicado enteramente al proyecto de los apartamentos en Gifu, 1994-98, que según esta tesis, supuso un importante punto de inflexión en la trayectoria de Sejima; tanto el tercer capítulo lleva por nombre ‘Estrategias formales después de Gifu’ y recoge los proyectos que le siguieron. Las ‘Primeras estrategias formales’, varias y balbucientes, se mueven en general en torno a dos modos o procedimientos de composición, bien conocidos: por partes y sistemático. 
The latter initiates in SANAA's trajectory an aspect that will remain relevant from this point on: understanding the project as a generic proposal in which, beyond its specific and tangible reality, there lies a logic, particular to each project, that can be extrapolated to other places, other dimensions, and even other programs: each project could give rise to other projects of the same family. Systematic composition includes, among others, the Platform II House, based on the definition of a constructive element and the formulation of rules of repetition and possible modes of grouping. It also includes the Saishunkan Seiyaku Women's Dormitory, the project that launched Sejima to international fame, which would also be a system, but of a different kind: one based on the regular repetition of a series of elements along a directrix, generating a hypothetical infinite container of which the project would be only a fragment. The formal strategy of the Gifu building would deepen the generic ambition of the project by adopting the logic of a game. The project would be one round of the game, but not the only possible one; others could be played. This game hypothesis is tested in 'The Game of Gifu', which, after formulating the game by identifying its elements (board and pieces), rules, and procedures, plays one round: the one that would have given rise to the building designed by Sejima. Gifu extends the concept of 'repeating' a constructive element to that of repeating a spatial pattern, which entails the decoupling of form and function and a new concept of flexibility, one that no longer refers to the flexible use of the constructed building but belongs to the design moment at which specific functions are assigned to the spatial patterns. This thesis proposes that this assignment of functions would be one of the last links in the design process, the opposite of the modern premise that 'form follows function'. 
The formal strategies 'after Gifu' also follow game logics, but each strategy responds to a different game, as their names suggest: 'Game Boards', present with varying degrees of maturity in several projects; 'Elements from a Catalogue', in the Kanazawa Museum; 'Aprioristic Form', in the Flower House; and 'Repetition of a topological situation', in the Rolex Center. All these strategies, or games, share aspects concerning architectural form, precisely the Gifu aspects: repetition applied to the spatial pattern and what it entails, namely the decoupling of form and function and the new meaning of flexibility. 'Game Boards' consists in configuring each project system (structure, enclosures, partitions, and furniture) by choosing elements offered by a base geometry, particular to each project and generally reticular: intersections, lines, modules. Each system is configured, in principle, without subordination to any of the others; when such subordination is unavoidable, the game determines that the load-bearing system may not materialize the base geometric order, which means that it does not play the dominant role. 'Game Boards' thus transgresses the logic of the modern free plan: the structure neither reflects nor reveals the base order, and the systems do not respect the hierarchical, chained relations of subordination that the free plan established. This strategy leads to very different solutions and formal projects: Ogaki and Park Café, which would present incipient 'Game Boards'; De Kunstlinie in Almere and the Zollverein School, which would present a consolidation of the strategy; and the Glass Pavilion in Toledo, which would result from its subversion. 
The latter project, moreover, takes the concept of repetition beyond the constructive element and the spatial pattern (which in this case takes the form of a bubble) to end up affecting the visitor's own experience: wherever visitors stand, they always have the sensation of being in the same place. This thesis calls that repetitive space 'mantra space'. The 'Elements from a Catalogue' strategy is illustrated by the Kanazawa Museum. Its logic starts from the definition of a very small series of elements and rests on the enormous number of possible combinations among them. Gifu would have foreshadowed the catalogue of elements in the characterization of its spatial patterns. The 'Aprioristic Form' strategy is illustrated by the Flower House. The decision on the type of form, in this case that of an amoeba, would stand at the beginning of the design process, which does not mean that the form is arbitrary: the amoeba form implies the repetition of a spatial pattern (the pseudopod) and an apotheosis of the concept of repetition which, reaching the spatial experience, gives rise to a repetitive or mantra space. 'Mantra space' is one of the leitmotifs used as an argument in the last formal strategy the thesis decodes: the Rolex Center. With respect to the load-bearing structure, the thesis identifies and traces a trajectory of five design strategies: preeminence, concealment, dissolution, disappearance, and denaturalization. --Concealment reduces the dominant role of the structure. At first the concealment is literal, almost a covering of the structural elements, as in Gifu; later it becomes more sophisticated, such as concealment by camouflage or the paradoxical concealment by multiplication of Park Café. --Dissolution diminishes the dominant condition of the structure which, instead of being configured as a unitary or homogeneous system, is fragmented into several subsystems. 
--Disappearance refers to structures that disappear as systems in their own right, to projects in which the load-bearing function is performed by other systems, such as the partitions. Disappearance culminates in the Flower House, whose perimeter performs the load-bearing function and is moreover transparent, dematerialized: the structure has become invisible, it has disappeared. --Denaturalization refers to structures that do present themselves as autonomous systems in their own right but cease to play a preeminent role insofar as they do not materialize the base order; this strategy is correlative to the 'Game Boards' formal strategy. The conclusions of the thesis lie in the very organization of the thesis: the identification of the strategies. Even so, six are presented as epilogues. The first two underline the guiding thread of the work, which lies in the generic quality of Sejima-SANAA's design strategies. The following four elucidate to what extent their projects contain formal and/or structural features or signifiers that are in turn characteristic signals of the contemporary architectural panorama, and pose the key question: are there any that, reaching further, constitute original contributions? --As original contributions the thesis highlights: the identification between the generic ideal and the concrete project; and the proposal of a new, hybrid space, a kind of intermediate stage between the subdivided, compartmentalized space of tradition and the modern continuum. --As symptoms of contemporaneity it highlights: with respect to form, the transfer of formal specificity from the part to the whole; and with respect to structure, the contemporary tendency towards ever lighter structures that tend towards the evanescent. 
This last tendency, towards structural evanescence, could itself qualify as an original contribution: not for nothing does the disappearance of the structure carry evanescence to its ultimate consequences and, in the case of structures with a physical presence, make them cease to be the ordering system that orchestrates the design process. ABSTRACT The Thesis decodes a selection of twenty representative Sejima-SANAA projects, from the first one built, the Platform I House in 1987, to the Rolex Center in 2010, the year in which Sejima and Nishizawa (SANAA) received the Pritzker Prize. Eleven projects are by Sejima: Platform I, Platform II, Saishunkan Seiyaku Women's Dormitory, N House, Pachinko Parlor I, Villa in the Forest, Police Box at Chofu Station, Y House, Gifu Kitagata Apartments, World City Expo '96 Facilities Building, and Pachinko Parlor III; and nine by SANAA: Multimedia Workshop in Ogaki, Metropolitan Housing Studies, Park Café in Koga, De Kunstlinie in Almere, Kanazawa Museum, Glass Pavilion at the Toledo Museum of Art, Zollverein School, Flower House, and the Rolex Center. This decoding reads Sejima-SANAA's projects in reverse, aiming 'to reconstruct', in a fictitious simulation exercise, a plausible and coherent version of what their design processes could have been; 'could', because the true ones are impossible to elucidate. The ones proposed here claim only to be plausible and reasonable. In doing so, the Thesis tries to contribute to the understanding of Sejima-SANAA's architecture and, tangentially and to a lesser extent, to the theory of the architectural design exercise. The decoding centers on two specific aspects: architectural form and the design role of the load-bearing structure. Both decodings inevitably extend to other related aspects such as, for example, the nature of space. 
The research procedure began by carrying out an objective and detailed description of the formal and structural signifiers of each project, viewed from their physical and geometric configuration. Taken to the limit, these 'objective' descriptions allowed the conceptual structures and underlying logics of each project to emerge. Together with critical interpretations, which related and confronted them with other architectures and well-known ways of working, it became possible to outline the intended fictitious reconstruction. The descriptive-analytical work materialized in twenty critical essays, accompanied by a set of other essays on subjects suggested or demanded by the research process. Together, all these texts were the material basis on which the thesis was built. Looking at the whole, the thesis identifies two related design trajectories: a trajectory of formal strategies and a trajectory of strategies having to do with structural systems and components. Together they constitute the bulk of the thesis, presented in the four central chapters. Preceding them is an introductory chapter outlining the biography of Kazuyo Sejima and the professional trajectory of Sejima-SANAA, and following them is another containing transversal texts on form, place, and space. The thesis ends with a synthesis of conclusions. The formal strategies are presented in three chapters. The first, 'Early formal strategies', groups Sejima's first-phase projects. The second, 'Formal strategies of Gifu's paradigm', is entirely dedicated to the Gifu apartments project, 1994-98, which according to this thesis marked an important turning point in Sejima's trajectory; so much so that the third chapter is named 'Formal strategies after Gifu' and gathers the selected projects that followed it. 
The 'Early formal strategies', diverse and tentative, generally revolve around two well-known composition methods: 'composition by parts' and 'systematic composition'. The latter initiates in SANAA's trajectory an aspect that will remain relevant from here on: understanding the project as a specific instance of a generic proposal in which, beneath and beyond the project's tangible reality, lies a logic that could be applied at other places, at other dimensions, even with other programs; from each project, other projects of the same family could arise. The projects using systematic composition include, among others, the Platform II House, based on the definition of a constructive element and of rules governing its replicas and their possible groupings. They also include the Saishunkan Seiyaku Women's Dormitory, the project that launched Sejima to international fame, which can also be seen as a system, but of a different kind: one based on the regular repetition of a series of elements along a directrix, generating a hypothetical infinite container of which the project would be only a fragment. The formal strategy of the Gifu apartments building would push further towards the generic project concept by adopting the logic of a game. The project would be one round, one play, but not the only possible one; others could be played. The thesis tests this game hypothesis by formulating 'The Game of Gifu', identifying its elements (board and pieces), rules, and procedures, and then playing the round from which the building designed by Sejima would have arisen. 
Gifu extends the concept of 'repeating a constructive element' to that of 'repeating a spatial pattern', with all that this implies: the decoupling of form and function, and a new concept of flexibility that no longer refers to the flexible use of the constructed building but to the design moment at which specific functions are assigned to the spatial patterns. This thesis proposes that this allocation of functions would be one of the last steps in the design process, quite opposite to the modern premise that 'form follows function'. The formal strategies after Gifu also follow a game logic; but, as their names reveal, each strategy responds to a different game: 'Game Boards', present with different degrees of maturity in several projects; 'Elements from a Catalogue', in the Kanazawa Museum; 'Aprioristic Form', in the Flower House; and 'Repetition of a topological situation', in the Rolex Center. All of these strategies, or games, share aspects of architectural form that were already present, precisely, in Gifu: repetition of spatial patterns, decoupling of form and function, and a new meaning of flexibility. --'Game Boards' consists in setting up a base geometry, particular to each project and generally reticular, and giving form to each project system (structure, enclosures, partitions, and furniture) by choosing elements it offers: intersections, lines, modules. Each system is formed, in principle, without subordination to any of the others; when subordination is unavoidable, the game rules determine that the load-bearing system may not be the one that materializes the base geometric order, which means that it does not exert the dominant role. 'Game Boards' therefore transgresses modern logic: the structure neither reflects nor reveals the base order, and the systems do not respect the hierarchical, chained subordination relations that the 'free plan' called for. 
'Game Boards' leads to quite different solutions and formal projects: the Ogaki and Park Café projects show incipient game boards; De Kunstlinie in Almere and the Zollverein School present consolidations of the strategy; and the Glass Pavilion in Toledo results from subverting it. In addition, the Toledo project takes the concept of repetition beyond the constructive element and the spatial pattern (in this case bubble-shaped) to end up affecting the visitor's own experience: wherever visitors stand, they feel they are always in the same place. This thesis calls that repetitive space 'mantra space'. --'Elements from a Catalogue' is illustrated by the Kanazawa Museum. Its logic starts from the definition of a very small series of elements and rests on the huge number of possible combinations among them. The catalogue approach was foreshadowed in the Gifu project in the characterization of its spatial patterns. --'Aprioristic Form' is illustrated by the Flower House. The decision on the type of form, in this case that of an amoeba, would stand at the beginning of the design process, which does not mean that the form is arbitrary: the amoeba form implies the repetition of a spatial pattern (the pseudopod) and an apotheosis of the concept of repetition which, embracing the spatial experience, gives rise to a repetitive or mantra space. 'Mantra space' is one of the leitmotifs used as an argument in the last formal strategy the thesis decodes: the Rolex Center. With respect to the design strategies of the load-bearing structure, the thesis identifies and traces a trajectory of five strategies: preeminence, concealment, dissolution, disappearance, and denaturalization. --Preeminence is present in Sejima's first works, in which she resorts to structures that play a dominant role in the project insofar as they embody the greater scale and/or materialize the base geometric order. 
In later works that preeminence is inverted, the projects aiming towards its opposite: lighter, slighter, smaller structures. --Concealment reduces the dominant role of the structure. At the outset the concealment is literal, almost a hiding of the structural elements, as in Gifu; later it becomes more sophisticated, such as concealment by camouflage or the paradoxical concealment by multiplication in the Koga Park Café. --Dissolution diminishes the dominant condition of the structure: instead of being configured as a unitary or homogeneous system, it is fragmented into several subsystems. --Disappearance refers to structures that fade away as independent, self-contained systems; projects in which the load-bearing function is carried out by other systems, such as the partitions. Disappearance reaches its zenith in the Flower House, whose perimeter functions structurally while being, in addition, transparent, immaterial: the structure has become invisible, it has disappeared. --Denaturalization refers to structures that do appear as independent systems but no longer play a preeminent role, inasmuch as they do not materialize the base order. This strategy correlates with the 'Game Boards' formal strategy. The conclusions of the thesis are shown by its very organization: the identification of the different strategies. Even so, as epilogues, the thesis presents six conclusions. The first two emphasize the guiding thread of the work, rooted in the generic quality of Sejima-SANAA's design strategies. The following four expound to what extent their projects show features, or formal and/or structural signifiers, that can also be read as characteristic signals of the contemporary architectural panorama, and raise the key question: reaching further, may some of them be taken as original contributions? 
--As original contributions the conclusions highlight: the identification between the generic ideal and the concrete project; and the proposal of a new, hybrid space, a kind of intermediate stage between the traditional subdivided, compartmentalized space and the modern continuum. --As symptoms of contemporaneity: in relation to form, they highlight the transfer of formal specificity from the part to the whole; and in relation to structure, they underscore the contemporary tendency towards lighter, increasingly slim structures that tend towards the evanescent. This last tendency, towards structural evanescence, could itself be an original contribution: not for nothing does the disappearance of the structure carry evanescence to its ultimate consequences and, in the case of structures with a physical presence, make them cease to be the ordering system orchestrating the design process.
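The 'Game of Gifu' reading above — a board, repeatable pieces, and functions assigned only as a last step — can be sketched as a toy program. Everything in the sketch (board size, piece names, the function list) is a hypothetical stand-in for illustration, not the actual rules the thesis formulates.

```python
# Toy model of a 'game' design strategy: fill a board by repeating spatial
# patterns first, and only afterwards assign specific functions to them.
# All names and numbers below are hypothetical placeholders.
from dataclasses import dataclass
import random

@dataclass(frozen=True)
class SpatialPattern:
    name: str   # a repeatable room-sized unit, e.g. one bay of a slab
    bays: int   # how many grid bays the pattern occupies

def play_round(board_bays: int, pieces: list[SpatialPattern], seed: int = 0) -> list[str]:
    """Fill the board by repeating spatial patterns; the layout is fixed
    before any use is named (form is decided first)."""
    rng = random.Random(seed)
    layout: list[str] = []
    used = 0
    while used < board_bays:
        piece = rng.choice([p for p in pieces if p.bays <= board_bays - used])
        layout.append(piece.name)
        used += piece.bays
    return layout

def assign_functions(layout: list[str], functions: dict[str, str]) -> list[str]:
    """Only as a last step are specific functions assigned to the patterns,
    inverting 'form follows function'."""
    return [f"{p} -> {functions[p]}" for p in layout]

pieces = [SpatialPattern("room", 1), SpatialPattern("terrace", 1), SpatialPattern("living", 2)]
layout = play_round(board_bays=8, pieces=pieces)
print(assign_functions(layout, {"room": "bedroom", "terrace": "void", "living": "dining"}))
```

Changing the seed, board size, or piece set plays a different round of the same game, which is the sense in which each project would be one instance of a generic proposal.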
Abstract:
Several Architecture Description Languages (ADLs) are emerging as models to describe and represent system architectures. Among them, the EAST-ADL language stands out; it represents an abstraction of embedded software systems for automobiles. Given the need to implement the EAST-ADL language, many modeling tools exist to perform this task. The scope of this thesis is a detailed comparison of three EAST-ADL editors: Papyrus, EATOP and MetaEdit+, providing a conceptual framework, describing the comparison criteria, and finally illustrating them with the Brake-By-Wire use case that was provided, whose development is not the subject of this project. The motivation for this project is to provide a comparative guide to these three modeling tools, to facilitate developers' choice when deciding which tool to use for their work. RESUMEN. Various Architecture Description Languages (ADLs) are emerging as models to describe and represent system architectures. Among them, the EAST-ADL language stands out; it represents an abstraction of embedded software systems for automobiles. Given the need to implement the EAST-ADL language, various modeling tools have emerged to carry out this task. The scope of this project is a detailed comparison of three EAST-ADL editors: Papyrus, EATOP and MetaEdit+, providing a conceptual framework, describing the comparison criteria, and finally illustrating them with the Brake-By-Wire use case that was provided to us, whose development is not part of this project. The motivation for this project is to provide the user with a comparative guide to these three modeling tools, to facilitate their choice when developing their work.
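A comparison of editors against a set of criteria, as described above, is often aggregated into a weighted scoring matrix. The sketch below shows one such matrix; the criteria, weights, and scores are placeholders for illustration only, not the project's actual comparison results.

```python
# A minimal weighted-criteria comparison matrix for three EAST-ADL editors.
# Criteria, weights and scores are hypothetical placeholders, not findings.

CRITERIA = {          # criterion -> weight (weights sum to 1.0)
    "EAST-ADL coverage": 0.4,
    "usability": 0.3,
    "interoperability": 0.3,
}

SCORES = {            # tool -> criterion -> score on a 0-5 scale (placeholders)
    "Papyrus":   {"EAST-ADL coverage": 4, "usability": 3, "interoperability": 4},
    "EATOP":     {"EAST-ADL coverage": 5, "usability": 2, "interoperability": 3},
    "MetaEdit+": {"EAST-ADL coverage": 3, "usability": 4, "interoperability": 2},
}

def weighted_total(tool: str) -> float:
    """Aggregate one tool's criterion scores into a single weighted figure."""
    return sum(SCORES[tool][c] * w for c, w in CRITERIA.items())

ranking = sorted(SCORES, key=weighted_total, reverse=True)
for tool in ranking:
    print(f"{tool}: {weighted_total(tool):.2f}")
```

Adjusting the weights to a given team's priorities re-ranks the tools, which is the practical use of such a guide.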
Abstract:
This research has focused on the helical staircase with an open well, also known as the 'caracol de mallorca', in its first appearances at the end of the 15th century and the beginning of the 16th century, during the so-called Late Gothic period. This apparently simple element nevertheless presents a great diversity of geometric schemes. Even today, there is no explanation of why some spread more widely than others. The aim of this work has been to assess the role that comfort and safety of use, as well as economy of labour (ease of execution and economic cost), played in the spread of this architectural element. On the basis of these objective criteria, this study has analysed the geometry of each of the standard schemes that solve the formation of the central well characteristic of this type of staircase, with the intention of determining the objective causes that may have motivated their unequal spread. Once the theoretical framework was established, two lines of work were set out to address the problem. On the one hand, the analysis of the theoretical models recorded in the various stonecutting texts written by Spanish authors from the end of the 16th century onwards. Although by the time they were written Gothic architecture as a system had disappeared, some of its most characteristic elements, such as the spiral staircase, were still in use. The traces of the open-well spiral staircase recorded in these texts usually describe the most frequent geometric solutions and even the best-known built examples, most of which were erected in the period covered by this research. On the other hand, this study analyses built models existing in the Ciudad y Tierra de Segovia. 
The reason for these geographical limits is that most of the Gothic heritage in this area was built between the end of the 15th century and the beginning of the 16th century. Both groups, theoretical models and built models, gather the Gothic legacy in the construction of this architectural element. Each of the two lines followed a specific methodology adapted to its object of analysis, although both are based on a first individual analysis and a subsequent comparative analysis of the selected models. These analyses made it possible to identify the different schemes used to solve the central well, which allowed a general classification into three groups: the open-well staircase with a radial solution, with a non-radial solution, and with a tangential solution. From the data and results of the comparative analysis of the geometric parameters affecting comfort, safety of use, and economy of labour, we can conclude that, although the three schemes solve the open-well staircase in very similar ways, the predominance of the radial solution and the falling into oblivion of the tangential and non-radial solutions were justified. ABSTRACT This work focused on the helical staircase with an open well, also known as the mallorca staircase, in the late 15th and early 16th centuries. This seemingly simple element nevertheless presents a wide variety of geometric designs. To date, there is no explanation of why some of them saw limited use while others had a wide impact. The aim of this study was to assess the influence that comfort, safety, and work economy (ease of construction and financial cost) had on their spread. Following these objective criteria, this study analysed the geometry of the different designs that solve the characteristic inner aperture of this type of staircase, 
and thus find the reasons that motivated their uneven spread. Once the theoretical framework was developed, two research lines were set up to address the problem. On the one hand, this study analysed the theoretical helical staircases described in the early Spanish stonecutting texts, written from the end of the 16th century onwards. Although these texts were written after Gothic architecture as a system had disappeared, some of its more representative elements, like the helical staircase, were still in use. The traces of the helical stair included in these texts describe the most frequently used designs and even the most famous built examples, which were usually constructed in the Late Gothic period. On the other hand, this study analysed real helical staircases constructed in the Ciudad y Tierra de Segovia. The reason for choosing these geographical limits is that most of the Gothic buildings of this area were constructed between the end of the 15th and the early 16th centuries, during the Late Gothic period. Both groups, theoretical and built examples, gather the Gothic construction tradition of the helical stair. Each of these research lines followed a specific methodology adapted to its research object. Nevertheless, both are based on a first individual analysis of each selected example and a later comparative one. These analyses allowed the identification of the different layouts that solve the inner aperture of the helical staircase, which made it possible to set up a broad classification into three groups, according to the geometric design strategy that solves the central aperture: the helical staircase with a radial solution, with a non-radial solution, and with a tangent solution. 
The findings of the comparative analysis of their geometric parameters, which affect comfort, safety, and work economy, have shown that, although all of the schemes solve the helical staircase in a very similar way, the fading into oblivion of the non-radial and tangent approaches was objectively motivated.
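Comfort assessments of stair geometry of the kind described above are commonly expressed through the classical Blondel step formula (2 × rise + going ≈ 0.63 m), evaluated at the walking line, where the going of a winding stair varies with the radius. The sketch below illustrates that check; the radii, rise, tread count, and walking-line offset are hypothetical example values, not measurements from the surveyed staircases.

```python
# Comfort check for a winding (helical) stair using Blondel's step formula,
# evaluated at the walking line. Example dimensions are hypothetical.
import math

def going_at_radius(treads_per_turn: int, radius_m: float) -> float:
    """Arc length of one tread at a given radius (the going varies across
    the width of a winding stair: narrow near the eye, wide at the outside)."""
    return 2 * math.pi * radius_m / treads_per_turn

def blondel(rise_m: float, going_m: float) -> float:
    """Blondel's comfort figure 2*rise + going; values near 0.63 m
    are traditionally considered comfortable."""
    return 2 * rise_m + going_m

# Example: open-well stair, outer radius 1.0 m, 16 treads per turn,
# 0.17 m rise; walking line taken roughly 0.35 m in from the outer edge.
walking_line = 1.0 - 0.35
g = going_at_radius(16, walking_line)
print(f"going at walking line: {g:.3f} m, Blondel figure: {blondel(0.17, g):.3f} m")
```

Running the same check along the inner edge, near the eye, shows why the well radius matters: the going shrinks with the radius, so schemes differ in how much comfortable tread width they offer.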
Abstract:
Dynamic phenomena can endanger the integrity of aerospace structures, and engineers have developed different strategies to analyse them. One of the great problems in engineering is how to approach a structural dynamics problem. This thesis presents various dynamic phenomena and proposes methods to estimate or simulate their behaviour through deterministic and random parametric analyses. The problems range from simple cases with few degrees of freedom, which serve to analyse the different strategies and tools to be used, to highly dynamic phenomena involving nonlinear behaviour, damage, and failure. The first research examples cover a wide range of dynamic phenomena, such as the vibration analysis of mass elements, including impacts and contacts, and the analysis of a beam under harmonic load, to which random parameters are added to represent ignorance of or uncertainty about their values. Throughout the thesis, concepts are introduced and different methods are applied, such as the finite element method (FEM), solved with both implicit and explicit schemes, and parametric and statistical analysis methods based on the Monte Carlo technique. Later, once the analysis tools and strategies are in place, more complex phenomena are studied, such as low-velocity impact on composite materials, where the aim is to evaluate the residual strength and, therefore, the damage tolerance of the structure. This is an event that can be caused by a dropped tool, hail, or debris on the runway. Another phenomenon analysed also occurs at an airport: the collision with a frangible device, which has to break under certain loads and yet withstand others. 
Finally, all the proposed methodology is applied to simulate and analyse a possible in-flight incident: the blade-loss phenomenon of a turboprop. This is a very particular event in which the structure has to withstand complex and exceptional loads under which the aircraft must be able to complete the flight successfully. The analysis includes nonlinear behaviour, damage, and several types of failure, and seeks to identify the key parameters in the failure sequence. The event is analysed using the more usual deterministic structural analyses and also using other techniques, such as the Monte Carlo method, with which different uncertainties in the parameters can be studied through random variables. Among the parameters studied are the size of the lost blade, the speed and the moment at which the breakage occurs, and the stiffness and strength of the engine mounts. Even the structural damping of the system is taken into account. The different analysis strategies yield valuable and interesting results that have been the subject of several publications. ABSTRACT Dynamic phenomena can endanger the integrity of aerospace structures and, consequently, engineers have developed different strategies to analyze them. One of the major engineering problems is how to deal with structural dynamics. In this thesis, different dynamic phenomena are introduced and several methods are proposed to estimate or simulate their behaviour. The analysis is carried out through parametric, deterministic, and statistical methods. The problems range from simple cases with few degrees of freedom, used to develop different strategies and tools to solve them, to very dynamic phenomena containing nonlinear behaviour, damage, and failures. 
The first examples cover a wide variety of dynamic phenomena, such as the vibration analysis of mass elements, including impacts and contacts, and beam analysis with a harmonic load applied, in which random parameters are included. These parameters can represent the unawareness of or uncertainty about certain variables. During the development of the thesis several concepts are introduced and different methods are applied, such as the finite element method (FEM), solved through implicit and explicit schemes, and parametric and statistical methods using the Monte Carlo analysis technique. Next, once the tools and strategies of analysis are set out, more complex phenomena are studied. This is the case of a low-speed impact on composite materials, in which the residual strength of the structure is evaluated and, therefore, its damage tolerance. This incident may be caused by a dropped tool, hail, or debris thrown up on the runway. A collision between an airplane and a frangible device, which may also occur at an airport, is analysed as well: the device must break under certain loads yet withstand others. Finally, all the considered methodology is applied to simulate and analyse a flight incident: the blade-loss phenomenon of a turboprop. In this particular event the structure must support complex and exceptional loads, and the aircraft must be able to complete the flight successfully. Nonlinear behaviour, damage, and different types of failure are included in the analysis, in which the key parameters of the failure sequence are identified. The incident is analysed by deterministic structural analysis and also by other techniques, such as the Monte Carlo method, which makes it possible to include different parametric uncertainties through random variables. Some of the evaluated parameters are, among others, the blade-loss size, the propeller rotational frequency, the speed and angular position at which the blade is lost, and the stiffness and strength of the engine mounts. 
The study also investigates the structural damping of the system. The different analysis strategies yield valuable and interesting results that have already been published.
Abstract:
In recent decades, full electric and hybrid electric vehicles have emerged as an alternative to conventional cars due to a range of factors, including environmental and economic aspects. These vehicles are the result of considerable efforts to seek ways of reducing the use of fossil fuel for vehicle propulsion. Sophisticated technologies such as hybrid and electric powertrains require careful study and optimization. Mathematical models play a key role at this point. Currently, many advanced mathematical analysis tools, as well as computer applications, have been built for vehicle simulation purposes. Given the great interest in hybrid and electric powertrains, along with the increasing importance of reliable computer-based models, the author decided to integrate both aspects in the research purpose of this work. Furthermore, this is one of the first final degree projects carried out at the ETSII (Higher Technical School of Industrial Engineers) that covers the study of hybrid and electric propulsion systems. The present project is based on MBS3D 2.0, a specialized software package for the dynamic simulation of multibody systems developed at the UPM Institute of Automobile Research (INSIA). Automobiles are a clear example of complex multibody systems, which are present in nearly every field of engineering. The work presented here benefits from the availability of the MBS3D software. This program has proven to be a very efficient tool, with a highly developed underlying mathematical formulation. On this basis, the focus of this project is the extension of MBS3D features in order to be able to perform dynamic simulations of hybrid and electric vehicle models. This requires the joint simulation of the mechanical model of the vehicle together with the model of the hybrid or electric powertrain. These sub-models belong to completely different physical domains.
In fact, the powertrain consists of energy storage systems, electrical machines and power electronics connected to purely mechanical components (wheels, suspension, transmission, clutch…). The challenge today is to create a global vehicle model that is valid for computer simulation. Therefore, the main goal of this project is to apply co-simulation methodologies to a comprehensive model of an electric vehicle, in which sub-models from different areas of engineering are coupled. The created electric vehicle (EV) model consists of a separately excited DC electric motor, a Li-ion battery pack, a DC/DC chopper converter and a multibody vehicle model. Co-simulation techniques allow car designers to simulate complex vehicle architectures and behaviors that are usually difficult to implement in a real environment due to safety and/or economic reasons. In addition, multi-domain computational models help to detect the effects of different driving patterns and parameters and to improve the models in a fast and effective way. Automotive designers can greatly benefit from a multidisciplinary approach to new hybrid and electric vehicles. In this case, the global electric vehicle model includes an electrical subsystem and a mechanical subsystem. The electrical subsystem consists of three basic components: electric motor, battery pack and power converter. A modular representation is used to build the dynamic model of the vehicle drivetrain. This means that every component of the drivetrain (submodule) is modeled separately and has its own general dynamic model, with clearly defined inputs and outputs. Then, all the particular submodules are assembled according to the drivetrain configuration and, in this way, the power flow across the components is completely determined. Dynamic models of electrical components are often based on equivalent circuits, where Kirchhoff’s voltage and current laws are applied to derive the algebraic and differential equations.
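The modular submodule idea described above, a component with its own dynamic model and clearly defined inputs and outputs, can be sketched as follows. The class name, parameter values and interface are illustrative assumptions, not MBS3D's actual structure; the armature equation is the standard Kirchhoff voltage loop of a separately excited DC motor.

```python
from dataclasses import dataclass

# Minimal sketch of one drivetrain submodule: it exposes a state-derivative
# function with explicit inputs (voltage, speed) and outputs (torque), so
# submodules can be wired together according to the drivetrain layout.
@dataclass
class MotorModule:
    R: float = 0.5    # armature resistance [ohm] (illustrative)
    L: float = 0.01   # armature inductance [H] (illustrative)
    k: float = 0.8    # torque / back-EMF constant [N*m/A] (illustrative)

    def derivative(self, i_a, v_in, omega):
        # Kirchhoff voltage law around the armature loop:
        #   v_in = R*i_a + L*di/dt + k*omega  =>  solve for di/dt
        return (v_in - self.R * i_a - self.k * omega) / self.L

    def torque(self, i_a):
        # Output handed to the mechanical subsystem
        return self.k * i_a

motor = MotorModule()
di_dt = motor.derivative(i_a=10.0, v_in=48.0, omega=50.0)
```

Assembling the drivetrain then amounts to routing each submodule's outputs (here, torque) into the inputs of the next one, which fixes the power flow across the components.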
Here, a Randles circuit is used for the dynamic modeling of the battery, and the electric motor is modeled through the analysis of the equivalent circuit of a separately excited DC motor, in which the power converter is included. The mechanical subsystem is defined by the MBS3D equations. These equations consider the position, velocity and acceleration of all the bodies comprising the vehicle multibody system. MBS3D 2.0 is entirely written in MATLAB, and the structure of the program has been thoroughly studied and understood by the author. The MBS3D software is adapted according to the requirements of the applied co-simulation method. Some of the core functions, such as the integrator and graphics, are modified, and several auxiliary functions are added in order to compute the mathematical model of the electrical components. By coupling and co-simulating both subsystems, it is possible to evaluate the dynamic interaction among all the components of the drivetrain. A ‘tight-coupling’ method is used to co-simulate the sub-models. This approach integrates all subsystems simultaneously, and the results of the integration are exchanged by function call. This means that the integration is done jointly for the mechanical and the electrical subsystems under a single integrator, so the speed of integration is determined by the slower subsystem. Simulations are then used to show the performance of the developed EV model. However, this project focuses more on the validation of the computational and mathematical tool for electric and hybrid vehicle simulation. For this purpose, a detailed study and comparison of different integrators within the MATLAB environment is carried out. Consequently, the main efforts are directed towards the implementation of co-simulation techniques in the MBS3D software. In this regard, it is not intended to create an extremely precise EV model in terms of real vehicle performance, although an acceptable level of accuracy is achieved.
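The Randles battery model mentioned above reduces, in its first-order form, to one ODE per RC branch plus an algebraic output equation. The sketch below shows that structure with a single R-C branch; all component values are illustrative placeholders, not the thesis's identified parameters, and a simple forward-Euler step stands in for the real integrator.

```python
# First-order Randles equivalent circuit for the battery pack: an
# open-circuit voltage source behind a series resistance R0 and one
# parallel R1-C1 branch whose voltage v1 is the dynamic state.
def randles_rhs(v1, i_load, R1=0.02, C1=2000.0):
    # Kirchhoff current law at the RC node: C1*dv1/dt = i_load - v1/R1
    return i_load / C1 - v1 / (R1 * C1)

def terminal_voltage(v1, i_load, ocv=350.0, R0=0.05):
    # Kirchhoff voltage law around the outer loop
    return ocv - R0 * i_load - v1

# Forward-Euler integration of a 10 s constant-current discharge
v1, dt, i_load = 0.0, 0.01, 100.0
for _ in range(1000):
    v1 += dt * randles_rhs(v1, i_load)
print(terminal_voltage(v1, i_load))
```

In the coupled model this ODE would simply be appended to the electrical state vector handled by the single shared integrator of the tight-coupling scheme.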
The gap between the EV model and the real system is filled, in a way, by introducing the accelerator and brake pedal inputs, which reflect the actual driver behavior. This input is included directly in the differential equations of the model and determines the amount of current provided to the electric motor. For a separately excited DC motor, the rotor current is proportional to the traction torque delivered to the car wheels. Therefore, as occurs in real vehicle models, the propulsion torque in the mathematical model is controlled through acceleration and brake pedal commands. The designed transmission system also includes a reduction gear that adapts and transfers the torque coming from the motor drive. The main contribution of this project is, therefore, the implementation of a new calculation path for the wheel torques, based on the performance characteristics and outputs of the electric powertrain model. Originally, the wheel traction and braking torques were input to MBS3D through a vector directly computed by the user in a MATLAB script. Now, they are calculated as a function of the motor current which, in turn, depends on the current provided by the battery pack across the DC/DC chopper converter. The motor and battery currents and voltages are the solutions of the electrical ODE (Ordinary Differential Equation) system coupled to the multibody system. Simultaneously, the outputs of the MBS3D model are the position, velocity and acceleration of the vehicle at all times. The motor shaft speed is computed from the output vehicle speed, considering the wheel radius, the gear reduction ratio and the transmission efficiency. This motor shaft speed, obtained in this way from the MBS3D model, is then introduced into the differential equations corresponding to the electrical subsystem.
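The two directions of the calculation path described above, wheel torque from rotor current and motor shaft speed back from vehicle speed, are each a short algebraic relation. The sketch below makes them explicit; the torque constant, gear ratio, efficiency and wheel radius are assumed placeholder values, not the project's actual parameters.

```python
# Illustrative version of the new wheel-torque calculation path.
K_T  = 0.8    # motor torque constant [N*m/A] (assumed)
GEAR = 7.0    # reduction gear ratio (assumed)
ETA  = 0.95   # transmission efficiency (assumed)
R_W  = 0.3    # wheel radius [m] (assumed)

def wheel_torque(i_rotor):
    # Separately excited DC motor: torque proportional to rotor current,
    # multiplied through the reduction gear with its efficiency.
    return K_T * i_rotor * GEAR * ETA

def motor_shaft_speed(v_vehicle):
    # Wheel angular speed (v / r) times the gear ratio gives the
    # motor shaft speed fed back into the electrical ODEs.
    return (v_vehicle / R_W) * GEAR

print(wheel_torque(50.0), motor_shaft_speed(20.0))
```

At each time step the electrical subsystem supplies `i_rotor` and receives the shaft speed, closing the loop between the two domains.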
In this way, MBS3D and the electrical powertrain model are interconnected and both subsystems exchange values, as expected with a tight-coupling approach. When programming mathematical models of complex systems, code optimization is a key step in the process. A way to improve the overall performance of the integration, making use of C/C++ as an alternative programming language, is described and implemented. Although this entails a greater programming effort, it leads to important advantages regarding co-simulation speed and stability. In order to do this, it is necessary to integrate MATLAB with another integrated development environment (IDE) in which C/C++ code can be generated and executed. In this project, C/C++ files are programmed in Microsoft Visual Studio, and the interface between both IDEs is created by building C/C++ MEX-file functions. These programs contain functions or subroutines that can be dynamically linked and executed from MATLAB. This process achieves reductions in simulation time of up to two orders of magnitude. The tests performed with different integrators also reveal the stiff character of the differential equations corresponding to the electrical subsystem, and allow the co-simulation process to be improved. When varying the parameters of the integration and/or the initial conditions of the problem, the solutions of the system of equations show better dynamic response and stability, depending on the integrator used. Several integrators, with fixed and variable step size, and for both stiff and non-stiff problems, are applied to the coupled ODE system. The results are then analyzed, compared and discussed. From all the above, the project can be divided into four main parts: 1. Creation of the equation-based electric vehicle model; 2. Programming, simulation and adjustment of the electric vehicle model; 3. Application of co-simulation methodologies to MBS3D and the electric powertrain subsystem; and 4.
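The stiff character of the electrical equations mentioned above, a fast electrical time constant coupled to slow mechanical dynamics, is exactly what makes the choice of integrator matter. A minimal sketch with illustrative coefficients (not the project's actual equations): for dy/dt = -λy with large λ, an explicit scheme diverges at a step size where an implicit scheme remains stable.

```python
# Stiffness in one line: integrate dy/dt = -lam*y with a step size dt
# larger than the explicit stability limit 2/lam. The illustrative value
# lam = 1000 mimics a fast electrical time constant.
lam, dt, steps = 1000.0, 0.005, 100

y_exp = 1.0   # forward (explicit) Euler state
y_imp = 1.0   # backward (implicit) Euler state
for _ in range(steps):
    y_exp = y_exp + dt * (-lam * y_exp)   # explicit: amplification |1 - lam*dt| > 1
    y_imp = y_imp / (1.0 + lam * dt)      # implicit: amplification < 1, stable

print(abs(y_exp), abs(y_imp))   # explicit blows up, implicit decays toward 0
```

This is why variable-step stiff solvers (implicit, BDF-type) handle the coupled electrical-mechanical ODE system far more efficiently than non-stiff explicit ones at the same accuracy.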
Code optimization and study of different integrators. Additionally, in order to deeply understand the context of the project, the first chapters include an introduction to basic vehicle dynamics, current classification of hybrid and electric vehicles and an explanation of the involved technologies such as brake energy regeneration, electric and non-electric propulsion systems for EVs and HEVs (hybrid electric vehicles) and their control strategies. Later, the problem of dynamic modeling of hybrid and electric vehicles is discussed. The integrated development environment and the simulation tool are also briefly described. The core chapters include an explanation of the major co-simulation methodologies and how they have been programmed and applied to the electric powertrain model together with the multibody system dynamic model. Finally, the last chapters summarize the main results and conclusions of the project and propose further research topics. In conclusion, co-simulation methodologies are applicable within the integrated development environments MATLAB and Visual Studio, and the simulation tool MBS3D 2.0, where equation-based models of multidisciplinary subsystems, consisting of mechanical and electrical components, are coupled and integrated in a very efficient way.