Abstract:
Six soft-sediment cores, up to and over 9 m in length, and additional surface samples were selected for study of their planktonic foraminifera to provide information on the Holocene and Pleistocene stratigraphy of the West African continental margin south of the present boundary of the Sahara. The material was collected by the German research vessel "Meteor" during Cruise 25 in 1971. The residues larger than 160 microns were determined, counted and statistically evaluated. Stratigraphic correlations with trans-Atlantic regions are provided by the occurrence of Truncorotalidoides hexagonus and Globorotalia tumidula flexuosa, which mark the last interglacial stage. According to the climatic record, the two deep-sea cores extend down to the V-zone, considered here as equivalent to the Mindel-Riss interglacial, since three distinctly warm and two cold periods are indicated in the cores by the planktonic foraminiferal faunas. The Z-zone (Holocene) is present in all cores; the Y-zone (Würm glacial) can be divided into five sections, three cold and two warm stages; the X-zone can be divided into three warm stages separated by two cool periods. The earliest warm stage is indicated to be the warmest. There are excellent correlations to the Camp Century ice core from Greenland, to the Mediterranean, to the Caribbean and to the tropical Atlantic, as well as to the Barbados stages. The W-zone is correlated to the Riss glacial. The V-zone is a warm period whose upper limit is not sufficiently defined and which also contains some cool sections. Increasing sedimentation rates from the deep sea to the upper slope reveal climatic and regional details in the Holocene and Late Pleistocene history of the continental margin. These are based mainly on different parameters of the planktonic foraminiferal thanatocoenoses, which are the main components of the size fraction >160 microns of the pelagic cores.
They become increasingly diluted by other faunal and terrigenous components with decreasing slope depth. Estimates of absolute abundances, ranging from 25,000 specimens/g of sediment in the deep sea to fewer than 100, indicate various sedimentary processes at the continental margin. An ecological correlation by dominant species is possible. Readily computed temperature indices of different scales are presented, which indicate, for instance, three distinctly cold sections within the last glacial and seven warm sections within the last interglacial. These are used for estimates of sedimentation rates; during cold periods sedimentation rates are higher than during warm periods. The stratigraphic correlation and faunal record, combined with absolute abundances and sedimentation rates, indicate that in the deep sea turbidity currents not only cause high sedimentation rates over short periods of time but also occasionally erode material. Effects of upwelling can be detected in the surface sediment samples, as well as in late Pleistocene and early Holocene samples of the slope, from planktonic foraminiferal data that are not influenced by sedimentary processes.
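The "readily computed temperature indices" mentioned above are typically ratios of warm-water to cold-water diagnostic species in the counted fraction. A minimal sketch of one such index follows; the species groupings are illustrative assumptions, not those used in the study.

```python
# Minimal sketch of a paleotemperature index from planktonic foraminiferal
# counts: the fraction of warm-water specimens among all diagnostic
# specimens in the >160 micron residue. Species lists are illustrative.

WARM_SPECIES = {"G. ruber", "G. sacculifer", "G. menardii"}
COLD_SPECIES = {"N. pachyderma", "G. bulloides"}

def temperature_index(counts):
    """counts: dict mapping species name -> specimens counted.
    Returns the warm fraction in [0, 1]; higher = warmer assemblage."""
    warm = sum(n for s, n in counts.items() if s in WARM_SPECIES)
    cold = sum(n for s, n in counts.items() if s in COLD_SPECIES)
    total = warm + cold
    if total == 0:
        raise ValueError("no diagnostic specimens counted")
    return warm / total

sample = {"G. ruber": 120, "G. sacculifer": 60, "N. pachyderma": 20}
print(round(temperature_index(sample), 2))  # 0.9
```

Computed downcore, such an index traces the warm/cold alternations used to delimit the Z- to V-zones.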
Abstract:
This paper reports results of a geochemical study of suspended particulate matter and particle fluxes in the Norwegian Sea above the Bear Island slope. Concentrations of suspended particles and the main components of suspended matter were determined in the euphotic, intermediate (clean water), and bottom nepheloid layers. It was shown that biogenic components are predominant in water above the nepheloid layer, whereas suspended matter of the nepheloid layer is formed by resuspension of lithogenic components of bottom sediments. Chemical compositions of suspended matter and material collected in sediment traps are identical.
Abstract:
We present a 3-year record of deep water particle flux at the recently initiated ESTOC (European Station for Time-series in the Ocean, Canary Islands), located in the eastern subtropical North Atlantic gyre. Particle flux was highly seasonal, with flux maxima occurring in late winter-early spring. A comparison with historic CZCS (Coastal Zone Colour Scanner) data shows that these flux maxima occurred about 1 month after maximum chlorophyll was observed in surface waters in a presumed primary source region 100 km * 100 km northeast of the trap location. The main components of the particles collected with the traps were mineral particles and carbonate, both correlating strongly with organic matter sedimentation. Mineral particles in the sinking matter are indicative of the high aeolian input from the African desert regions. Comparing particle fluxes at 1 km and 3 km depth, we find that particle sedimentation increased substantially with depth. Yearly organic carbon sedimentation was 0.6 g m**-2 at 1 km depth compared with 0.8 g m**-2 at 3 km. We hypothesize that higher phytoplankton biomass observed further north could be a source of laterally advected particles that interact with fast-sinking particles originating from the primary source region. This hypothesis is also supported by the differences in the size distribution of lithogenic matter found at the two trap depths.
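The roughly 1-month delay between the surface chlorophyll maximum and the trap flux maximum can be estimated by lag correlation of the two monthly time series. The sketch below uses synthetic series, since the actual ESTOC/CZCS data are not reproduced here.

```python
import numpy as np

# Illustrative sketch: estimating the lag between surface chlorophyll
# and deep particle flux with a lagged correlation of monthly series.
# The series below are synthetic stand-ins for the CZCS and trap data.

def best_lag(chlorophyll, flux, max_lag=4):
    """Return the lag (in samples) at which the flux series best
    correlates with the chlorophyll series shifted forward in time."""
    best, best_r = 0, -np.inf
    for lag in range(0, max_lag + 1):
        c = chlorophyll[: len(chlorophyll) - lag]
        f = flux[lag:]
        r = np.corrcoef(c, f)[0, 1]
        if r > best_r:
            best, best_r = lag, r
    return best

months = np.arange(24)
chl = np.sin(2 * np.pi * (months - 2) / 12) + 1.5  # peaks in late winter
flx = np.sin(2 * np.pi * (months - 3) / 12) + 1.5  # same cycle, 1 month later
print(best_lag(chl, flx))  # 1
```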
Abstract:
The present volume contains the planktological data collected during the expedition of the "Meteor" to the Indian Ocean in 1964/65. The main objective of the expedition was to study the upwelling and downwelling conditions induced by the northeast monsoon along the western and eastern coasts of the Arabian Sea, and it is from these areas that the greater part of the data presented here was obtained. A few values from the Red Sea have been added. As the title "Planktological-Chemical Data" implies, the planktological investigations were carried out chiefly with the help of chemical methods, with the exception of the particle-size analysis and the phytoplankton counts, which were conducted optically. These investigations were above all devoted to a quantitative survey of particulate matter and plankton, the latter being sampled by water bottle and net. The zooplankton hauls were taken with the Indian Ocean Standard Net according to the international guidelines laid down for the expedition. As a rule, duplicate catches were made at every station, one sample being intended for laboratory analysis at the Indian Ocean Biological Centre in Ernakulam, South India, and the other for the Institut für Meereskunde in Kiel. In addition to determining the standing stock, the production rate of phytoplankton was measured by the 14C method; these experiments were mainly conducted during the latter half of the expedition. The planktological studies primarily covered the euphotic zone, extending into the underlying water layers down to a depth of 600 m. The investigations were above all directed towards ascertaining the quantity of organic substance formed by primary production in relation to environmental conditions, and determining whether or not organic substance is actively transported from the surface into the deeper layers by the periodically migrating organisms of the deep scattering layers.
Depending on the station time available, a few samples could now and then be taken from deeper layers. The present volume of planktological-chemical data is addressed to all those concerned with processing the extensive material collected during the International Indian Ocean Expedition. As a readily accessible work of reference, it is intended to serve as an aid in the evaluation and interpretation of the expedition results. The complementary ecological data, such as temperature, salinity, and oxygen content, as well as the figures obtained on the abundance and depth distribution of the nutrients essential for primary production, may be found in the volume of physical-chemical data published in Series A of the "Meteor"-Forschungsergebnisse, No. 2, 1966 (Dietrich et al., 1966).
Abstract:
Results of studies of the isotopic composition of helium in underground fluids of the Baikal-Mongolian region during the last quarter of the 20th century are summarized. Determinations of the 3He/4He ratio in 139 samples of the gas phase of fluids, collected at 104 points of the Baikal rift zone and adjacent structures, are given. 3He/4He values lie within the range from 1x10**-8 (typical of crustal radiogenic helium) to 1.1x10**-5 (close to the typical MORB reservoir). Repeated sampling at some points over more than 20 years showed that the helium isotopic composition at each of them is stable in time at any level of 3He/4He. There are no systematic differences in 3He/4He between samples from surface water sources and deeper intervals of boreholes in the same areas. Nor is there a universal relationship between the isotopic composition of helium and the overall composition of the gas phase, but the minimum 3He/4He values occurred in the methane gas of hydrocarbon deposits, whereas in nitrogen and carbon dioxide gases the helium isotopic composition varied (the maximum 3He/4He values were measured in the latter). According to the N2/Ar_atm ratio, the nitrogen gases are atmospheric. In the carbonic gases the fN2/fNe ratio indicates the presence of excess (non-atmogenic) nitrogen, but the CO2/3He ratio differs from that in MORB. Comparison of the helium isotopic composition with its concentration and with the main components of the gas phase of the fluids shows that it is formed under the influence of fractionation of components with different solubilities in the gas-water system and of generation/consumption of reactive gases in the crust. Structural and tectonic elements of the region differ in their spectra of 3He/4He values. In the pre-Riphean Siberian Platform the mean 3He/4He = (3.6+/-0.9)x10**-8 is very close to radiogenic. In the Paleozoic crust of Khangay 3He/4He = (16.3+/-4.6)x10**-8, the most probable estimate being (12.3+/-2.9)x10**-8.
In structures of the eastern flank of the Baikal rift zone (Khentei, Dauria) affected by the Mz-Kz activation, 3He/4He values range from 4.4x10**-8 to 2.14x10**-6 (average 0.94x10**-6). The distribution of 3He/4He values across the strike of the Baikal rift zone indicates advective heat transfer from the mantle not only in the rift zone, but also much further to the east. In fluids of the Baikal rift zone the range of 3He/4He values is the widest: from 4x10**-8 to 1.1x10**-5. Their variations along the strike of the rift zone are clearly patterned: 3He/4He values decrease in both directions from the Tunka depression. Accompanied by decreases in conductive heat flow density and in the size of the rift basins, this trend indicates a decrease in the intensity of advective heat transfer from the mantle towards the peripheral segments of the rift zone. Comparing this trend with data on other continental rift zones and mid-ocean ridges leads to the conclusion that there are fundamental differences in the mechanisms of crust-mantle interaction in these environments.
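Sample 3He/4He ratios between the crustal and MORB endmembers quoted above are conventionally interpreted by two-component mixing, which gives the fraction of mantle-derived helium. A minimal sketch, using the endmember values from the text:

```python
# Hedged sketch of the standard two-endmember mixing estimate for the
# mantle-derived helium fraction. Endmember ratios follow the values
# quoted in the abstract (crustal radiogenic ~1e-8, MORB ~1.1e-5).

R_CRUST = 1.0e-8  # typical crustal radiogenic 3He/4He
R_MORB = 1.1e-5   # close to the MORB reservoir

def mantle_helium_fraction(r_sample):
    """Fraction of mantle helium assuming binary mixing of crustal
    and MORB-like endmembers, clipped to [0, 1]."""
    f = (r_sample - R_CRUST) / (R_MORB - R_CRUST)
    return min(max(f, 0.0), 1.0)

# Siberian Platform mean (~3.6e-8): almost purely crustal helium.
print(round(mantle_helium_fraction(3.6e-8), 4))  # 0.0024
# Maximum rift-zone value (1.1e-5): essentially pure mantle signal.
print(mantle_helium_fraction(1.1e-5))            # 1.0
```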
Abstract:
Human communication capacity has grown thanks to the evolution of mobile devices that are ever smaller, more manageable, more powerful, more affordable and of greater autonomy. This trend suggests that in the near future each person will carry at least one high-performance device. These devices incorporate several forms of communication (telephony networks, wireless networks, Bluetooth, among others), which also allows them to be used to configure mobile ad hoc networks. Mobile ad hoc networks are temporary, self-configuring networks that do not need an access point for their nodes to exchange information; each node performs routing tasks when required, and nodes can move, changing location at will. The autonomy of these devices depends on how their resources are used, so protocols, algorithms and models must be designed efficiently so as not to impact device performance, always seeking a balance between overhead and usability. Appropriate management of these networks is especially important when they are used in critical scenarios such as emergencies, natural disasters or armed conflicts. This doctoral thesis presents an efficient solution for the management of mobile ad hoc networks. The solution comprises two main components: the definition of a management model for highly available mobile networks and the creation of a hierarchical routing protocol associated with the model. The proposed management model, called High Availability Management Ad Hoc Network (HAMAN), is defined in a four-level structure: access, distribution, intelligence and infrastructure. The components of each level (node types, protocols and operation) are described, as are the communication interfaces between the components and their relationship to the defined levels. As part of the model, an ad hoc routing protocol called Backup Cluster Head Protocol (BCHP) is designed, which uses clusters and hierarchies as its routing strategy. Each cluster has a cluster head that concentrates routing and management information and forwards it to the destination when the destination is outside its coverage area. To improve network availability, the protocol uses a backup cluster head that assumes the functions of the cluster's main node when the latter fails. The HAMAN model is validated through simulation of the BCHP protocol. BCHP is implemented in Network Simulator 2 (NS2) and is simulated, compared and contrasted with the hierarchical Cluster Based Routing Protocol (CBRP) and with the reactive ad hoc routing protocol Ad Hoc On Demand Distance Vector Routing (AODV).
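The backup-cluster-head idea behind BCHP can be sketched in a few lines: each cluster keeps a head and a designated backup, and on head failure the backup is promoted without a full re-election. Class and method names below are hypothetical illustrations, not the thesis implementation.

```python
# Illustrative sketch of a cluster with a designated backup head.
# On head failure the backup is promoted and a new backup is chosen
# (here simply the lowest node id, a stand-in for whatever election
# metric the real protocol uses).

class Cluster:
    def __init__(self, head, backup, members):
        self.head = head
        self.backup = backup
        self.members = set(members) - {head, backup}

    def route_via(self):
        """Node that concentrates routing and management traffic."""
        return self.head

    def head_failed(self):
        """Promote the backup head, then designate a new backup."""
        self.head = self.backup
        self.backup = min(self.members) if self.members else None
        if self.backup is not None:
            self.members.discard(self.backup)

c = Cluster(head="n1", backup="n2", members={"n3", "n4", "n5"})
c.head_failed()
print(c.head, c.backup)  # n2 n3
```

Because the backup already holds the cluster state, failover avoids the re-clustering pause that a plain cluster-based protocol would incur, which is the availability gain the model targets.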
Abstract:
The traditional way of estimating the road safety level is the record of traffic accidents; accident counts, however, are highly variable and random and require a recording period of at least three years. There are preventive methodologies that do not require an accident to occur in order to determine the safety level of an intersection, such as the traffic conflict technique, which introduces surrogate safety measures as quantifiers of accident risk. The general objective of the thesis is to establish a methodology for classifying risk at interurban intersections based on the analysis of conflicts between vehicles, performed by means of surrogate (indirect) road safety variables. The methodology for the analysis and early evaluation of safety at an intersection is based on two surrogate safety measures: time to collision and post-encroachment time. The experimental development was carried out through field studies. For the exploratory part of the investigation, three interurban T-intersections were selected, where the traffic conflict technique yielded the variables that characterize conflicts between vehicles; multivariate analysis techniques were then used to obtain qualitative and quantitative risk classification models. For the homologation and the final study of agreement between the proposed index and the classification model, new field studies were carried out at six interurban T-intersections. The risk index obtained is a very useful tool for rapid evaluations of the hazard of a T-intersection, because the field data records are simple and economical to obtain after brief training of the operators; the report of results should be prepared by a specialist. The risk indices obtained show that the most influential original variables are the time measurements. The highest risk index values were found to be related to a greater risk of a conflict ending in an accident. Within this index, the only variable whose contribution is directly proportional is the approach speed, in agreement with what happens in a conflict: excessive speed is a clear risk factor because it amplifies all human driving errors. One of the main contributions of this doctoral thesis to road engineering is that the methodology can be applied by local road administrations, which often have limited investment resources for preventive studies, above all in developing countries. Evaluating the risk of an intersection after an improvement in infrastructure and/or traffic control devices, like a before-after analysis but based on the traffic conflict technique rather than on accident occurrence, can thus become a direct and economical application. In addition, principal component analysis, used to create the intersection risk index, proved a useful tool for summarizing the whole set of measurements obtainable with the traffic conflict technique, allowing the diagnosis of accident risk at an intersection. As to the methodology for the homologation of the models, the validity and reliability of the set of responses delivered by the observers recording the field data could be established: the validation results show that the agreement between the models' outputs and the observations is significant and suggests a high coincidence between them.
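The construction of a risk index from conflict measurements via principal component analysis can be sketched as follows. The data here are synthetic, and the variables (time to collision, post-encroachment time, approach speed) and loadings are illustrative assumptions, not the thesis' actual model.

```python
import numpy as np

# Hedged sketch: a risk index as the first principal component of
# standardized conflict measurements. Synthetic data; illustrative only.

rng = np.random.default_rng(0)
n = 200
speed = rng.uniform(30, 90, n)                  # approach speed, km/h
ttc = 6 - 0.05 * speed + rng.normal(0, 0.3, n)  # time to collision, s
pet = 4 - 0.03 * speed + rng.normal(0, 0.3, n)  # post-encroachment time, s

X = np.column_stack([ttc, pet, speed])
Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize variables

# First principal component via eigendecomposition of the covariance.
cov = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = eigvecs[:, np.argmax(eigvals)]
if pc1[2] < 0:           # orient so that higher speed means higher risk
    pc1 = -pc1
risk_index = Z @ pc1

# Speed loads positively and the two time measures load negatively,
# matching the behaviour described in the abstract.
print(pc1[2] > 0, pc1[0] < 0, pc1[1] < 0)  # True True True
```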
Abstract:
This paper proposes a novel design of a reconfigurable humanoid robot head, based on the biological likeness of the human being, so that the humanoid robot can agreeably interact with people in various everyday tasks. The proposed humanoid head has a modular and adaptive structural design and is equipped with three main components: the frame, the neck motion system and the omnidirectional stereovision system modules. The omnidirectional stereovision module, a motivating contribution with regard to the computer vision systems implemented in former humanoids, opens new research possibilities for achieving human-like behaviour. A proposal for a real-time catadioptric stereovision system is presented, including the stereo geometry for rectifying the system configuration and estimating depth. The methodology for an initial approach to visual servoing tasks is divided into two phases: the first concerns the robust detection of moving objects, their depth estimation and position calculation; the second, the development of attention-based control strategies. The perception capabilities provided allow the extraction of 3D information over a wide field of view in uncontrolled dynamic environments, and the results are illustrated through a number of experiments.
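The triangulation step at the heart of stereo depth estimation reduces, for a rectified image pair, to the conventional pinhole formula. The catadioptric rig in the paper requires its own rectification first, which is not reproduced here; this is only the generic final step.

```python
# Minimal sketch of triangulation for a rectified stereo pair:
# depth = focal_length * baseline / disparity.

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth (m) of a matched point, from focal length (pixels),
    baseline (m) and disparity (pixels) in a rectified pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point with 40 px disparity, seen by a rig with f = 800 px and a
# 10 cm baseline, lies 2 m away.
print(stereo_depth(800, 0.10, 40))  # 2.0
```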
Abstract:
In this article we describe a method for automatically generating text summaries of data corresponding to traces of spatial movement in geographical areas. The method can help humans to understand large data streams, such as the amounts of GPS data recorded by a variety of sensors in mobile phones, cars, etc. We describe the knowledge representations we designed for our method and the main components of our method for generating the summaries: a discourse planner, an abstraction module and a text generator. We also present evaluation results that show the ability of our method to generate certain types of geospatial and temporal descriptions.
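The three-stage architecture named above (abstraction, discourse planning, text generation) can be sketched end to end. The function names, rules and the trace format below are invented for illustration; they are not the authors' implementation.

```python
# Hedged sketch of the pipeline: an abstraction module maps raw
# (place, hour) samples onto visit facts, a discourse planner selects
# what to say, and a text generator realizes the facts as a sentence.

def abstraction(points):
    """Collapse a trace of (place, hour) samples into visit facts."""
    facts, last = [], None
    for place, hour in points:
        if place != last:
            facts.append({"place": place, "hour": hour})
            last = place
    return facts

def discourse_planner(facts, max_facts=3):
    """Keep the most salient facts (here simply the first few)."""
    return facts[:max_facts]

def text_generator(facts):
    parts = [f"at {f['hour']}:00 the subject was in {f['place']}"
             for f in facts]
    return "The trace shows that " + "; then ".join(parts) + "."

trace = [("Madrid", 9), ("Madrid", 10), ("Toledo", 12), ("Madrid", 18)]
print(text_generator(discourse_planner(abstraction(trace))))
```

The value of the intermediate fact representation is that the planner and generator never touch raw GPS samples, only symbolic events, which is what makes summarizing large streams tractable.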
Abstract:
ENAGAS is expanding its LNG regasification terminal in the port of Barcelona (Spain). This document reports the Front End Engineering and Design (FEED) work undertaken for one of the LNG storage tanks to be built within the scope of that expansion, in sufficient detail to allow ENAGAS to undertake the stages preceding project execution, namely: 1. Plan and budget the project execution phase; 2. Request the necessary permits and authorizations from the competent bodies; 3. Issue the invitation to tender for the turnkey EPC contract. The main components of the FEED document are as follows: background and basic data, design criteria, description of the LNG tank facilities, structural and engineering calculations, LNG tank drawings, definition of equipment and materials, project execution plan (PEP), technical specifications for engineering, procurement and construction, EPC invitation to tender (ITT) package, particular technical conditions, execution schedule and investment cost estimate.
Abstract:
The worldwide proliferation of cloud-based solutions means that companies are evaluating moving their infrastructure, or part of it, to the cloud in order to reduce the high investment costs required to maintain a private infrastructure. Among the services that can be centralized in the cloud, through resources shared between multiple clients, are contingency solutions such as data protection services and disaster recovery centers. This project carries out the deployment of a managed services platform that offers centralized backup and disaster recovery solutions to customers who require them. The project consisted of three phases. First, a study was made of current business continuity technologies, the different types of backup, and the existing types of replication, local and remote. Second, a market study was carried out to weigh the different options for deploying the infrastructure, always keeping the target customer in mind. Finally, in the development phase, the main components of the final solution, the location of the infrastructure, a use case, and the main advantages of the solution are detailed. It should be emphasized that this is a real project, carried out at a company outside the faculty, Omega Peripherals, where, once my practicum was completed, this project was proposed as a continuation of my work at the company and as part of my final degree project (TFG).
Abstract:
Son numerosos los expertos que predicen que hasta pasado 2050 no se utilizarán masivamente las energías de origen renovable, y que por tanto se mantendrá la emisión de dióxido de carbono de forma incontrolada. Entre tanto, y previendo que este tipo de uso se mantenga hasta un horizonte temporal aún más lejano, la captura, concentración y secuestro o reutilización de dióxido de carbono es y será una de las principales soluciones a implantar para paliar el problema medioambiental causado. Sin embargo, las tecnologías existentes y en desarrollo de captura y concentración de este tipo de gas, presentan dos limitaciones: las grandes cantidades de energía que consumen y los grandes volúmenes de sustancias potencialmente dañinas para el medioambiente que producen durante su funcionamiento. Ambas razones hacen que no sean atractivas para su implantación y uso de forma extensiva. La solución planteada en la presente tesis doctoral se caracteriza por la ausencia de residuos producidos en la operación de captura y concentración del dióxido de carbono, por no utilizar substancias químicas y físicas habituales en las técnicas actuales, por disminuir los consumos energéticos al carecer de sistemas móviles y por evitar la regeneración química y física de los materiales utilizados en la actualidad. Así mismo, plantea grandes retos a futuras innovaciones sobre la idea propuesta que busquen fundamentalmente la disminución de la energía utilizada durante su funcionamiento y la optimización de sus componentes principales. 
To achieve this objective, this doctoral thesis, once the problem to be solved has been stated (Chapter 1), and after a study of the atmospheric gas separation techniques in current use and of the fundamental systems of carbon dioxide capture and concentration plants (Chapter 2), followed by a definition of the conceptual and theoretical framework (Chapter 3), addresses the design of a prototype for photonic ionization of atmospheric gases and their subsequent electrostatic separation, based on the study, adaptation and improvement of mass spectrometry systems. The basic photoionization systems, using coherent photon sources, and the electrostatic separation systems (Chapter 4), on which this system for separating atmospheric gases and capturing and concentrating carbon dioxide is based, are designed and developed to build a laboratory-scale prototype. Subsequently, in Chapter 5, these systems are tested using an experimental matrix that covers the expected operating ranges and provides enough experimental data to correct and develop the actual theoretical framework, from which a physical-mathematical simulation model applicable to the unit as a whole can be established and corrected (Chapter 6). Finally, because photonic ionization units, intense laser systems and high-power electrical systems are used, the biological risk to people and the environment from the electromagnetic radiation produced must be analyzed (Chapter 7), minimizing its impact and complying with current legislation. Chapter 8 presents a pilot-scale design of the proposed new technology and its main operating modes, together with an economic feasibility analysis.
As a consequence of the proposed doctoral thesis and of the development of the atmospheric separation and carbon dioxide capture and concentration unit, several lines of study arise that may become the subject of new doctoral theses and future engineering developments. Chapter 9 addresses these aspects, indicating research lines for future theses and industrial developments.

ABSTRACT. A large number of experts predict that renewable energy sources will not be massively used until at least 2050, and that current primary energy sources based on the extensive use of fossil fuels will therefore remain in use, maintaining uncontrolled emissions, above all of carbon dioxide. Meanwhile, under this scenario and considering its extension until at least 2050, carbon capture, concentration, storage and/or reuse is and will be one of the main solutions to minimize the environmental effect of greenhouse gases. However, the current state of development of carbon capture and storage technology has two main problems: it consumes too much energy, and during normal use it produces large volumes of environmentally dangerous substances. Both reasons are limiting its development and its extensive use. This PhD thesis proposes a solution based on a new atmospheric gas separation system with the following characteristics: no waste is produced, no chemical and/or physical substances are needed during operation, internal energy consumption is minimized thanks to the absence of moving equipment, and no chemical and/or physical regeneration of substances is required. This system is beyond the state of the art of current technology.
Additionally, the proposed solution raises major challenges for future innovations on the proposed idea, aimed at a radical reduction of internal energy consumption during operation as well as at the optimization of its main components, systems and modes of operation. To achieve this target, once the main problem, the main challenge and the potential solutions have been established (Chapter 1), an initial starting point is set by reviewing developments in atmospheric gas separation and in carbon capture and storage (Chapter 2), and the theoretical and basic model is defined, including existing and potential new governing laws and mathematical formulas that control the system's functioning (Chapter 3). The thesis then deals with the design of an operating system based on photonic ionization of the atmospheric gases, which are subsequently separated by applying electrostatic fields. A basic atmospheric gas ionization prototype, based on intense photon sources capable of ionizing gases by coherent photonic radiation, and a basic design of the electrostatic separation system are developed (Chapter 4). Both basic designs are the core of the proposed technology that separates atmospheric gases and captures and concentrates carbon dioxide. Chapter 5 includes experimental results obtained from a test matrix covering the expected operating regimes of the prototype. With the experimental data obtained, the theoretical model is corrected and improved to act as the real physical and mathematical model capable of simulating the system's operation (Chapter 6). Finally, it is necessary to assess the potential biological risk to the public and the environment arising from the proposed use of intense-energy photonic ionization units, whether laser beams or non-coherent sources, and of large electromagnetic systems with high energy consumption.
The impact in terms of electromagnetic radiation must be known, taking national legislation into account (Chapter 7). In Chapter 8, an up-scaled pilot plant design is established, covering the main operating modes, together with an economic feasibility assessment. As a consequence of this PhD thesis, a field of potential research topics and new PhD theses opens up, as well as future engineering and industrial developments (Chapter 9).
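The photoionization stage described above can only work if the photon energy reaches the ionization energy of the target molecule. The thesis abstract gives no numbers, but a back-of-the-envelope check is possible from standard literature values: the first ionization energy of CO2 is about 13.78 eV, and hc ≈ 1239.84 eV·nm, so the longest usable wavelength is λ = hc/E:

```python
# Back-of-the-envelope check of the photoionization condition E_photon >= E_ion.
# The 13.78 eV ionization energy of CO2 and hc = 1239.84 eV·nm are standard
# literature values, not figures taken from the thesis itself.
HC_EV_NM = 1239.84   # Planck constant times speed of light, in eV·nm
E_ION_CO2 = 13.78    # first ionization energy of CO2, in eV

def max_wavelength_nm(ionization_energy_ev):
    """Longest photon wavelength that can still ionize the molecule."""
    return HC_EV_NM / ionization_energy_ev

lam = max_wavelength_nm(E_ION_CO2)
print(round(lam, 1))  # ~90.0 nm
```

The ~90 nm result places the required coherent photon source in the vacuum ultraviolet, which is consistent with the thesis's emphasis on intense laser systems and on the biological risk assessment of the radiation produced.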
Abstract:
The advances of recent years in the power and capabilities of the mobile phones we use every day have brought with them a surge in demand for applications of all kinds: from general consumer applications, through games, to applications offering internal solutions to companies. There are different operating systems for mobile phones, as explained in the introductory chapter, which also justifies why this final-year project focuses on the study of the Android operating system. First, a global view of the state of the art in mobile applications is given. The pros and cons of each operating system are explained, detailing the programming language used in each of them and their main characteristics. Then, Chapter 3 studies the Android operating system in more depth, from its history and origins, through the internal architecture of the system and its virtual machine, to the basic components needed to create an application. The aim is to give the reader the context required to understand the following chapters, which contain the core of this project. Chapter 4 consists of a series of incremental exercises covering a large part of the possibilities that the Android operating system offers for application development. Their difficulty grows progressively, with each exercise building on the previous ones, so that in the end a single solution encompasses all the lessons. The last chapter brings together everything learned in the previous lessons to create an application that could well be a real application for a client: an application that shows real-time information about the traffic cameras of the city of Madrid.
ABSTRACT. The improvements of recent years in the power and capabilities of the mobile phones we use on a daily basis bring an increase in demand for all kinds of applications: from general consumer applications and games to internal applications that offer solutions to companies. There are different operating systems for mobile phones, as will be explained in the introductory chapter, which also answers why this thesis focuses on the study of the Android operating system. First, an overview of the state of the art in mobile applications is given. The pros and cons of each operating system are explained, detailing the programming language used in each of them and their main characteristics. Chapter 3 then discusses the Android operating system in more depth, from its history and beginnings, through the internal architecture of the system and its virtual machine, to the main components for creating an application. The goal of Chapter 3 is to give readers the context they need to understand the following chapters, where the core of this thesis lies. Chapter 4 contains a series of incremental practices covering a large part of the potential of the Android operating system for application development. The practices grow in difficulty, each building on the previous ones, so that at the end a single solution covers all the lessons. The last chapter applies all the lessons learned to create an application that could well be an actual application for a client: an application that displays real-time information from the traffic cameras of the city of Madrid.
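An application like the one described has to fetch and parse a feed describing the city's cameras before showing anything on screen. The element and attribute names below are invented for illustration (the real Madrid open-data feed uses its own schema), but the parsing step looks roughly like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical camera feed: tag names, coordinates and URLs are invented
# examples, not the actual Madrid open-data schema.
SAMPLE_FEED = """
<cameras>
  <camera id="1" name="Gran Via" lat="40.4200" lon="-3.7059" url="http://example.org/cam1.jpg"/>
  <camera id="2" name="Castellana" lat="40.4458" lon="-3.6905" url="http://example.org/cam2.jpg"/>
</cameras>
"""

def parse_cameras(xml_text):
    """Return a list of camera dicts ready to feed a map view or a list adapter."""
    root = ET.fromstring(xml_text)
    return [
        {
            "id": cam.get("id"),
            "name": cam.get("name"),
            "position": (float(cam.get("lat")), float(cam.get("lon"))),
            "image_url": cam.get("url"),
        }
        for cam in root.findall("camera")
    ]

cams = parse_cameras(SAMPLE_FEED)
print(len(cams), cams[0]["name"])  # 2 Gran Via
```

On Android the same parsing would run off the main thread (e.g. in a background task), with the resulting list handed to the UI layer for display.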
Abstract:
The objective of this project is the definition, design and calculation of the main electrical installations of the generation island, typical of a combined cycle power plant in single-shaft configuration. The architecture of the plant's electrical distribution system is defined, specifying the electrical equipment needed to supply and protect the auxiliary services of the power island and establishing a philosophy based on optimizing the sizing of the different components of the system. The electrical components of the plant are designed to the strictest international standards, guaranteeing safety for both people and equipment, as well as reliability and functionality. The next step is to match the specified equipment to what is available on the market, thus avoiding the extra cost of acquiring non-standard equipment.

Abstract. The main objective of the project is the definition, analysis and sizing of the main components of the electrical system of a combined cycle power plant, including generation, auxiliary services and emergency systems. The design is intended to meet the International Electrotechnical Commission's standards, which ensure adequate and safe operation under all operating conditions of the plant, together with the sizing of the main mechanical equipment from the thermal balance calculation. An acceptable level of safety and health for workers and equipment is a mandatory requirement. The result is equipment able to achieve the highest level of protection for workers, assets and the environment.
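Sizing the auxiliary-services distribution starts from the three-phase full-load current of each transformer, I = S / (√3 · U). The rating and bus voltage below are invented example values, not figures from the project:

```python
import math

# Minimal sizing sketch; the 2 MVA rating and 6.6 kV bus voltage are assumed
# example values for an auxiliary-services transformer, not project data.
def full_load_current(apparent_power_va, line_voltage_v):
    """Three-phase full-load current: I = S / (sqrt(3) * U)."""
    return apparent_power_va / (math.sqrt(3) * line_voltage_v)

S = 2_000_000   # 2 MVA auxiliary transformer (assumed)
U = 6_600       # 6.6 kV medium-voltage bus (assumed)

I = full_load_current(S, U)
print(round(I))  # ~175 A
```

This current is the starting point for selecting cable cross-sections and switchgear ratings, before applying the derating and short-circuit checks that the IEC standards require.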
Abstract:
The IARC competitions aim at advancing the state of the art in UAVs. The 2014 challenge deals mainly with GPS/laser-denied navigation, robot-robot interaction and obstacle avoidance, in the setting of a ground-robot herding problem. In this paper we present a drone that will take part in this competition. The platform and hardware it is composed of, and the software we designed, are introduced. The software has three main components: visual information acquisition, the mapping algorithm and the artificial intelligence mission planner. A statement of the safety measures integrated in the drone, and of our efforts to ensure field testing in conditions as close as possible to the challenge's, is also included.
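The mapping component of such a stack commonly accumulates detections from the vision pipeline into an occupancy grid; the paper does not specify the team's algorithm, so the grid size, resolution and detections below are invented for illustration:

```python
# Minimal occupancy-grid sketch of the kind of mapping component the paper
# mentions; cell size, arena extent and the detections are invented examples,
# not the team's actual algorithm.
CELL = 0.5   # grid resolution in metres
SIZE = 20    # 20 x 20 cells, i.e. a 10 m x 10 m arena

grid = [[0 for _ in range(SIZE)] for _ in range(SIZE)]  # per-cell detection counts

def mark_obstacle(x_m, y_m):
    """Register an obstacle detection (e.g. from the vision pipeline)."""
    i, j = int(x_m / CELL), int(y_m / CELL)
    if 0 <= i < SIZE and 0 <= j < SIZE:
        grid[i][j] += 1

def is_occupied(x_m, y_m, threshold=2):
    """A cell is treated as occupied once enough detections accumulate."""
    i, j = int(x_m / CELL), int(y_m / CELL)
    return grid[i][j] >= threshold

for _ in range(3):            # three consistent detections of the same obstacle
    mark_obstacle(4.2, 7.8)

print(is_occupied(4.2, 7.8))  # True
print(is_occupied(1.0, 1.0))  # False
```

Requiring several detections before declaring a cell occupied filters out spurious single-frame detections, which matters when the mission planner uses the map for obstacle avoidance.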