920 results for reasonable accuracy


Relevância:

20.00%

Publicador:

Resumo:

This paper proposes an automatic expert system for accurate crop row detection in maize fields based on images acquired by a vision system. Several applications in maize, particularly those based on site-specific treatments, require the identification of the crop rows. The vision system is designed with a defined geometry and installed on board a mobile agricultural vehicle, i.e. it is subject to vibrations, gyrations and other uncontrolled movements. Crop rows can be estimated by applying geometrical parameters under the image perspective projection, but because of the undesired effects above, the estimation often turns out to be inaccurate with respect to the real crop rows. The proposed expert system exploits human knowledge, which is mapped into two modules based on image-processing techniques. The first is intended to separate green plants (crops and weeds) from the rest of the scene (soil, stones and others). The second is based on the system geometry: the expected crop lines are mapped onto the image and then corrected with the well-tested and robust Theil–Sen estimator so that they fit the real ones. Its performance compares favorably against the classical Pearson product–moment correlation coefficient.
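The row-adjustment step above relies on the Theil–Sen estimator. As a rough illustration of that estimator (not the authors' implementation; the pixel coordinates below are invented), the following sketch fits a line to candidate crop-row points by taking the median of pairwise slopes, which keeps isolated weed pixels from dragging the fit:

```python
import numpy as np

def theil_sen_line(cols, rows):
    """Robust straight-line fit row = m*col + b with the Theil-Sen estimator.

    m is the median of all pairwise slopes and b the median of the residual
    intercepts, so a minority of outlying points (e.g. weed pixels off the
    crop row) barely influences the fitted line.
    """
    cols = np.asarray(cols, float)
    rows = np.asarray(rows, float)
    slopes = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if cols[j] != cols[i]:                   # skip vertical pairs
                slopes.append((rows[j] - rows[i]) / (cols[j] - cols[i]))
    m = float(np.median(slopes))
    b = float(np.median(rows - m * cols))
    return m, b

# Hypothetical crop-row pixel coordinates with two weed outliers at the end
col = np.array([10, 20, 30, 40, 50, 60, 25, 55])
row = np.array([12, 22, 31, 42, 51, 62, 80, 5])
print(theil_sen_line(col, row))                      # slope close to 1
```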

Relevância:

20.00%

Publicador:

Resumo:

Hybrid stepper motors are widely used in open-loop positioning applications. They are the actuators of choice for the collimators of the Large Hadron Collider (LHC) at CERN, the largest particle accelerator. In this case the positioning requirements and the highly radioactive operating environment are unique. The latter forces the use of long cables, which behave as transmission lines, to connect the motors to the drives, and also prevents the use of standard position sensors. However, reliable and precise operation of the collimators is critical for the machine, so step loss in the motors must be prevented and maintenance must be foreseen in case of mechanical degradation. To make this possible, an approach is proposed for applying an Extended Kalman Filter to a sensorless stepper motor drive when the motor is separated from its drive by long cables. When long cables and high-frequency pulse-width-modulated control voltage signals are used together, the electrical signals differ greatly between the motor side and the drive side of the cable. Since in the considered case only drive-side data are available, the motor-side signals must be estimated. Modelling the entire cable and motor system in an Extended Kalman Filter is too computationally intensive for standard embedded real-time platforms. It is therefore proposed to split the problem into an Extended Kalman Filter based only on the motor model and separate motor-side signal estimators, a combination which is computationally less demanding. The effectiveness of this approach is shown in simulation, and its validity is then demonstrated experimentally through an implementation in a DSP-based drive. A test bench used to assess its performance when driving an axis of an LHC collimator is presented along with the results achieved. It is shown that the proposed method provides position and load-torque estimates that allow step loss to be detected and mechanical degradation to be evaluated without the need for physical sensors. These estimation algorithms require a precise model of the motor, but the standard electrical model used for hybrid stepper motors is limited when the currents are high enough to saturate the magnetic circuit. New model extensions are proposed in order to obtain a more precise model of the motor regardless of the current level, while maintaining a low computational cost. It is shown that these extensions achieve a significant improvement in model fit, and their computational performance is compared in order to weigh model accuracy against computational cost. The applicability of the proposed model extensions is demonstrated by using them in an Extended Kalman Filter running in real time for closed-loop current control and mechanical state estimation. An additional problem arises from the use of stepper motors: the collimator mechanics can wear due to the abrupt motion and torque profiles applied when the motors are used in the standard way, i.e. stepping in open loop. Closed-loop position control, more specifically Field Oriented Control, would allow smoother profiles that are gentler on the mechanics, but it requires position feedback. As already mentioned, the use of sensors in radioactive environments is very limited for reliability reasons.
Sensorless control is a known option, but when the speed is very low or zero, as is the case most of the time for the motors used in the LHC collimators, the loss of observability prevents its use. In order to allow the use of position sensors without reducing the long-term reliability of the whole system, the possibility of switching between closed and open loop is proposed and validated, allowing closed-loop control when the position sensors work correctly and open-loop operation when a sensor fails. A different approach to dealing with the switched drive working over long cables is also presented. Switched-mode stepper motor drives tend to perform poorly, or even fail completely, when the motor is fed through a long cable, because of the large oscillations in the drive-side current. The design of a stepper motor output filter which solves this problem is thus proposed. A two-stage filter, one stage devoted to the differential mode and the other to the common mode, is designed and validated experimentally. With this filter the drive performance is greatly improved, achieving a positioning repeatability even better than that of the drive without a long cable, while radiated emissions are reduced and the overvoltages at the motor terminals are eliminated.
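The core of the sensorless scheme described above is an Extended Kalman Filter built around the motor model only. As a hedged sketch of the generic EKF predict/update structure such a drive would iterate at each control step (a generic skeleton, not CERN's drive code; the function names and the toy 1-D usage are assumptions), in Python:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One generic Extended Kalman Filter iteration.

    x, P        : state estimate and covariance from the previous step
    u, z        : control input (e.g. drive voltages) and measurement
                  (e.g. estimated motor-side currents)
    f, h        : nonlinear state-transition and measurement functions
    F_jac, H_jac: their Jacobians evaluated at the current estimate
    Q, R        : process and measurement noise covariances
    """
    # Predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    innovation = z - h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1-D usage (a random-walk state observed directly):
f = lambda x, u: x
h = lambda x: x
F_jac = lambda x, u: np.eye(1)
H_jac = lambda x: np.eye(1)
x, P = np.zeros(1), np.eye(1)
for z in (0.10, 0.20, 0.15):
    x, P = ekf_step(x, P, None, np.array([z]), f, F_jac, h, H_jac,
                    Q=0.01 * np.eye(1), R=0.10 * np.eye(1))
print(x)
```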

Relevância:

20.00%

Publicador:

Resumo:

Dynamic weighing systems based on load cells are commonly used to estimate crop yields in the field. There is, however, a lack of data regarding the accuracy of such weighing systems when mounted on harvesting machinery, especially machinery used to collect high-value crops such as fruits and vegetables. In particular, dynamic weighing systems mounted on the bins of grape harvesters are affected by the displacement of the load inside the bin when moving over terrain of changing topography. In this work, the load that would be registered in a grape harvester bin by a dynamic weighing system based on a load cell was inferred using the discrete element method (DEM). DEM is a numerical technique capable of accurately describing the behaviour of granular materials under dynamic situations, and it has been shown to provide successful predictions in many different scenarios. Different DEM models of a grape harvester bin were developed, considering different influencing factors, and the results obtained from these models were used to infer the output of the load cell of a real bin. The mass detected by the load cell when the bin was inclined depended strongly on the distribution of the load within the bin, but it was underestimated in all scenarios. The distribution of the load was found to depend not only on the inclination of the bin caused by the terrain topography, but also on the inclination history (inclination rate, presence of static periods, etc.), since the effect of the inertia of the particles (i.e. the grapes) was not negligible. Some recommendations are given to improve the accuracy of crop load measurement in the field.
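The abstract reports that the load-cell reading is underestimated whenever the bin is inclined. A toy rigid-body illustration of why inclination alone already biases the reading (this ignores the load redistribution and particle inertia that only the DEM models capture; the numbers are invented):

```python
import numpy as np

def registered_mass(true_mass_kg, inclination_deg):
    """Toy reading of a single-axis load cell rigidly mounted under a bin.

    The cell only senses the force component along its axis (normal to the
    bin floor), so the reading drops with the cosine of the inclination.
    The real problem also involves the load sliding inside the bin and
    particle inertia, which this simplification does not represent.
    """
    theta = np.radians(inclination_deg)
    return true_mass_kg * np.cos(theta)

for deg in (0, 5, 10, 15, 20):
    print(f"{deg:2d} deg -> {registered_mass(1000.0, deg):7.1f} kg")
```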

Relevância:

20.00%

Publicador:

Resumo:

Accuracy analysis of global digital elevation models. ABSTRACT: Terrain-based analysis derives products from an input DEM, and these products are needed to perform various analyses. To use these products efficiently in decision-making, their accuracies must be estimated systematically. This paper proposes a procedure to assess the accuracy of such derived products by calculating the accuracy of the slope dataset and its significance, taking the accuracy of the DEM as input. Based on previously published research on modelling the relative accuracy of a DEM, specifically the ASTER and SRTM DEMs covering Lebanon as the study area, the analysis showed that ASTER has low significance over most of the area: only 2% of the modelled terrain reaches 50% significance or more. SRTM, on the other hand, showed better significance, with 37% of the modelled terrain reaching 50% or more. Statistical analysis showed that the accuracy of the slope dataset, calculated on a cell-by-cell basis, is highly correlated with the accuracy of the input DEM. This correlation is lower between the slope accuracy and the slope significance, whereas it is much higher between the modelled slope and the slope significance.
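The paper propagates the vertical accuracy of the DEM into the accuracy of the slope dataset cell by cell. As a generic illustration of that kind of propagation (not the paper's exact procedure; the synthetic DEM, cell size and sigma values are assumptions), slope can be computed from central differences and a first-order error estimate derived from an assumed elevation error sigma_z:

```python
import numpy as np

def slope_and_error(dem, cell_size, sigma_z):
    """Slope (rise/run) from central differences plus a first-order error estimate.

    Assumes independent elevation errors with standard deviation sigma_z in
    every cell; np.roll wraps at the borders, so edge cells are approximate.
    Propagating sigma_z through sqrt(p**2 + q**2), with
    var(p) = var(q) = sigma_z**2 / (2 * cell_size**2), gives
    sigma_slope ~ sigma_z / (sqrt(2) * cell_size), independent of the slope itself.
    """
    p = (np.roll(dem, -1, axis=1) - np.roll(dem, 1, axis=1)) / (2.0 * cell_size)
    q = (np.roll(dem, -1, axis=0) - np.roll(dem, 1, axis=0)) / (2.0 * cell_size)
    slope = np.hypot(p, q)
    sigma_slope = sigma_z / (np.sqrt(2.0) * cell_size)
    return slope, sigma_slope

dem = np.random.default_rng(0).normal(500.0, 10.0, (50, 50))   # synthetic terrain
slope, sigma = slope_and_error(dem, cell_size=30.0, sigma_z=7.0)
print(slope.mean(), sigma)
```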

Relevância:

20.00%

Publicador:

Resumo:

La Energía eléctrica producida mediante tecnología eólica flotante es uno de los recursos más prometedores para reducir la dependencia de energía proveniente de combustibles fósiles. Esta tecnología es de especial interés en países como España, donde la plataforma continental es estrecha y existen pocas áreas para el desarrollo de estructuras fijas. Entre los diferentes conceptos flotantes, esta tesis se ha ocupado de la tipología semisumergible. Estas plataformas pueden experimentar movimientos resonantes en largada y arfada. En largada, dado que el periodo de resonancia es largo estos puede ser inducidos por efectos de segundo orden de deriva lenta que pueden tener una influencia muy significativa en las cargas en los fondeos. En arfada las fuerzas de primer orden pueden inducir grandes movimientos y por tanto la correcta determinación del amortiguamiento es esencial para la analizar la operatividad de la plataforma. Esta tesis ha investigado estos dos efectos, para ello se ha usado como caso base el diseño de una plataforma desarrollada en el proyecto Europeo Hiprwind. La plataforma se compone de 3 columnas cilíndricas unidas mediante montantes estructurales horizontales y diagonales, Los cilindros proporcionan flotabilidad y momentos adrizante. A la base de cada columna se le ha añadido un gran “Heave Plate” o placa de cierre. El diseño es similar a otros diseños previos (Windfloat). Se ha fabricado un modelo a escala de una de las columnas para el estudio detallado del amortiguamiento mediante oscilaciones forzadas. Las dimensiones del modelo (1m diámetro en la placa de cierre) lo hacen, de los conocidos por el candidato, el mayor para el que se han publicado datos. El diseño del cilindro se ha realizado de tal manera que permite la fijación de placas de cierre planas o con refuerzo, ambos modelos se han fabricado y analizado. El modelo con refuerzos es una reproducción exacta del diseño a escala real incluyendo detalles distintivos del mismo, siendo el más importante la placa vertical perimetral. Los ensayos de oscilaciones forzadas se han realizado para un rango de frecuencias, tanto para el disco plano como el reforzado. Se han medido las fuerzas durante los ensayos y se han calculado los coeficientes de amortiguamiento y de masa añadida. Estos coeficientes son necesarios para el cálculo del fondeo mediante simulaciones en el dominio del tiempo. Los coeficientes calculados se han comparado con la literatura existente, con cálculos potenciales y por ultimo con cálculos CFD. Para disponer de información relevante para el diseño estructural de la plataforma se han medido y analizado experimentalmente las presiones en la parte superior e inferior de cada placa de cierre. Para la correcta estimación numérica de las fuerzas de deriva lenta en la plataforma se ha realizado una campaña experimental que incluye ensayos con modelo cautivo de la plataforma completa en olas bicromaticas. Pese a que estos experimentos no reproducen un escenario de oleaje realista, los mismos permiten una verificación del modelo numérico mediante la comparación de fuerzas medidas en el modelo físico y el numérico. Como resultados de esta tesis podemos enumerar las siguientes conclusiones. 1. El amortiguamiento y la masa añadida muestran una pequeña dependencia con la frecuencia pero una gran dependencia con la amplitud del movimiento. siendo coherente con investigaciones existentes. 2. 
Las medidas con la placa de cierre reforzada con cierre vertical en el borde, muestra un amortiguamiento significativamente menor comparada con la placa plana. Esto implica que para ensayos de canal es necesario incluir estos detalles en el modelo. 3. La masa añadida no muestra grandes variaciones comparando placa plana y placa con refuerzos. 4. Un coeficiente de amortiguamiento del 6% del crítico se puede considerar conservador para el cálculo en el dominio de la frecuencia. Este amortiguamiento es equivalente a un coeficiente de “drag” de 4 en elementos de Morison cuadráticos en las placas de cierre usadas en simulaciones en el dominio del tiempo. 5. Se han encontrado discrepancias en algunos valores de masa añadida y amortiguamiento de la placa plana al comparar con datos publicados. Se han propuesto algunas explicaciones basadas en las diferencias en la relación de espesores, en la distancia a la superficie libre y también relacionadas con efectos de escala. 6. La presión en la placa con refuerzos son similares a las de la placa plana, excepto en la zona del borde donde la placa con refuerzo vertical induce una gran diferencias de presiones entre la cara superior e inferior. 7. La máxima diferencia de presión escala coherentemente con la fuerza equivalente a la aceleración de la masa añadida distribuida sobre la placa. 8. Las masas añadidas calculadas con el código potencial (WADAM) no son suficientemente precisas, Este software no contempla el modelado de placas de pequeño espesor con dipolos, la poca precisión de los resultados aumenta la importancia de este tipo de elementos al realizar simulaciones con códigos potenciales para este tipo de plataformas que incluyen elementos de poco espesor. 9. Respecto al código CFD (Ansys CFX) la precisión de los cálculos es razonable para la placa plana, esta precisión disminuye para la placa con refuerzo vertical en el borde, como era de esperar dado la mayor complejidad del flujo. 10. Respecto al segundo orden, los resultados, en general, muestran que, aunque la tendencia en las fuerzas de segundo orden se captura bien con los códigos numéricos, se observan algunas reducciones en comparación con los datos experimentales. Las diferencias entre simulaciones y datos experimentales son mayores al usar la aproximación de Newman, que usa únicamente resultados de primer orden para el cálculo de las fuerzas de deriva media. 11. Es importante remarcar que las tendencias observadas en los resultados con modelo fijo cambiarn cuando el modelo este libre, el impacto que los errores en las estimaciones de fuerzas segundo orden tienen en el sistema de fondeo dependen de las condiciones ambientales que imponen las cargas ultimas en dichas líneas. En cualquier caso los resultados que se han obtenido en esta investigación confirman que es necesaria y deseable una detallada investigación de los métodos usados en la estimación de las fuerzas no lineales en las turbinas flotantes para que pueda servir de guía en futuros diseños de estos sistemas. Finalmente, el candidato espera que esta investigación pueda beneficiar a la industria eólica offshore en mejorar el diseño hidrodinámico del concepto semisumergible. ABSTRACT Electrical power obtained from floating offshore wind turbines is one of the promising resources which can reduce the fossil fuel energy consumption and cover worldwide energy demands. The concept is the most competitive in countries, such as Spain, where the continental shelf is narrow and does not provide space for fixed structures. 
Among the different floating structures concepts, this thesis has dealt with the semisubmersible one. Platforms of this kind may experience resonant motions both in surge and heave directions. In surge, since the platform natural period is long, such resonance can be excited with second order slow drift forces and may have substantial influence on mooring loads. In heave, first order forces can induce significant motion, whose damping is a crucial factor for the platform downtime. These two topics have been investigated in this thesis. To this aim, a design developed during HiPRWind EU project, has been selected as reference case study. The platform is composed of three cylindrical legs, linked together by a set of structural braces. The cylinders provide buoyancy and restoring forces and moments. Large circular heave plates have been attached to their bases. The design is similar to other documented in literature (e.g. Windfloat), which implies outcomes could have a general value. A large scale model of one of the legs has been built in order to study heave damping through forced oscillations. The final dimensions of the specimen (one meter diameter discs) make it, to the candidate’s knowledge, the largest for which data has been published. The model design allows for the fitting of either a plain solid heave plate or a flapped reinforced one; both have been built. The latter is a model scale reproduction of the prototype heave plate and includes some distinctive features, the most important being the inclusion of a vertical flap on its perimeter. The forced oscillation tests have been conducted for a range of frequencies and amplitudes, with both the solid plain model and the vertical flap one. Forces have been measured, from which added mass and damping coefficients have been obtained. These are necessary to accurately compute time-domain simulations of mooring design. The coefficients have been compared with literature, and potential flow and CFD predictions. In order to provide information for the structural design of the platform, pressure measurements on the top and bottom side of the heave discs have been recorded and pressure differences analyzed. In addition, in order to conduct a detailed investigation on the numerical estimations of the slow-drift forces of the HiPRWind platform, an experimental campaign involving captive (fixed) model tests of a model of the whole platform in bichromatic waves has been carried out. Although not reproducing the more realistic scenario, these tests allowed a preliminary verification of the numerical model based directly on the forces measured on the structure. The following outcomes can be enumerated: 1. Damping and added mass coefficients show, on one hand, a small dependence with frequency and, on the other hand, a large dependence with the motion amplitude, which is coherent with previously published research. 2. Measurements with the prototype plate, equipped with the vertical flap, show that damping drops significantly when comparing this to the plain one. This implies that, for tank tests of the whole floater and turbine, the prototype plate, equipped with the flap, should be incorporated to the model. 3. Added mass values do not suffer large alterations when comparing the plain plate and the one equipped with a vertical flap. 4. A conservative damping coefficient equal to 6% of the critical damping can be considered adequate for the prototype heave plate for frequency domain analysis. 
A corresponding drag coefficient equal to 4.0 can be used in time domain simulations to define Morison elements. 5. When comparing to published data, some discrepancies in added mass and damping coefficients for the solid plain plate have been found. Explanations have been suggested, focusing mainly on differences in thickness ratio and distance to the free surface, and eventual scale effects. 6. Pressures on the plate equipped with the vertical flap are similar in magnitude to those of the plain plate, even though substantial differences are present close to the edge, where the flap induces a larger pressure difference in the reinforced case. 7. The maximum pressure difference scales coherently with the force equivalent to the acceleration of the added mass, distributed over the disc surface. 8. Added mass coefficient values predicted with the potential solver (WADAM) are not accurate enough. The used solver does not contemplate modeling thin plates with doublets. The relatively low accuracy of the results highlights the importance of these elements when performing potential flow simulations of offshore platforms which include thin plates. 9. For the full CFD solver (Ansys CFX), the accuracy of the computations is found reasonable for the plain plate. Such accuracy diminishes for the disc equipped with a vertical flap, an expected result considering the greater complexity of the flow. 10. In regards to second order effects, in general, the results showed that, although the main trend in the behavior of the second-order forces is well captured by the numerical predictions, some under prediction of the experimental values is visible. The gap between experimental and numerical results is more pronounced when Newman’s approximation is considered, making use exclusively of the mean drift forces calculated in the first-order solution. 11. It should be observed that the trends observed in the fixed model test may change when the body is free to float, and the impact that eventual errors in the estimation of the second-order forces may have on the mooring system depends on the characteristics of the sea conditions that will ultimately impose the maximum loads on the mooring lines. Nevertheless, the preliminary results obtained in this research do confirm that a more detailed investigation of the methods adopted for the estimation of the nonlinear wave forces on the FOWT would be welcome and may provide some further guidance for the design of such systems. As a final remark, the candidate hopes this research can benefit the offshore wind industry in improving the hydrodynamic design of the semi-submersible concept.
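The added mass and damping coefficients discussed above are typically extracted from the force record of a forced heave oscillation. A minimal sketch of the standard first-harmonic (Fourier) reduction under the usual assumptions (imposed motion z = a·sin(omega·t) over integer periods, force model F = -A·z'' - B·z' with static/restoring parts removed; this is not the thesis code, and the synthetic check values are invented):

```python
import numpy as np

def added_mass_and_damping(t, force, a, omega):
    """First-harmonic reduction of a forced-oscillation force record.

    Assumes imposed motion z(t) = a*sin(omega*t) sampled uniformly over an
    integer number of periods and the force model F = -A*z'' - B*z'. The
    component of F in phase with the acceleration gives the added mass A,
    the one in phase with the velocity gives the linearised damping B.
    """
    fs = 2.0 * np.mean(force * np.sin(omega * t))   # acceleration-phase component
    fc = 2.0 * np.mean(force * np.cos(omega * t))   # velocity-phase component
    A = fs / (a * omega**2)
    B = -fc / (a * omega)
    return A, B

# Synthetic check: A = 500 kg, B = 200 N*s/m, a = 0.1 m, omega = 2 rad/s
t = np.linspace(0.0, 10 * np.pi, 20000)             # exactly 10 periods
z_acc = -0.1 * 2.0**2 * np.sin(2.0 * t)
z_vel = 0.1 * 2.0 * np.cos(2.0 * t)
F = -500.0 * z_acc - 200.0 * z_vel
print(added_mass_and_damping(t, F, a=0.1, omega=2.0))   # ~ (500.0, 200.0)
```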

Relevância:

20.00%

Publicador:

Resumo:

La mejora de la calidad del aire es una tarea eminentemente interdisciplinaria. Dada la gran variedad de ciencias y partes involucradas, dicha mejora requiere de herramientas de evaluación simples y completamente integradas. La modelización para la evaluación integrada (integrated assessment modeling) ha demostrado ser una solución adecuada para la descripción de los sistemas de contaminación atmosférica puesto que considera cada una de las etapas involucradas: emisiones, química y dispersión atmosférica, impactos ambientales asociados y potencial de disminución. Varios modelos de evaluación integrada ya están disponibles a escala continental, cubriendo cada una de las etapas antes mencionadas, siendo el modelo GAINS (Greenhouse Gas and Air Pollution Interactions and Synergies) el más reconocido y usado en el contexto europeo de toma de decisiones medioambientales. Sin embargo, es deseable el manejo de la calidad del aire a escala nacional/regional dentro del marco de la evaluación integrada. Esto, sin embargo, no se lleva a cabo de manera satisfactoria con modelos a escala europea debido a la falta de resolución espacial o de detalle en los datos auxiliares, principalmente los inventarios de emisión y los patrones meteorológicos, entre otros. El objetivo de esta tesis es presentar los desarrollos en el diseño y aplicación de un modelo de evaluación integrada especialmente concebido para España y Portugal. El modelo AERIS (Atmospheric Evaluation and Research Integrated system for Spain) es capaz de cuantificar perfiles de concentración para varios contaminantes (NO2, SO2, PM10, PM2,5, NH3 y O3), el depósito atmosférico de especies de azufre y nitrógeno, así como sus impactos en cultivos, vegetación, ecosistemas y salud, como respuesta a cambios porcentuales en las emisiones de sectores relevantes. La versión actual de AERIS considera 20 sectores de emisión, ya sean equivalentes a sectores individuales SNAP o macrosectores, cuya contribución a los niveles de calidad del aire, al depósito y a los impactos ha sido modelada a través de matrices fuente-receptor (SRMs). Estas matrices son constantes de proporcionalidad que relacionan cambios en emisiones con diferentes indicadores de calidad del aire y han sido obtenidas a través de parametrizaciones estadísticas de un modelo de calidad del aire (AQM). Para el caso concreto de AERIS, su modelo de calidad del aire "de origen" consistió en el modelo WRF para la meteorología y en el modelo CMAQ para los procesos químico-atmosféricos. La cuantificación del depósito atmosférico y de los impactos en ecosistemas, cultivos, vegetación y salud humana se ha realizado siguiendo las metodologías estándar establecidas bajo los marcos internacionales de negociación, tales como el CLRTAP. La estructura de programación está basada en MATLAB®, permitiendo gran compatibilidad con software típico de escritorio como Microsoft Excel® o ArcGIS®. En relación con los niveles de calidad del aire, AERIS es capaz de proveer datos de media anual y media mensual, así como el 19º valor horario más alto para NO2, el 25º valor horario y el 4º valor diario más altos para SO2, el 36º valor diario más alto para PM10, y el 26º valor octohorario más alto, SOMO35 y AOT40 para O3. En relación al depósito atmosférico, puede determinarse el depósito acumulado anual por unidad de área de especies de nitrógeno oxidado y reducido, al igual que de azufre.
Cuando los valores anteriormente mencionados se relacionan con características del dominio modelado, tales como uso de suelo, cubiertas vegetales y forestales, censos poblacionales o estudios epidemiológicos, puede calcularse un gran número de impactos. Centrándose en los impactos a ecosistemas y suelos, AERIS es capaz de estimar las superaciones de cargas críticas y las superaciones medias acumuladas para especies de nitrógeno y azufre. Los daños a bosques se calculan como una superación de los niveles críticos establecidos de NO2 y SO2. Además, AERIS es capaz de cuantificar daños causados por O3 y SO2 en vid, maíz, patata, arroz, girasol, tabaco, tomate, sandía y trigo. Los impactos en salud humana han sido modelados como consecuencia de la exposición a PM2,5 y O3 y cuantificados como pérdidas en la esperanza de vida estadística e indicadores de mortalidad prematura. La exactitud del modelo de evaluación integrada ha sido contrastada estadísticamente con los resultados obtenidos por el modelo de calidad del aire convencional, exhibiendo en la mayoría de los casos un buen nivel de correspondencia. Debido a que la cuantificación de los impactos no es llevada a cabo directamente por el modelo de calidad del aire, se ha realizado un análisis de credibilidad mediante la comparación de los resultados de AERIS con los de GAINS para un escenario de emisiones determinado. El análisis reveló un buen nivel de correspondencia en las medias y en las distribuciones probabilísticas de los conjuntos de datos. Las pruebas de verificación que fueron aplicadas a AERIS sugieren que los resultados son suficientemente consistentes para ser considerados razonables y realistas. En conclusión, la principal motivación para la creación del modelo fue producir una herramienta confiable y a la vez simple para dar soporte a las partes involucradas en la toma de decisiones, de cara a analizar diferentes escenarios "y si" con un bajo coste computacional. La interacción con políticos y otros actores dictó encontrar un compromiso entre la complejidad del modelado medioambiental y el carácter conciso de las políticas, algo que AERIS refleja en sus estructuras conceptual y computacional. Finalmente, cabe decir que AERIS ha sido creado para su uso exclusivo dentro de un marco de evaluación y de ninguna manera debe ser considerado como un sustituto de los modelos de calidad del aire ordinarios. ABSTRACT Improving air quality is an eminently interdisciplinary task. The wide variety of sciences and stakeholders involved calls for simple yet fully integrated and reliable evaluation tools. Integrated Assessment Modeling has proved to be a suitable solution for the description of air pollution systems because it considers each of the stages involved: emissions, atmospheric chemistry, dispersion, environmental impacts and abatement potentials. Some integrated assessment models covering each of these stages are available at the European scale, the Greenhouse Gas and Air Pollution Interactions and Synergies (GAINS) model being the most recognized and widely used within the European policy-making context. However, addressing air quality at the national/regional scale under an integrated assessment framework is also desirable, and this cannot be done satisfactorily with European-scale models, which lack the necessary spatial resolution and detail in their ancillary data, mainly emission inventories and local meteorology patterns, as well as in the associated results.
The objective of this dissertation is to present the developments in the design and application of an Integrated Assessment Model especially conceived for Spain and Portugal. The Atmospheric Evaluation and Research Integrated system for Spain (AERIS) is able to quantify concentration profiles for several pollutants (NO2, SO2, PM10, PM2.5, NH3 and O3), the atmospheric deposition of sulfur and nitrogen species and their related impacts on crops, vegetation, ecosystems and health as a response to percentage changes in the emissions of relevant sectors. The current version of AERIS considers 20 emission sectors, either corresponding to individual SNAP sectors or macrosectors, whose contributions to air quality levels, deposition and impacts have been modeled through the use of source-receptor matrices (SRMs). These matrices are proportionality constants that relate emission changes with different air quality indicators and have been derived through statistical parameterizations of an air quality modeling system (AQM). For the concrete case of AERIS, its parent AQM relied on the WRF model for meteorology and on the CMAQ model for atmospheric chemical processes. The quantification of atmospheric deposition and of the impacts on ecosystems, crops, vegetation and human health has been carried out following the standard methodologies established under international negotiation frameworks such as CLRTAP. The programming structure is MATLAB®-based, allowing great compatibility with typical desktop software such as Microsoft Excel® or ArcGIS®. Regarding air quality levels, AERIS is able to provide mean annual and mean monthly concentration values, as well as the indicators established in Directive 2008/50/EC, namely the 19th highest hourly value for NO2, the 25th highest hourly value and the 4th highest daily value for SO2, the 36th highest daily value for PM10, and the 26th highest maximum daily 8-hour value, SOMO35 and AOT40 for O3. Regarding atmospheric deposition, the annual accumulated deposition per unit area of oxidized and reduced nitrogen species, as well as of sulfur, can be estimated. When relating the aforementioned values to specific characteristics of the modeling domain, such as land use, forest and crop covers, population counts and epidemiological studies, a wide array of impacts can be calculated. Focusing on impacts on ecosystems and soils, AERIS is able to estimate critical load exceedances and accumulated average exceedances for nitrogen and sulfur species. Damage to forests is estimated as an exceedance of the established critical levels of NO2 and SO2. Additionally, AERIS is able to quantify damage caused by O3 and SO2 on grapes, maize, potato, rice, sunflower, tobacco, tomato, watermelon and wheat. Impacts on human health are modeled as a consequence of exposure to PM2.5 and O3 and quantified as losses in statistical life expectancy and premature mortality indicators. The accuracy of the IAM has been tested by statistically contrasting the obtained results with those yielded by the conventional AQM, showing a good level of agreement in most cases. Because impacts cannot be directly produced by the AQM, a credibility analysis was carried out for the outputs of AERIS for a given emission scenario by comparing them, through probability tests, against the performance of GAINS for the same scenario. This analysis revealed a good correspondence in the mean behavior and the probabilistic distributions of the datasets.
The verification tests that were applied to AERIS suggest that the results are consistent enough to be credited as reasonable and realistic. In conclusion, the main reason that motivated the creation of this model was to produce a reliable yet simple screening tool that would provide decision- and policy-making support for different "what-if" scenarios at a low computing cost. The interaction with politicians and other stakeholders dictated that reconciling the complexity of modeling with the conciseness of policies should be reflected by AERIS in both its conceptual and computational structures. It should be noted, however, that AERIS has been created under a policy-driven framework and by no means should be considered a substitute for ordinary AQMs.
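AERIS evaluates scenarios through source-receptor matrices that map emission changes onto air-quality indicators. A minimal sketch of how such an SRM evaluation is structured (three sectors and four receptor cells instead of the real 20 sectors; every number below is invented):

```python
import numpy as np

# Illustrative source-receptor evaluation (all numbers invented).
# srm[i, s] is the change of the indicator in receptor cell i per +1 %
# change in the emissions of sector s; AERIS uses 20 sectors, 3 shown here.
baseline = np.array([22.0, 18.5, 30.1, 12.3])        # e.g. NO2 annual mean, ug/m3
srm = np.array([[0.10, 0.02, 0.05],
                [0.08, 0.01, 0.04],
                [0.15, 0.03, 0.07],
                [0.05, 0.01, 0.02]])

delta_emissions_pct = np.array([-30.0, 0.0, -10.0])  # a "what-if" scenario
scenario = baseline + srm @ delta_emissions_pct
print(scenario)                                       # indicator under the scenario
```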

Relevância:

20.00%

Publicador:

Resumo:

The arch bridge structural solution has been known for centuries; in fact, the simple nature of the arch, which requires little tensile and shear strength, was an advantage when simple materials like stone and brick were the only option in ancient times. With the passage of time, and especially after the industrial revolution, new materials were adopted in the construction of arch bridges to reach longer spans. Nowadays a long-span arch bridge is made of steel, concrete or a combination of the two (CFST, concrete-filled steel tube), and as a result of using these high-strength materials, very long spans can be achieved. The current record for the longest arch belongs to the Chaotianmen bridge over the Yangtze river in China, with a 552-meter steel span, while the longest reinforced concrete arch is the Wanxian bridge, which also crosses the Yangtze river with a 420-meter span. Today the designer is no longer limited by span length as long as the arch bridge remains the most applicable solution among the alternatives; cable-stayed and suspension bridges become more reasonable if a very long span is desired. As for any large structure, the economic and architectural aspects of a bridge are extremely important: a narrower bridge not only has a better appearance, it also requires a smaller volume of material, which makes the design more economical. The design of such a bridge, besides high-strength materials, requires precise structural analysis approaches capable of integrating the material behaviour, the complex geometry of the structure and the various types of loads which may be applied to the bridge during its service life. Depending on the design strategy, the analysis may evaluate only the linear elastic behaviour of the structure or consider its nonlinear properties as well. Although most structures in the past were designed to act in their elastic range, the rapid increase in computational capacity allows us to consider different sources of nonlinearity in order to achieve more realistic evaluations where the dynamic behaviour of the bridge is important, especially in seismic zones where large movements may occur or the structure experiences P-Δ effects during the earthquake. This type of analysis is computationally expensive and very time consuming, and in recent years several methods have been proposed to address this problem. Discussing recent developments in these methods and their application to long-span concrete arch bridges is the main goal of this research. Accordingly, existing long-span concrete arch bridges have been studied to gather critical information about their geometrical aspects and material properties. Based on this information, several concrete arch bridges were designed for further study, with main spans ranging from 100 to 400 meters. The structural analysis methods implemented in this study are the following. Elastic analysis: Direct Response History Analysis (DRHA): this method solves the equation of motion directly over the time history of the applied acceleration or imposed load in the linear elastic range. Modal Response History Analysis (MRHA): similar to DRHA, this method is also based on a time history, but the equation of motion is reduced to single-degree-of-freedom systems and the response of each mode is calculated independently; performing this analysis requires less time than DRHA.
Modal Response Spectrum Analysis (MRSA): as its name indicates, this method calculates the peak response of the structure for each mode and combines them using modal combination rules based on the given ground motion spectra; it is expected to be the fastest of the elastic analyses. Inelastic analysis: Nonlinear Response History Analysis (NL-RHA): the most accurate strategy to address significant nonlinearities in structural dynamics is undoubtedly the nonlinear response history analysis, which is similar to DRHA but extended to the inelastic range by updating the stiffness matrix at every iteration. This onerous task clearly increases the computational cost, especially for unsymmetrical structures, which must be analyzed with a full 3D model in order to take torsional effects into consideration. Modal Pushover Analysis (MPA): the Modal Pushover Analysis is basically MRHA extended to the inelastic stage. MRHA alone cannot solve the equations of motion because the resisting force fs(u, u̇) is unknown in the inelastic stage; MPA overcomes this obstacle by using a previously recorded fs, obtained from a modal pushover, to evaluate the dynamic system. Extended Modal Pushover Analysis (EMPA): the extended modal pushover is one of the most recently proposed methods and evaluates the response of the structure under multi-directional excitation using the modal pushover strategy. In one specific mode, the original pushover neglects the contribution of directions other than the characteristic one; this is reasonable for a regular symmetric building, but a structure with a complex shape, such as a long-span arch bridge, may undergo strong modal coupling. This method intends to account for modal coupling while taking the same computation time as MPA. Coupled Nonlinear Static Pushover Analysis (CNSP): the EMPA adds the contribution of the non-characteristic directions to the formal MPA procedure. However, the static pushovers in EMPA are performed individually for every mode, so the resulting values from different modes can only be combined in the elastic phase; as soon as any element in the structure starts yielding, the neutral axis of that section is no longer fixed for both responses during the earthquake, meaning that the longitudinal deflection unavoidably affects the transverse one and vice versa. To overcome this drawback, CNSP suggests executing the pushover analysis for the governing modes of both directions at the same time. This strategy is expected to be more accurate than MPA and EMPA; moreover, the calculation time is reduced because only one pushover analysis is required. Regardless of the strategy, the accuracy of the structural analysis is highly dependent on the modelling and numerical integration approaches used in each method; therefore the widely used Finite Element Method is employed in all the analyses performed in this research. Chapter 2 starts with the information gathered about constructed long-span arch bridges and continues with the geometrical and material definition of the new models. Chapter 3 provides detailed information about the structural analysis strategies, and a step-by-step description of the procedure of every method is available in Appendix A. The document ends with the description of the results and the conclusions in Chapter 4.
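For the elastic MRSA step described above, peak modal responses are combined with standard rules such as SRSS or CQC. A small sketch of those two combination rules (generic textbook formulas with an assumed constant damping ratio; the peak values and frequencies are invented, not taken from the bridge models):

```python
import numpy as np

def srss(peaks):
    """Square-Root-of-the-Sum-of-Squares combination of peak modal responses."""
    return float(np.sqrt(np.sum(np.square(peaks))))

def cqc(peaks, omegas, zeta=0.05):
    """Complete Quadratic Combination with a constant damping ratio zeta.

    Uses the standard modal correlation coefficient, which matters when
    natural frequencies are closely spaced.
    """
    peaks = np.asarray(peaks, float)
    omegas = np.asarray(omegas, float)
    r = omegas[None, :] / omegas[:, None]            # frequency ratios
    rho = (8.0 * zeta**2 * (1.0 + r) * r**1.5 /
           ((1.0 - r**2)**2 + 4.0 * zeta**2 * r * (1.0 + r)**2))
    return float(np.sqrt(peaks @ rho @ peaks))

# Hypothetical peak modal displacements (m) and natural frequencies (rad/s)
peaks = [0.12, 0.05, 0.02]
omegas = [2.1, 2.4, 7.9]          # first two modes closely spaced
print(srss(peaks), cqc(peaks, omegas))
```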

Relevância:

20.00%

Publicador:

Resumo:

El método de Muskingum-Cunge, con más de 45 años de historia, sigue siendo uno de los más empleados a la hora de calcular el tránsito en un cauce. Una vez calibrado, permite realizar cálculos precisos, siendo asimismo mucho más rápido que los métodos que consideran las ecuaciones completas. Por esta razón, en el presente trabajo de investigación se llevó a cabo un análisis de su precisión, comparándolo con los resultados de un modelo hidráulico bidimensional. En paralelo se llevó a cabo un análisis de sus limitaciones y se ensayó una metodología práctica de aplicación. Con esta motivación se llevaron a cabo más de 200 simulaciones de tránsito en cauces prismáticos y naturales. Los cálculos se realizaron empleando el programa HEC-HMS con el método de Muskingum-Cunge de sección de 8 puntos, así como con la herramienta de cálculo hidráulico bidimensional InfoWorks ICM. Se eligieron HEC-HMS por su gran difusión e InfoWorks ICM por su rapidez de cálculo, pues emplea la tecnología CUDA (Arquitectura Unificada de Dispositivos de Cálculo). Inicialmente se validó el modelo hidráulico bidimensional contrastándolo con la formulación unidimensional en régimen uniforme y variado, así como con fórmulas analíticas de régimen variable, consiguiéndose resultados muy satisfactorios. También se llevó a cabo un análisis de la sensibilidad al mallado del modelo bidimensional aplicado a tránsitos, obteniéndose unos ábacos con tamaños recomendados de los elementos 2D que cuantifican el error cometido. Con la técnica del análisis dimensional se revisó una correlación de los resultados obtenidos entre ambos métodos, ponderando su precisión y definiendo intervalos de validez para la mejor utilización del método de Muskingum-Cunge. Simultáneamente se desarrolló una metodología que permite obtener la sección característica media de 8 puntos para el cálculo de un tránsito, basándose en una serie de simulaciones bidimensionales simplificadas. De este modo se pretende facilitar el uso y la correcta definición de los modelos hidrológicos. The Muskingum-Cunge methodology, which has been used for more than 45 years, is still one of the main procedures to calculate stream routing. Once calibrated, it gives precise results and is also much faster than methods that consider the full hydraulic equations. Therefore, in the present investigation an analysis of its accuracy was carried out by comparing it with the results of a two-dimensional hydraulic model. At the same time, reasonable ranges of applicability as well as an iterative method for its adequate use were defined. With this motivation, more than 200 stream-routing simulations were conducted in both prismatic and natural waterways. Calculations were performed with HEC-HMS, choosing the Muskingum-Cunge eight-point cross-section method, and with InfoWorks ICM, a two-dimensional hydraulic modelling package. HEC-HMS was chosen because of its extensive use, and InfoWorks ICM for its calculation speed, as it takes advantage of CUDA technology (Compute Unified Device Architecture). Initially, the two-dimensional hydraulic engine was validated against the one-dimensional formulation in both uniform and varied flow, as well as against analytical formulae for unsteady flow, achieving very satisfactory results. A mesh-size sensitivity analysis of the two-dimensional routing model was also conducted, producing charts with suggested 2D element sizes that quantify the error committed.
With the technique of dimensional analysis a correlation of results between the two methods was reviewed, assessing their accuracy and defining valid intervals for improved use of the Muskingum-Cunge method. Simultaneously, a methodology to draw a representative 8 point cross-section was developed, based on a sequence of simplified two-dimensional simulations. This procedure is intended to provide a simplified approach and accurate definition of hydrological models.
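For reference, the routing scheme being benchmarked has a very compact recursive form. A minimal sketch of classic Muskingum routing, with a comment on how the Muskingum-Cunge variant obtains its parameters from channel hydraulics (illustrative only, not the HEC-HMS implementation; the inflow hydrograph and parameter values are invented):

```python
import numpy as np

def muskingum_route(inflow, K, X, dt, O0=None):
    """Classic Muskingum routing: O[t] = C1*I[t] + C2*I[t-1] + C3*O[t-1].

    In the Muskingum-Cunge variant, K and X are not calibrated but taken
    from channel hydraulics, roughly K = dx / c and
    X = 0.5 * (1 - q / (S0 * c * dx)), with c the kinematic wave celerity,
    q the unit discharge, S0 the bed slope and dx the reach length.
    """
    inflow = np.asarray(inflow, float)
    denom = 2.0 * K * (1.0 - X) + dt
    C1 = (dt - 2.0 * K * X) / denom
    C2 = (dt + 2.0 * K * X) / denom
    C3 = (2.0 * K * (1.0 - X) - dt) / denom          # C1 + C2 + C3 = 1
    out = np.empty_like(inflow)
    out[0] = inflow[0] if O0 is None else O0
    for t in range(1, len(inflow)):
        out[t] = C1 * inflow[t] + C2 * inflow[t - 1] + C3 * out[t - 1]
    return out

# Synthetic triangular inflow hydrograph (m3/s) with dt = 1 h, K = 2 h, X = 0.2
I = np.concatenate([np.linspace(10, 100, 6), np.linspace(100, 10, 10)])
print(muskingum_route(I, K=2.0, X=0.2, dt=1.0).round(1))
```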

Relevância:

20.00%

Publicador:

Resumo:

El diseño de una antena reflectarray bajo la aproximación de periodicidad local requiere la determinación de la matriz de scattering de estructuras multicapa con metalizaciones periódicas para un gran número de geometrías diferentes. Por lo tanto, a la hora de diseñar antenas reflectarray en tiempos de CPU razonables, se necesitan herramientas númericas rápidas y precisas para el análisis de las estructuras periódicas multicapa. En esta tesis se aplica la versión Galerkin del Método de los Momentos (MDM) en el dominio espectral al análisis de las estructuras periódicas multicapa necesarias para el diseño de antenas reflectarray basadas en parches apilados o en dipolos paralelos coplanares. Desgraciadamente, la aplicación de este método numérico involucra el cálculo de series dobles infinitas, y mientras que algunas series convergen muy rápidamente, otras lo hacen muy lentamente. Para aliviar este problema, en esta tesis se propone un novedoso MDM espectral-espacial para el análisis de las estructuras periódicas multicapa, en el cual las series rápidamente convergente se calculan en el dominio espectral, y las series lentamente convergentes se calculan en el dominio espacial mediante una versión mejorada de la formulación de ecuaciones integrales de potenciales mixtos (EIPM) del MDM. Esta versión mejorada se basa en la interpolación eficiente de las funciones de Green multicapa periódicas, y en el cálculo eficiente de las integrales singulares que conducen a los elementos de la matriz del MDM. El novedoso método híbrido espectral-espacial y el tradicional MDM en el dominio espectral se han comparado en el caso de los elementos reflectarray basado en parches apilados. Las simulaciones numéricas han demostrado que el tiempo de CPU requerido por el MDM híbrido es alrededor de unas 60 veces más rápido que el requerido por el tradicional MDM en el dominio espectral para una precisión de dos cifras significativas. El uso combinado de elementos reflectarray con parches apilados y técnicas de optimización de banda ancha ha hecho posible diseñar antenas reflectarray de transmisiónrecepción (Tx-Rx) y polarización dual para aplicaciones de espacio con requisitos muy restrictivos. Desgraciadamente, el nivel de aislamiento entre las polarizaciones ortogonales en antenas DBS (típicamente 30 dB) es demasiado exigente para ser conseguido con las antenas basadas en parches apilados. Además, el uso de elementos reflectarray con parches apilados conlleva procesos de fabricación complejos y costosos. En esta tesis se investigan varias configuraciones de elementos reflectarray basadas en conjuntos de dipolos paralelos con el fin de superar los inconvenientes que presenta el elemento basado en parches apilados. Primeramente, se propone un elemento consistente en dos conjuntos apilados ortogonales de tres dipolos paralelos para aplicaciones de polarización dual. Se ha diseñado, fabricado y medido una antena basada en este elemento, y los resultados obtenidos para la antena indican que tiene unas altas prestaciones en términos de ancho de banda, pérdidas, eficiencia y discriminación contrapolar, además de requerir un proceso de fabricación mucho más sencillo que el de las antenas basadas en tres parches apilados. 
Desgraciadamente, el elemento basado en dos conjuntos ortogonales de tres dipolos paralelos no proporciona suficientes grados de libertad para diseñar antenas reflectarray de transmisión-recepción (Tx-Rx) de polarización dual para aplicaciones de espacio por medio de técnicas de optimización de banda ancha. Por este motivo, en la tesis se propone un nuevo elemento reflectarray que proporciona los grados de libertad suficientes para cada polarización. El nuevo elemento consiste en dos conjuntos ortogonales de cuatro dipolos paralelos. Cada conjunto contiene tres dipolos coplanares y un dipolo apilado. Para poder acomodar los dos conjuntos de dipolos en una sola celda de la antena reflectarray, el conjunto de dipolos de una polarización está desplazado medio período con respecto al conjunto de dipolos de la otra polarización. Este hecho permite usar solamente dos niveles de metalización para cada elemento de la antena, lo cual simplifica el proceso de fabricación como en el caso del elemento basados en dos conjuntos de tres dipolos paralelos coplanares. Una antena de doble polarización y doble banda (Tx-Rx) basada en el nuevo elemento ha sido diseñada, fabricada y medida. La antena muestra muy buenas presentaciones en las dos bandas de frecuencia con muy bajos niveles de polarización cruzada. Simulaciones numéricas presentadas en la tesis muestran que estos bajos de niveles de polarización cruzada se pueden reducir todavía más si se llevan a cabo pequeñas rotaciones de los dos conjuntos de dipolos asociados a cada polarización. ABSTRACT The design of a reflectarray antenna under the local periodicity assumption requires the determination of the scattering matrix of a multilayered structure with periodic metallizations for quite a large number of different geometries. Therefore, in order to design reflectarray antennas within reasonable CPU times, fast and accurate numerical tools for the analysis of the periodic multilayered structures are required. In this thesis the Galerkin’s version of the Method of Moments (MoM) in the spectral domain is applied to the analysis of the periodic multilayered structures involved in the design of reflectarray antennas made of either stacked patches or coplanar parallel dipoles. Unfortunately, this numerical approach involves the computation of double infinite summations, and whereas some of these summations converge very fast, some others converge very slowly. In order to alleviate this problem, in the thesis a novel hybrid MoM spectral-spatial domain approach is proposed for the analysis of the periodic multilayered structures. In the novel approach, whereas the fast convergent summations are computed in the spectral domain, the slowly convergent summations are computed by means of an enhanced Mixed Potential Integral Equation (MPIE) formulation of the MoM in the spatial domain. This enhanced formulation is based on the efficient interpolation of the multilayered periodic Green’s functions, and on the efficient computation of the singular integrals leading to the MoM matrix entries. The novel hybrid spectral-spatial MoM code and the standard spectral domain MoM code have both been compared in the case of reflectarray elements based on multilayered stacked patches. Numerical simulations have shown that the CPU time required by the hybrid MoM is around 60 times smaller than that required by the standard spectral MoM for an accuracy of two significant figures. 
The combined use of reflectarray elements based on stacked patches and wideband optimization techniques has made it possible to design dual polarization transmit-receive (Tx-Rx) reflectarrays for space applications with stringent requirements. Unfortunately, the required level of isolation between orthogonal polarizations in DBS antennas (typically 30 dB) is hard to achieve with the configuration of stacked patches. Moreover, the use of reflectarrays based on stacked patches leads to a complex and expensive manufacturing process. In this thesis, we investigate several configurations of reflectarray elements based on sets of parallel dipoles that try to overcome the drawbacks introduced by the element based on stacked patches. First, an element based on two stacked orthogonal sets of three coplanar parallel dipoles is proposed for dual polarization applications. An antenna made of this element has been designed, manufactured and measured, and the results obtained show that the antenna presents a high performance in terms of bandwidth, losses, efficiency and cross-polarization discrimination, while the manufacturing process is cheaper and simpler than that of the antennas made of stacked patches. Unfortunately, the element based on two sets of three coplanar parallel dipoles does not provide enough degrees of freedom to design dual-polarization transmit-receive (Tx-Rx) reflectarray antennas for space applications by means of wideband optimization techniques. For this reason, in the thesis a new reflectarray element is proposed which does provide enough degrees of freedom for each polarization. This new element consists of two orthogonal sets of four parallel dipoles, each set containing three coplanar dipoles and one stacked dipole. In order to accommodate the two sets of dipoles in each reflectarray cell, the set of dipoles for one polarization is shifted half a period from the set of dipoles for the other polarization. This also makes it possible to use only two levels of metallization for the reflectarray element, which simplifies the manufacturing process as in the case of the reflectarray element based on two sets of three parallel dipoles. A dual polarization dual-band (Tx-Rx) reflectarray antenna based on the new element has been designed, manufactured and measured. The antenna shows a very good performance in both Tx and Rx frequency bands with very low levels of cross-polarization. Numerical simulations carried out in the thesis have shown that the low levels of cross-polarization can be even made smaller by means of small rotations of the two sets of dipoles associated to each polarization.
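The spectral-domain MoM described above hinges on double Floquet summations whose convergence depends strongly on the observation point. As a self-contained illustration of such a summation (one common spectral representation of the free-space periodic Green's function; the normalization, sign conventions and numerical values are assumptions, and this is not the thesis code):

```python
import numpy as np

def periodic_greens_spectral(x, y, z, a, b, k0, kx0=0.0, ky0=0.0, N=20):
    """Truncated Floquet (spectral) series for the free-space periodic
    Green's function of a 2-D phased array of point sources with periods
    a and b and impressed phasing (kx0, ky0).

    Convergence is governed by exp(-|Im(kz_mn)|*|z|): fast for points far
    from the array plane, very slow for the nearly coplanar points that a
    printed reflectarray element actually requires.
    """
    G = 0.0 + 0.0j
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            kxm = kx0 + 2.0 * np.pi * m / a
            kyn = ky0 + 2.0 * np.pi * n / b
            kz = np.sqrt(complex(k0**2 - kxm**2 - kyn**2))
            if kz.imag > 0:                  # branch with decaying evanescent modes
                kz = -kz
            G += np.exp(-1j * (kxm * x + kyn * y + kz * abs(z))) / kz
    return G / (2j * a * b)

# Convergence check close to the array plane (periods 10 mm, wavelength 25 mm)
for N in (5, 10, 20, 40):
    print(N, periodic_greens_spectral(0.001, 0.002, 0.0005,
                                      a=0.01, b=0.01, k0=2 * np.pi / 0.025, N=N))
```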

Relevância:

20.00%

Publicador:

Resumo:

Recent technological developments allow observational oceanography to transition from a ship-based to a networked concept. The latter suggests that the most efficient and effective way to observe the ocean is through a fleet of spatially distributed autonomous platforms complemented by remote sensing. Due to their maneuverability, autonomy and endurance at sea, underwater gliders are already playing a significant role in this networked observational approach. Underwater gliders were specifically designed to sample vast areas of the ocean. They are torpedo-shaped robots that use their hydrodynamic shape, wings and buoyancy changes to induce horizontal and vertical motion through the water column. A sensor measuring conductivity, temperature and depth (CTD) is a standard payload of this platform, because certain ocean dynamic variables can be derived from temperature, depth and salinity, the latter being inferred from measurements of temperature and conductivity. Integrating CTD sensors in glider platforms is not free of challenges. One of them concerns the accuracy of the salinity values derived from the sampled conductivity and temperature. Specifically, salinity estimates are significantly degraded by the thermal lag between the measured temperature and the actual temperature inside the conductivity cell of the sensor. This deficiency depends on the particularities of the inflow to the sensor, on its geometry and, it has also been hypothesized, on the heat accumulated in the sensor's coating layers. The effects of thermal lag are usually mitigated by controlling the inflow conditions through the sensor, generally by pumping the water through it or by keeping its diving speed constant and known.
Although pumping systems have recently been incorporated into the CTDs carried by gliders, there are still platforms equipped with unpumped CTDs. In the latter case, salinity estimates rely on the assumption of reasonably controlled and unperturbed flow conditions at the CTD sensor. This Thesis investigates the impact, if any, that glider hydrodynamics may have on the performance of onboard CTDs. Specifically, the location of the CTD sensor (external to the hull) relative to the boundary layer developed along the glider fuselage is investigated first. This is done initially by applying a coupled inviscid-flow and boundary-layer model developed by the author, and later by using commercial computational fluid dynamics (CFD) software. Results indicate, in both cases, that the CTD sensor lies outside the boundary layer, so its inflow conditions are those of the free stream. Still, the inflow speed to the CTD sensor is the speed of the platform, which depends on its hydrodynamics. For this reason, the research has been extended to investigate the effect of the platform speed on the performance of the CTD sensor. A finite element model of the hydrodynamic and thermal behavior of the flow inside the CTD sensor has been developed for this purpose. The numerical results suggest that the thermal lag, originally attributed to heat accumulation in the sensor structure, is mostly due to the interaction of the flow through the conductivity cell with its internal geometry. This interaction differs at different glider speeds. Specifically, at low glider speeds (0.2 m/s), the mixing of the incoming flow with the older water remaining inside the cell is slowed down by the generation of coherent eddy structures, and significant departures between the real and the estimated salinity are found. In contrast, at higher glider speeds (0.4 m/s), mixing is enhanced by turbulence and instabilities; as a result, the thermal response of the CTD sensor is faster and the salinity estimates are more accurate than in the low-speed case. For completeness, the numerical results have been validated against model tests. Specifically, a scaled model of the CTD sensor was built to obtain experimental confirmation of the numerical results. Making use of the similarity principle governing incompressible flows, the experiments were carried out with air, which significantly simplifies the experimental setup and makes it feasible with limited resources. The model tests qualitatively confirm the numerical findings. Moreover, it is suggested in this Thesis that the response of the CTD sensor would improve significantly if small turbulators were added at suitable locations inside the conductivity cell.
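For reference, the thermal-lag effect discussed here is commonly compensated in post-processing with the first-order recursive correction of Lueck and Picklo (1990), in the discrete form of Morison et al. (1994). The sketch below is a minimal illustration of that published correction, not of the finite element model of the thesis; the error magnitude alpha and time constant tau are flow-dependent tuning parameters whose example values are assumptions.

```python
import numpy as np

def thermal_lag_correction(t_measured, dt, alpha, tau):
    """First-order recursive thermal-lag correction (Morison et al., 1994).

    t_measured : temperature samples from the CTD thermistor [deg C]
    dt         : sampling interval [s]
    alpha      : error magnitude (flow-speed dependent, assumed known)
    tau        : error time constant [s] (flow-speed dependent, assumed known)

    Returns the correction to add to t_measured so as to approximate the
    water temperature inside the conductivity cell.
    """
    fn = 1.0 / (2.0 * dt)                                  # Nyquist frequency
    a = 4.0 * fn * alpha * tau / (1.0 + 4.0 * fn * tau)
    b = 1.0 - 2.0 * a / alpha
    corr = np.zeros_like(t_measured, dtype=float)
    for n in range(1, len(t_measured)):
        corr[n] = -b * corr[n - 1] + a * (t_measured[n] - t_measured[n - 1])
    return corr

# Example with assumed parameters for a slow, unpumped glider CTD.
dt = 0.5                                                   # s, assumed sampling interval
t = np.array([14.0, 13.9, 13.5, 12.8, 12.2, 12.0])
t_cell = t + thermal_lag_correction(t, dt, alpha=0.08, tau=12.0)
# Practical salinity would then be derived from the measured conductivity,
# t_cell and pressure with a TEOS-10 / EOS-80 routine (e.g. the gsw package).
```

The thesis' point is that alpha and tau are not constants of the sensor alone: they depend on the glider speed through the mixing regime inside the cell, which is why a single correction tuned at one speed degrades at another.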

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The experimental housing estate of Schorlemerallee and the Am Rupenhorn villas are two projects completed in 1930 by the brothers Wassili and Hans Luckhardt with Alfons Anker in Berlin. Both projects are part of the same process, which starts with the Housing Estate, an exploration of the modern language carried out in a series of successive phases, and culminates with the Villas. The Villas, designed immediately after the Housing Estate, are the synthesis and crowning point of that experience, although they eventually transcended it: they became a model of the house in nature, of the ideal of the classical villa and of the new ways of living, reaching with time the condition of a modern canon. However, this is not their most important aspect. What is singular in this case is the design process itself, Housing Estate versus Villas: a true experiment in its concept, method and results, through which the authors investigate new technologies applied to new ways of living and develop a new language, resulting in technological prototypes in the spirit of Mies van der Rohe's words: "I have tried to make an architecture for a technological society. I have wanted to keep everything reasonable and clear... to have an architecture that anybody can do." The time and place could not be more favorable: Berlin between 1924 and 1930, at the very origin of the Modern Movement. The experiment is set up with genuine scientific rigor. The architects design, build and finance their own project, controlling all of its variables; most notably, and quite unusually, the economic variable, since for them economics is a fundamental key to the process. The aim was to prove that the New Architecture (or Neues Bauen, as they liked to call it) was able to build housing for a new society better, faster and at lower cost. Revolution and avant-garde go hand in hand: they share the Zeitgeist, or spirit of the age, a context which is a substantial part of the process and which, as Alison and Peter Smithson would put it, is a heroic context. The concept focuses on the Bauhaus triad: Design + Technology x Economy. As for the method, a number of variables, the experimental parameters, are fixed and grouped into three distinct categories: Topology, Typology and Technology. The combination of the variables within each category gives rise to a system with specific characteristics: a definition of space, a form, a language and a technology, characteristics that allow the rules for its development to be established. The resulting systems are three, named after their double typological/technological condition: 1. terraced housing in zig-zag, or Mauerwerksbauten (bearing-wall system); 2. detached housing, or Stahlskelettbauten (steel-skeleton system); 3. terraced housing in a straight row, or Betonbauten (reinforced-concrete system). The Am Rupenhorn Villas are then conceived as the verification of this process: the synthesis of the categories developed throughout the Housing Estate research. They arrive at a moment of grace, just as the Luckhardts and Anker are deeply involved in the development of a new language and fresh from the experience of Schorlemerallee, which had been a success in almost every possible respect. "In 1930, they are at the top," in the words of their best critic and long-time collaborator, Achim Wendschuh. In the Villas, the architects integrate their now fully modern language with their previous experience: their recent Expressionist phase (which could be described as their Kunstwollen) and the classical tradition of German architectural culture, namely the sense of material they owe to Semper and the sensitivity towards the landscape they take from Schinkel. The extraordinary interest of the Villas is due to factors such as the treatment of the dual relationship, unusual in modern architecture, the synthesis of languages and the circumstances of their historical moment, factors that make them a unique and unrepeatable proposal within one of the most interesting and least known experimental paths of Modernity.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for their lower clock frequencies and less efficient hardware utilization with respect to ASICs. As FPGAs become commonly used for scientific computing, designs grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain very accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that exercise each of them, and extracts the statistical moments of the system from these partial results.
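To make the interval-extension idea concrete, here is a minimal, illustrative sketch of plain affine arithmetic in Python; it is generic background only, not the statistical MAA or ME-gPC machinery of the thesis. Each quantity is stored as a centre plus coefficients of shared noise symbols, so correlations between signals survive when ranges are propagated through the datapath.

```python
from itertools import count

_noise_ids = count()

class Affine:
    """Plain affine form: x0 + sum_i xi * eps_i, with each eps_i in [-1, 1]."""

    def __init__(self, center, terms=None):
        self.center = float(center)
        self.terms = dict(terms or {})        # noise-symbol id -> coefficient

    @classmethod
    def from_interval(cls, lo, hi):
        # A fresh noise symbol represents the unknown value inside [lo, hi].
        return cls((lo + hi) / 2.0, {next(_noise_ids): (hi - lo) / 2.0})

    def radius(self):
        return sum(abs(c) for c in self.terms.values())

    def interval(self):
        return (self.center - self.radius(), self.center + self.radius())

    def __neg__(self):
        return Affine(-self.center, {k: -v for k, v in self.terms.items()})

    def __add__(self, other):
        out = dict(self.terms)
        for k, v in other.terms.items():
            out[k] = out.get(k, 0.0) + v      # shared symbols stay correlated
        return Affine(self.center + other.center, out)

    def __sub__(self, other):
        return self + (-other)

    def __mul__(self, other):
        out = {k: other.center * v for k, v in self.terms.items()}
        for k, v in other.terms.items():
            out[k] = out.get(k, 0.0) + self.center * v
        # Conservative fresh symbol bounding the second-order (nonlinear) residue.
        out[next(_noise_ids)] = self.radius() * other.radius()
        return Affine(self.center * other.center, out)

# Unlike plain interval arithmetic, x - x collapses to exactly zero.
x = Affine.from_interval(1.0, 3.0)
print((x - x).interval())   # (0.0, 0.0)
print((x * x).interval())   # conservative enclosure of [1, 9], here (-1.0, 9.0)
```

Quantization noise is modelled in this framework by attaching an extra noise symbol to every signal that is rounded or truncated, which is also where the combinatorial growth of terms mentioned below originates.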
We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the good accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue, we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group independently and then combines the partial results. In this way, the number of noise sources present in the system at any given time is kept under control and, as a consequence, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable time. We do so by presenting two novel techniques that reduce execution time by approaching the problem from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method builds on the fact that, although a given confidence level must be guaranteed for the final results of the optimization process, more relaxed levels, and therefore considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small and medium-sized problems. Finally, this thesis introduces HOPLITE, an automated, flexible and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new methodologies for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
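Purely as an illustration of the kind of evaluation such a search performs (this is not HOPLITE's API; the toy filter, word-lengths and sample sizes are arbitrary assumptions), the following sketch estimates the round-off noise of one fixed-point word-length assignment by Monte-Carlo simulation. The sample count is the knob that the incremental method keeps loose during the early stages of a greedy search and tightens only near the end.

```python
import numpy as np

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with the given number of fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def fir_fixed(x, coeffs, wl):
    """Toy 3-tap FIR datapath where each signal class is quantized to wl[name] bits."""
    xq = quantize(x, wl["input"])
    cq = quantize(coeffs, wl["coeff"])
    acc = np.zeros(len(x))
    for k, c in enumerate(cq):
        prod = quantize(np.roll(xq, k) * c, wl["product"])   # np.roll as a crude delay line
        acc = quantize(acc + prod, wl["accum"])
    return acc

def roundoff_noise_power(wl, n_samples, rng):
    """Monte-Carlo estimate of the output round-off noise power (dB)."""
    x = rng.uniform(-1.0, 1.0, n_samples)
    coeffs = np.array([0.25, 0.5, 0.25])
    ref = fir_fixed(x, coeffs, {k: 52 for k in wl})   # quasi-floating-point reference
    err = fir_fixed(x, coeffs, wl) - ref
    return 10.0 * np.log10(np.mean(err ** 2) + 1e-300)

rng = np.random.default_rng(0)
wl = {"input": 10, "coeff": 10, "product": 12, "accum": 14}
# Early in a greedy word-length search a small sample (loose confidence) suffices;
# only candidate solutions near convergence are re-evaluated with many more samples.
print(roundoff_noise_power(wl, n_samples=1_000, rng=rng))    # quick, coarse estimate
print(roundoff_noise_power(wl, n_samples=100_000, rng=rng))  # slow, tight estimate
```

A greedy optimizer would call an evaluation of this kind once per candidate word-length change, which is why both the interpolative shortcut and the relaxed early confidence levels translate directly into the reported speed-ups.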