19 results for Free cash flow to the firm
at Universidad Politécnica de Madrid
Abstract:
Offshore wind energy is one of the energy resources with the greatest potential, as it can help to reduce fossil fuel consumption and to cover energy demand worldwide. Offshore wind turbine concepts are based either on fixed structures such as jackets or on floating platforms, whether a semisubmersible or a TLP. Offshore wind energy is expected to play an important role in the energy production profile of the coming years; wind turbines must therefore be made more reliable and cost-effective to be competitive against other energy sources. Floating structures can experience resonant motions in sea states with long wave periods. These motions reduce their operability and can cause damage to the electrical components of the turbines and to the blades, as well as to the risers and moorings. The heave response can be reduced by different means: (1) increasing the damping of the system, (2) keeping the heave period outside the range of the wave energy, and (3) reducing the vertical excitation forces. A typical device for achieving this reduction is the "heave plate". Heave plates are plates used in the offshore industry for their hydrodynamic characteristics, since they increase the added mass and the damping of the system. In a conventional hydrodynamic analysis, a structure subjected to waves with given characteristics is considered and the linear loads are evaluated using potential theory. Viscous damping, which plays a crucial role in the resonant response of the system, is an input to the analysis. This thesis focuses mainly on the prediction of the viscous damping and added mass of the heave plates used in floating wind turbines. In the calculations, the hydrodynamic forces have been obtained in order to study how the hydrodynamic coefficients of added mass and damping vary with the KC number, which characterises the amplitude of the motion relative to the diameter of the disc. The influence on the hydrodynamic coefficients of the mean distance of the heave plate to the free surface or to the seabed has also been investigated. In this process, a new model describing the work done by the damping in terms of the enstrophy is presented herein. This new approach is able to provide a direct correlation between the local vortex shedding and the global damping force. The analysis also includes the study of the effects of the heave plate geometry, and examines the sensitivity of the hydrodynamic coefficients to the inclusion of porosity in the plate. A novel heave plate design based on fractal theory was also analysed experimentally and compared with experimental data obtained by other authors. A solver based on the finite volume method has been used to solve the Navier-Stokes equations. The solver uses the OpenFOAM (Open source Field Operation And Manipulation) libraries to solve a multiphase, incompressible problem, using the VOF (volume of fluid) technique to capture the motion of the free surface.
The numerical results have been compared with experiments carried out at the Canal de Ensayos Hidrodinámicos (CEHINAV) of the Universidad Politécnica de Madrid and at the Canal de Experiencias Hidrodinámicas (CEHIPAR) in Madrid, as well as with other experiments performed at the School of Mechanical Engineering of the University of Western Australia. The main results are presented below: 1. For small KC values, the hydrodynamic coefficients of added mass and damping increase as the disc approaches the seabed. For the cases in which the disc oscillates close to the free surface, the dependence of the hydrodynamic coefficients is stronger, owing to the influence of the free-surface motion. 2. The cases analysed show the existence of a critical KC value at which the trend of the hydrodynamic coefficients is altered. This critical value depends on the distance to the seabed or to the free surface. 3. The physical behaviour of the flow for KC values close to the critical value has been studied through an analysis of the vorticity field. 4. Introducing porosity into the disc reduces the added mass for the KC values studied, but porosity has been found to increase the damping coefficient as the amplitude of the motion increases, with a maximum damping for a disc with 10% porosity. 5. The numerical and experimental results for discs with flaps show that this type of geometry increases the added mass compared with the solid disc, but considerably reduces the damping coefficient. 6. A novel heave plate design based on fractal theory has been studied experimentally at different submergences and compared with experimental data obtained by other authors. The results show unclear behaviour of the coefficients, and this design should therefore be studied further. ABSTRACT Offshore wind energy is one of the promising resources which can reduce fossil fuel consumption and cover worldwide energy demands. Offshore wind turbine concepts are based on either a fixed structure such as a jacket or a floating offshore platform such as a semisubmersible, spar or tension leg platform. Floating offshore wind turbines have the potential to be an important part of the energy production profile in the coming years. In order to accomplish this wind integration, these wind turbines need to be made more reliable and cost-efficient to be competitive with other sources of energy. Floating offshore artifacts, such as oil rigs and wind turbines, may experience resonant heave motions in sea states with long peak periods. These heave resonances may increase the system downtime and cause damage to the system components as well as to risers and mooring systems. The heave resonant response may be reduced by different means: (1) increasing the damping of the system, (2) keeping the natural heave period outside the range of the wave energy, and (3) reducing the heave excitation forces. A typical device for accomplishing this reduction is the "heave plate". Heave plates are used in the offshore industry due to their hydrodynamic characteristics, i.e., increased added mass and damping. Conventional offshore hydrodynamic analysis considers a structure in waves, and evaluates the linear and nonlinear loads using potential theory.
Viscous damping, which is expected to play a crucial role in the resonant response, is an empirical input to the analysis, and is not explicitly calculated. The present research has been mainly focused on the prediction of the viscous damping and added mass of floating offshore wind turbine heave plates. In the calculations, the hydrodynamic forces have been measured in order to compute how the hydrodynamic coefficients of added mass and damping vary with the KC number, which characterises the amplitude of heave motion relative to the diameter of the disc. In addition, the influence on the hydrodynamic coefficients when the heave plate is oscillating close to the free surface or the seabed has been investigated. In this process, a new model describing the work done by damping in terms of the flow enstrophy is described herein. This new approach is able to provide a direct correlation between the local vortex shedding processes and the global damping force. The analysis also includes the study of different edge geometries, and examines the sensitivity of the damping and added mass coefficients to the porosity of the plate. A novel porous heave plate based on fractal theory has also been proposed, tested experimentally and compared with experimental data obtained by other authors for plates with similar porosity. A numerical solver of the Navier-Stokes equations, based on the finite volume technique, has been applied. It uses the open-source libraries of OpenFOAM (Open source Field Operation And Manipulation) to solve for two incompressible, isothermal, immiscible fluids using a VOF (volume of fluid) phase-fraction based interface capturing approach, with optional mesh motion and mesh topology changes including adaptive re-meshing. Numerical results have been compared with experiments conducted at the Technical University of Madrid (CEHINAV) and CEHIPAR model basins in Madrid, and with others performed at the School of Mechanical Engineering of The University of Western Australia. A brief summary of the main results is presented below: 1. At low KC numbers, a systematic increase in added mass and damping, corresponding to an increase in seabed proximity, is observed. For the cases when the heave plate is oscillating closer to the free surface, the hydrodynamic coefficients are strongly influenced by the free surface. 2. As seen in experiments, a critical KC, at which the linear trend of the hydrodynamic coefficients with KC is disrupted and which depends on the seabed or free surface distance, has been found. 3. The physical behavior of the flow around the critical KC has been explained through an analysis of the flow vorticity field. 4. Porosity reduces the added mass of the heave plates at all KC numbers studied, but porous heave plates are found to increase the damping coefficient with increasing amplitude of oscillation, achieving a maximum damping coefficient for the heave plate with 10% porosity over the entire KC range. 5. Another concept considered in this work is heave plates with flaps. Numerical and experimental results show that discs with flaps increase the added mass when compared to the plain plate but may also significantly reduce the damping. 6. A novel heave plate design based on fractal theory has been tested experimentally at different submergences and compared with experimental data obtained by other authors for porous plates. Results show unclear behavior of the coefficients, and the design should be studied further.
Future work is necessary to address a series of open questions focusing on 3D effects, optimization of heave plate shapes, etc.
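As an illustration of the forced-oscillation post-processing described above, the following Python sketch extracts the added mass and damping coefficients, together with the corresponding KC number, from a force record such as those obtained in CFD or model-basin tests. It is a minimal sketch under our own assumptions (sinusoidal heave z(t) = a·sin(ωt), an integer number of periods, illustrative variable names), not the thesis code:

```python
import numpy as np

def hydro_coefficients(t, force, a, omega, rho, D):
    """Estimate heave added mass A33 and linear damping B33 of a disc
    forced with z(t) = a*sin(omega*t) from a hydrodynamic force record.

    With z = a sin(wt):  F(t) = -A33*zdd - B33*zd
                              =  A33*a*w^2*sin(wt) - B33*a*w*cos(wt),
    so Fourier projections of F onto sin/cos isolate each coefficient."""
    T = 2.0 * np.pi / omega
    n = int((t[-1] - t[0]) / T)                  # whole periods available
    mask = t <= t[0] + n * T
    t, force = t[mask], force[mask]
    sin_part = 2.0 / (n * T) * np.trapz(force * np.sin(omega * t), t)
    cos_part = 2.0 / (n * T) * np.trapz(force * np.cos(omega * t), t)
    A33 = sin_part / (a * omega**2)              # added mass [kg]
    B33 = -cos_part / (a * omega)                # linear damping [kg/s]
    KC = 2.0 * np.pi * a / D                     # Keulegan-Carpenter number
    ref = rho * D**3                             # reference mass scale
    return KC, A33 / ref, B33 / (ref * omega)    # non-dimensional forms
```

Repeating the extraction for a sweep of amplitudes a yields the coefficient-versus-KC curves discussed in the results.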
Abstract:
The groundwater discharges from an aquifer to the river that crosses it are quantified using statistical correlations. The River Duero, Spain, increases its base flow by several m3/s as it crosses some Mesozoic carbonate outcrops in a short stretch of its headwaters; this is of special importance during the dry season, when most of the base flow of the river comes from the springs located there. These outcrops correspond to one of the two confined calcareous aquifers which developed in parallel and are hydraulically disconnected by an impermeable layer, and which together form the aquifer system of the Gormaz springs. This system is in its natural regime and is barely exploited. The conceptual model of its hydrogeological functioning is defined, considering the hydrogeological role of the Gormaz Fault, located in the discharge zone of the system. By analysing the antecedent geological information and the exploratory geophysics carried out, a better knowledge of the geometry and boundaries of the aquifers was obtained, defining an aquifer system with a recharge zone in the south, corresponding to the calcareous outcrops, which become confined towards the north beneath the Tertiary until they intersect the normal fault of Gormaz. The fault throw creates a barrier for the permeable formations at the northern end (right bank of the River Duero); in turn, the fault plane facilitates the upflow of groundwater from the aquifer system under study and places the two aquifers in hydraulic connection. The hydraulic parameters of the aquifers around the fault were also estimated. The good correlation between piezometric levels and groundwater discharges to the River Duero has allowed the reconstruction of the hydrograph of the Gormaz springs over the period 1992-2006. The groundwater contribution to the River Duero is thus calculated at 135.9 hm3/year, which represents 18.9% of the total flow of the river. In a short stretch of its headwaters, the base flow of the River Duero increases by several m3/s as it traverses some Mesozoic carbonate outcrops. This is of special importance during the dry season, when the majority of the base flow of the river proceeds from springs in this reach. The outcrops correspond to one of two confined calcareous aquifers that developed in parallel but which are not hydraulically connected because of an impermeable layer. Together, they constitute the aquifer system of the Gormaz Springs. The system is still in its natural regime and is hardly exploited. This study defines the conceptual model of hydrogeological functioning, taking into consideration the role of the Gormaz Fault, which is situated in the discharge zone of the system. Analysis of both antecedent geological information and geophysical explorations has led to a better understanding of the geometry and boundaries of the aquifers, defining an aquifer system with a recharge zone in the south corresponding to the calcareous outcrops. These calcareous outcrops are confined to the north below Tertiary formations, as far as their intersection with the normal fault of Gormaz. The throw of the fault forms the barrier of the permeable formations situated in the extreme north (right bank of the River Duero). In turn, the fault plane facilitates the upflow of groundwater from the aquifer system and creates a hydraulic connection between the two aquifers. In addition, the study estimated the hydraulic parameters of the aquifer around the fault.
The close correlation between piezometric levels and the groundwater discharges to the River Duero has enabled the reconstruction of the hydrograph of the Gormaz springs over the period 1992-2006. By this means, it is calculated that the groundwater contribution to the River Duero is 135.9 hm3/year, or 18.9% of the total river flow.
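The statistical step behind such a reconstruction can be sketched as a simple level-discharge regression. The snippet below is purely illustrative (synthetic series and invented numbers, not the study's data): it fits the correlation on a gauged window and applies it to the full piezometric record to rebuild the hydrograph and its mean annual volume.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in piezometric-level series (m), daily over 14 years:
head = 910.0 + 1.5 * np.sin(np.linspace(0, 28 * np.pi, 5110))
gauged = slice(0, 1460)                 # 4 years with discharge gaugings
# Stand-in spring discharge (m^3/s) correlated with the head:
q_obs = 2.0 + 1.8 * (head[gauged] - 910.0) + rng.normal(0, 0.1, 1460)

# Fit the level-discharge correlation on the gauged window ...
slope, intercept = np.polyfit(head[gauged] - 910.0, q_obs, 1)
r = np.corrcoef(head[gauged] - 910.0, q_obs)[0, 1]

# ... and use it to reconstruct the full hydrograph.
q_rec = intercept + slope * (head - 910.0)
annual_hm3 = q_rec.mean() * 365 * 86400 / 1e6   # m^3/s -> hm^3/year
print(f"r = {r:.3f}, mean groundwater contribution ~ {annual_hm3:.1f} hm^3/year")
```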
Abstract:
A quasi-cylindrical approximation is used to analyse the axisymmetric swirling flow of a liquid with a hollow air core in the chamber of a pressure swirl atomizer. The liquid is injected into the chamber with an azimuthal velocity component through a number of slots at the periphery of one end of the chamber, and flows out as an annular sheet through a central orifice at the other end, following a conical convergence of the chamber wall. An effective inlet condition is used to model the effects of the slots and the boundary layer that develops at the nearby endwall of the chamber. An analysis is presented of the structure of the liquid sheet at the end of the exit orifice, where the flow becomes critical in the sense that upstream propagation of long-wave perturbations ceases to be possible. This analysis leads to a boundary condition at the end of the orifice that is an extension of the condition of maximum flux used with irrotational models of the flow. As is well known, the radial pressure gradient induced by the swirling flow in the bulk of the chamber causes the overpressure that drives the liquid towards the exit orifice, and also leads to Ekman pumping in the boundary layers of reduced azimuthal velocity at the convergent wall of the chamber and at the wall opposite to the exit orifice. The numerical results confirm the important role played by the boundary layers. They make the thickness of the liquid sheet at the end of the orifice larger than predicted by irrotational models, and at the same time tend to decrease the overpressure required to pass a given flow rate through the chamber, because the large axial velocity in the boundary layers takes care of part of the flow rate. The thickness of the boundary layers increases when the atomizer constant (the inverse of a swirl number, proportional to the flow rate scaled with the radius of the exit orifice and the circulation around the air core) decreases. A minimum value of this parameter is found below which the layer of reduced azimuthal velocity around the air core prevents the pressure from increasing and steadily driving the flow through the exit orifice. The effects of other parameters not accounted for by irrotational models are also analysed in terms of their influence on the boundary layers.
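For reference, the classical irrotational "maximum flux" condition that this boundary condition extends can be evaluated numerically in a few lines. The sketch below uses illustrative parameter values, and the uniform axial velocity across the annulus with free-vortex swirl are the standard simplifications of that irrotational model, not of the paper; it selects the air-core radius that maximizes the flux through the exit orifice:

```python
import numpy as np

# Illustrative (assumed) parameters:
r_o   = 1.0e-3      # exit orifice radius [m]
Gamma = 0.05        # circulation around the air core [m^2/s]
dp    = 5.0e5       # injection overpressure [Pa]
rho   = 1000.0      # liquid density [kg/m^3]

r_c = np.linspace(1e-5, 0.999 * r_o, 2000)     # candidate air-core radii
v_theta = Gamma / (2.0 * np.pi * r_c)          # free-vortex swirl at the core
u_axial = np.sqrt(np.maximum(2.0 * dp / rho - v_theta**2, 0.0))
Q = np.pi * (r_o**2 - r_c**2) * u_axial        # flux through the annulus

i = np.argmax(Q)                               # principle of maximum flux
print(f"air core ~ {r_c[i]/r_o:.3f} r_o, Q ~ {Q[i]*1e6:.2f} mL/s")
```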
Abstract:
The Chonta Mine (75º00’30” W & 13º04’30” S, 4495 to 5000 m a.s.l.), owned by Compañía Minera Caudalosa, operates a polymetallic Zn-Pb-Cu-Ag vein system of the low sulphidation epithermal type, hosted by Cenozoic volcanics of dacitic to andesitic composition (Domos de Lava Formation). Veta Rublo, one of the main veins of the system, is worked underground to nearly 300 m. It strikes 60-80º NE and dips 60-70º SE; its width varies between 0.30 and 2.20 m, and it crops out along 1 km, but is continued along strike by other veins, such as Veta Caudalosa, for some 5 km. Typical metal contents are 7% Zn, 5% Pb, 0.4% Cu and 3 oz/t Ag, with quartz, sericite, sphalerite, galena, pyrite, chalcopyrite and fahlore as the main minerals, and minor carbonate and sulphosalts.
Abstract:
The Esperanza Zn-Pb-Ag vein, owned by Compañía de Minas Buenaventura S.A.A., lies at 4000 to 4650 m a.s.l. in the Western Cordillera of the Peruvian Central Andes. The Esperanza low sulphidation epithermal vein trends ~E-W along 1500 m; it dips to the south and can be followed to 350 m depth. Like other veins of the district, such as Teresita and Bienaventurada, it is hosted by intermediate to felsic volcanics (andesitic to dacitic compositions) of the Huachocolpa Group (Middle Miocene to Upper Pliocene). The mineralisation occurs mostly as open-space filling related to fracture development during the Quechua III deformational event. The main ore minerals are sphalerite, galena, tetrahedrite, pyrite, chalcopyrite and Ag and Pb sulfosalts; quartz, barite and calcite are the main gangue minerals. Current production grades are ~5% Zn, ~8 oz/t Ag, ~3% Pb, with usually very low Cu (mean ~0.04%).
Abstract:
Ontologies and taxonomies are widely used to organize concepts, providing the basis for activities such as indexing and serving as background knowledge for NLP tasks. As such, translation of these resources would prove useful for adapting these systems to new languages. However, we show that the nature of these resources is significantly different from the "free-text" paradigm used to train most statistical machine translation systems. In particular, we see significant differences in the linguistic nature of these resources, and such resources carry rich additional semantics. We demonstrate that, as a result of these linguistic differences, standard SMT methods, in particular evaluation metrics, can produce poor performance. We then turn to the task of leveraging these semantics for translation, which we approach in three ways: by adapting the translation system to the domain of the resource; by examining whether semantics can help to predict the syntactic structure used in translation; and by evaluating whether existing translated taxonomies can be used to disambiguate translations. We present some early results from these experiments, which shed light on the degree of success we may have with each approach.
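The mismatch between free-text evaluation metrics and short taxonomy labels is easy to reproduce. The snippet below is our own illustration, using NLTK's BLEU implementation on invented labels: an adequate synonym translation and a clearly wrong one receive identical n-gram scores.

```python
# Requires: pip install nltk
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1

# A taxonomy label is typically 1-3 tokens; an adequate translation that
# picks a synonym scores no better than a semantically wrong one.
reference  = [["hepatic", "disease"]]
adequate   = ["liver", "disease"]       # correct meaning, different token
inadequate = ["hepatic", "holiday"]     # wrong meaning, one matching token

w = (0.5, 0.5)  # bigram BLEU, sensible for two-token hypotheses
print(sentence_bleu(reference, adequate,   weights=w, smoothing_function=smooth))
print(sentence_bleu(reference, inadequate, weights=w, smoothing_function=smooth))
# Both scores are low and identical: n-gram metrics tuned for free text
# cannot separate adequate from inadequate label translations.
```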
Abstract:
Static analyses of object-oriented programs usually rely on intermediate representations that respect the original semantics while having a more uniform and basic syntax. Most of the work involving object-oriented languages and abstract interpretation usually omits the description of that language or just refers to the Control Flow Graph (CFG) it represents. However, this lack of formalization results, on one hand, in an absence of assurances regarding the correctness of the transformation and, on the other, typically couples the analysis strongly to the source language. In this work we present a framework for the analysis of object-oriented languages in which, in a first phase, we transform the input program into a representation based on Horn clauses. This allows, on one hand, proving the transformation correct by attending to a simple condition and, on the other, applying an existing analyzer for (constraint) logic programming to automatically derive a safe approximation of the semantics of the original program. The approach is flexible in the sense that the first phase decouples the analyzer from most language-dependent features, and correct because the set of Horn clauses returned by the transformation phase safely approximates the standard semantics of the input program. The resulting analysis is also reasonably scalable due to the use of mature, modular (C)LP-based analyzers. The overall approach allows us to report results for medium-sized programs.
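To make the first phase concrete, here is a minimal sketch (our simplification, not the paper's actual transformation) of how a tiny function with two CFG paths can be emitted as Horn clauses, one clause per path, in a form a (C)LP analyzer could consume:

```python
# abs(x):  if x >= 0: r = x  else: r = -x   -- as a guarded-path CFG:
cfg = {
    "abs": [
        # (guard, assignments): one Horn clause per CFG path
        ("X >= 0", [("R", "X")]),
        ("X < 0",  [("R", "-X")]),
    ]
}

def to_horn(cfg):
    """Emit one clause per path; head variables (X, R) are hardcoded
    here for brevity, a full scheme derives them from the signature."""
    clauses = []
    for fname, paths in cfg.items():
        for guard, assigns in paths:
            goals = [guard] + [f"{lhs} = {rhs}" for lhs, rhs in assigns]
            clauses.append(f"{fname}(X, R) :- {', '.join(goals)}.")
    return clauses

for c in to_horn(cfg):
    print(c)
# abs(X, R) :- X >= 0, R = X.
# abs(X, R) :- X < 0, R = -X.
```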
Abstract:
One important task in the design of an antenna is to carry out an analysis to find out the characteristics of the antenna that best fulfill the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters like directivity, gain, impedance, beamwidth, efficiency, polarization, etc. must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers, which are closed areas, normally shielded, covered by electromagnetic absorbing material that simulates free-space propagation conditions. Moreover, these facilities can be employed independently of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms to improve the quality of the results obtained in antenna measurements by using post-processing techniques and without requiring additional measurements. First, a deep review of the state of the art has been made in order to give a general vision of the possibilities to characterize or to reduce the effects of errors in antenna measurements. Later, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. Therefore, the main idea of all the methods is to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis, for which a total of three alternatives are proposed to filter out an important noise contribution before obtaining the far-field pattern. The first one is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near-field, where it is possible to apply a spatial filtering. The last one is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then to also apply a spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical analyses of the noise in order to deduce the signal-to-noise ratio improvement achieved in each case.
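One simple instance of such modal filtering, sketched here for the planar case under our own simplifications (scalar field, synthetic data), zeroes the plane-wave spectrum outside the propagating region kx^2 + ky^2 <= k^2, where the antenna cannot radiate and only noise contributes:

```python
import numpy as np

lam = 0.03                           # wavelength [m], assumed 10 GHz
k = 2 * np.pi / lam
dx = lam / 2                         # Nyquist sampling on the scan plane
N = 128

rng = np.random.default_rng(1)
E = rng.normal(size=(N, N)) * 0.05   # stand-in for measured field + noise
# (in practice E is the complex tangential field sampled on the plane)

kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(kx, kx, indexing="ij")

spectrum = np.fft.fft2(E)            # plane-wave spectrum of the scan
spectrum[KX**2 + KY**2 > k**2] = 0.0 # modal filter: keep visible region only
E_filtered = np.fft.ifft2(spectrum)  # denoised near-field for the NF-FF step
```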
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique, and the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to be able to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm to extrapolate the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm as well as other critical aspects of the method are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method tries to estimate the leakage bias constant added by the receiver’s quadrature detector to every near-field datum and then suppress its effect on the far-field pattern. The second method can be divided into two parts: the first one finds the position of the faulty component that radiates or receives unwanted radiation, making easier its identification within the measurement environment and its later substitution; the second part of this method is able to computationally remove the leakage effect without requiring the substitution of the faulty component. ABSTRACT An important task in the design of an antenna is to carry out an analysis to find out the characteristics of the antenna that best fulfills the specifications set by the application. After this, a prototype of the antenna is manufactured, and the next step in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency, polarization, etc. must also be evaluated. To achieve this purpose, very accurate antenna measurement techniques are needed in order to know exactly the real electromagnetic behaviour of the antenna under test. Because of this, most measurements are performed in anechoic chambers, which are closed, normally shielded areas covered with electromagnetic absorbing material. Moreover, these facilities can be used independently of the weather conditions and allow interference-free measurements. Despite all the advantages of anechoic chambers, the results obtained both in far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms to improve the quality of the results obtained in antenna measurements by means of post-processing techniques. First, a thorough review of the state of the art has been carried out in order to give a general view of the possibilities for characterising or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effect of four of the most common errors in antenna measurements have been described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors.
The four errors analysed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. Therefore, the main idea of all the methods is to modify the classical near-field-to-far-field transformations by including additional steps with which the errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, since they are applied in post-processing, no additional measurements are needed. Noise is the most widely studied error in this Thesis, with a total of three alternatives proposed to filter out an important noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the field over the antenna plane, where a spatial filtering can be applied. The last one is to propagate the measured field to a surface with the same geometry as the measurement surface but closer to the antenna, and then also apply a spatial filtering. All the alternatives have been analysed in the most common near-field systems, including detailed statistical analyses of the noise in order to deduce the improvement in signal-to-noise ratio achieved in each case. The method to suppress reflections in antenna measurements is also based on a source reconstruction technique, and the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to be able to identify and later suppress the virtual sources related to the reflected waves. The truncation error that appears in the results obtained from measurements over a plane, a cylinder or a portion of a sphere is the third error analysed in this Thesis. The method to reduce this error is based on an iterative algorithm to extrapolate the reliable region of the far-field pattern from information on the field distribution over the antenna plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, have also been studied. The last part of this work is dedicated to the detection and suppression of two of the most common leakage sources in antenna measurements. The first method tries to estimate the leakage bias constant added by the receiver's quadrature detector to all the near-field data and then suppress its effect on the far-field pattern. The second method can be divided into two parts: the first one finds the position of the faulty components that radiate or receive unwanted radiation, making their identification within the measurement environment and their later substitution easier; the second part of the method is able to computationally remove the leakage effect without requiring the substitution of the faulty component.
Abstract:
The design and development of spoken interaction systems has been a thoroughly studied research area for the last decades. The aim is to obtain systems with the ability to interact with human agents with a high degree of naturalness and efficiency, allowing them to carry out the actions they desire using speech, as it is the most natural means of communication between humans. To achieve that degree of naturalness, it is not enough to endow systems with the ability to accurately understand the user’s utterances and to react to them properly, even considering the information provided by the user in his or her previous interactions. The system also has to be aware of the evolution of the conditions under which the interaction takes place, in order to act in the most coherent way possible at each moment. Consequently, one of the most important features of the system is that it has to be context-aware. This context awareness can be reflected in the modification of the behaviour of the system taking into account the current situation of the interaction. For instance, the system should decide which action it has to carry out, or the way to perform it, depending on the user that requests it, on the way that the user addresses the system, on the characteristics of the environment in which the interaction takes place, and so on. In other words, the system has to adapt its behaviour to these evolving elements of the interaction. Moreover, that adaptation has to be carried out, if possible, in such a way that the user: i) does not perceive that the system has to make any additional effort, or to devote interaction time to perform tasks other than carrying out the requested actions, and ii) does not have to provide the system with any additional information to carry out the adaptation, which would imply a less efficient interaction, since users would have to devote several interactions only to allow the system to become adapted. In state-of-the-art spoken dialogue systems, researchers have proposed several disparate strategies to adapt the elements of the system to different conditions of the interaction (such as the acoustic characteristics of a specific user’s speech, the actions previously requested, and so on). Nevertheless, to our knowledge there is no consensus on the procedures to carry out this adaptation. The approaches are to an extent unrelated to one another, in the sense that each one considers different pieces of information, and the treatment of that information differs according to the adaptation carried out. In this regard, the main contributions of this Thesis are the following: Definition of a contextualization framework. We propose a unified approach that can cover any strategy to adapt the behaviour of a dialogue system to the conditions of the interaction (i.e. the context). In our theoretical definition of the contextualization framework we consider the system’s context as all the sources of variability present at any time of the interaction, either those related to the environment in which the interaction takes place, or to the human agent that addresses the system at each moment. Our proposal relies on three aspects that any contextualization approach should fulfill: plasticity (i.e. the system has to be able to modify its behaviour in the most proactive way taking into account the conditions under which the interaction takes place), adaptivity (i.e.
the system must also be able to consider the most appropriate sources of information at each moment, both environmental and user- and dialogue-dependent, to adapt effectively to the conditions aforementioned), and transparency (i.e. the system has to carry out the contextualization-related tasks in such a way that the user neither perceives them nor has to make any effort to provide the system with the information it needs to perform that contextualization). Additionally, we could include a generality aspect in our proposed framework: the main features of the framework should be easy to adopt in any dialogue system, regardless of the solution proposed to manage the dialogue. Once we define the theoretical basis of our contextualization framework, we propose two case studies of its application in a spoken dialogue system. We focus on two aspects of the interaction: the contextualization of the speech recognition models, and the incorporation of user-specific information into the dialogue flow. One of the modules of a dialogue system that is most prone to be contextualized is the speech recognition system. This module makes use of several models to emit a recognition hypothesis from the user’s speech signal. Generally speaking, a recognition system considers two types of models: an acoustic one (that models each of the phonemes that the recognition system has to consider) and a linguistic one (that models the sequences of words that make sense for the system). In this work we contextualize the language model of the recognition system in such a way that it takes into account the information provided by the user in both the current utterance and the previous ones. These utterances convey information useful to help the system in the recognition of the next utterance. The contextualization approach that we propose consists of a dynamic adaptation of the language model used by the recognition system. We carry out this adaptation by means of a linear interpolation between several models. Instead of training the best interpolation weights, we make them dependent on the conditions of the dialogue. In our approach, the system itself obtains these weights as a function of the reliability of the different elements of information available, such as the semantic concepts extracted from the user’s utterance, the actions that he or she wants to carry out, the information provided in the previous interactions, and so on. One of the aspects most frequently addressed in Human-Computer Interaction research is the inclusion of user-specific characteristics in the information structures managed by the system. The idea is to take into account the features that make each user different from the others in order to offer each particular user different services (or the same service, but in a different way). We could consider this approach as a user-dependent contextualization of the system. In our work we propose the definition of a user model that contains all the information about each user that could potentially be useful to the system at a given moment of the interaction. In particular we analyze the actions that each user carries out throughout his or her interaction. The objective is to determine which of these actions become the preferences of that user. We represent the specific information of each user as a feature vector. Each of the characteristics that the system takes into account has an associated confidence score.
With these elements, we propose a probabilistic definition of a user preference, as the action whose likelihood of being addressed by the user is greater than that of the rest of the actions. To include the user-dependent information in the dialogue flow, we modify the information structures on which the dialogue manager relies to retrieve information that could be needed to solve the actions addressed by the user. Usage preferences become another source of contextual information that is considered by the system towards a more efficient interaction (since the new information source helps to decrease the need of the system to ask users for additional information, thus reducing the number of turns needed to carry out a specific action). To test the benefits of the contextualization framework that we propose, we carry out an evaluation of the two strategies aforementioned. We gather several performance metrics, both objective and subjective, that allow us to compare the improvements of a contextualized system against the baseline one. We also gather the users’ opinions regarding their perception of the behaviour of the system, and its degree of adaptation to the specific features of each interaction. ABSTRACT The design and development of spoken interaction systems has been the object of deep study over the past decades. The aim is to achieve systems with the ability to interact with human agents with a high degree of efficiency and naturalness. In this way, users can carry out the tasks they wish using speech, which is the most natural means of communication for humans. In order to reach the desired degree of naturalness, it is not enough to endow systems with the ability to understand the users’ utterances and react to them appropriately (even taking into consideration the information provided in previous interactions). Additionally, the system must be aware of the conditions under which the interaction takes place, as well as of their evolution, so that it can act in the most coherent way at every instant of the interaction. Consequently, one of the essential characteristics of the system is that it must be context-aware. This capacity of the system to know and use the context of the interaction can be reflected in the modification of its behaviour due to the current characteristics of the interaction. For example, the system should decide which is the most appropriate action, or the best way to carry it out, depending on the user who requests it, on the way in which they do so, and so on. In other words, the system has to adapt its behaviour to such mutable (or dynamic) elements of the interaction. Two additional characteristics are required of this adaptation: i) the user must not perceive that the system devotes resources (temporal or computational) to tasks other than those requested, and ii) the user must not devote any effort to providing the system with additional information to carry out the interaction. The latter would imply a lower efficiency of the interaction, since users would have to devote part of it to providing the system with information for its adaptation, without any immediate benefit.
In the spoken dialogue systems proposed in the literature, different strategies have been proposed to adapt the elements of the system to the different conditions of the interaction (such as the acoustic characteristics of the speech of a particular user, or the actions previously referred to). However, there is no fixed strategy for this adaptation; rather, the existing ones tend to bear no relation to one another. In this sense, each of them takes into account different sources of information, which are treated differently depending on the characteristics of the adaptation sought. With the above in mind, the main contributions of this Thesis are the following: Definition of a contextualization framework. We propose a unifying criterion able to cover any strategy for adapting the behaviour of a dialogue system to the conditions of the interaction (that is, its context). In our theoretical definition of the contextualization framework we consider the system’s context as all those sources of variability present at any instant of the interaction, whether related to the environment in which the interaction takes place or dependent on the human agent addressing the system at each moment. Our proposal is based on three aspects that any contextualization strategy should fulfil: plasticity (that is, the system must be able to modify its behaviour in the most proactive way possible, taking into account the conditions under which the interaction takes place), adaptivity (that is, the system must be able to consider the appropriate information at each instant, whether environment- or user-dependent, so that it effectively adjusts its behaviour to the aforementioned conditions), and transparency (which implies that the system must carry out the contextualization-related tasks in such a way that the user neither perceives how these tasks are carried out nor has to provide the system with any additional information). Additionally, we include in the proposed framework the aspect of generality: the characteristics of the contextualization framework must be portable to any dialogue system, regardless of the solution adopted in it to manage the dialogue. Once we have defined the high-level characteristics of our contextualization framework, we propose two strategies for applying it to a spoken dialogue system. We focus on two aspects of the interaction to be adapted: the models used in speech recognition, and the incorporation of user-specific information into the dialogue flow. One of the modules of a dialogue system most amenable to contextualization is the speech recognition system. This module makes use of several models to generate a recognition hypothesis from the speech signal. In general, a recognition system employs two types of models: an acoustic one (which models each of the phonemes considered by the recognizer) and a linguistic one (which models the word sequences that make sense from the point of view of the interaction). In this work we contextualize the language model of the speech recognizer so that it takes into account the information provided by the user, both in the current utterance and in previous ones.
These utterances contain information (semantic and/or discursive) that can contribute to a better recognition of the user’s subsequent utterances. The proposed contextualization strategy consists of a dynamic adaptation of the language model used by the speech recognizer. This adaptation is carried out by means of a linear interpolation between different models. Instead of training the best interpolation weights, we propose making them dependent on the current conditions of each dialogue. The system itself will obtain these weights as a function of the availability and relevance of the different sources of information available, such as the semantic concepts extracted from the user’s utterance, or the actions the user wishes to execute. One of the aspects most commonly analysed in Human-Computer Interaction research is the inclusion of the specific characteristics of each user in the information structures used by the system. The objective is to take into account the aspects that differentiate each user, so that the system can offer each of them the most appropriate service (or the same service, but in the way best suited to each user). We can consider this strategy as a user-dependent contextualization. In this work we propose the definition of a user model containing all the information relative to each user that could potentially be used by the system at a given moment of the interaction. In particular, we analyse the actions that each user decides to execute throughout their dialogues with the system. Our objective is to determine which of these actions become the preferences of each user. The information of each user is represented by a feature vector, each of whose features has an associated confidence value. With both elements we propose a probabilistic definition of a usage preference, as the action whose likelihood is greater than that of the rest of the actions requested by the user. In order to include the user-dependent information in the dialogue flow, we modify the information structures on which the dialogue manager relies to retrieve the information needed to resolve certain dialogues. In this modification, the preferences of each user become an additional source of contextual information, which is taken into account by the system in the interests of a more efficient interaction (since the new information source helps to reduce the system’s need to ask the user for additional information, consequently reducing the number of turns needed to carry out a given action). To determine the benefits of the applications of the proposed contextualization framework, we carry out an evaluation of a dialogue system that includes the aforementioned strategies. We have gathered several metrics, both objective and subjective, that allow us to determine the improvements provided by a contextualized system in comparison with the non-contextualized one. We have likewise gathered the opinions of the evaluation participants about their perception of the system’s behaviour and of its capacity to adapt to the specific conditions of each interaction.
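The dynamic interpolation described above can be sketched in a few lines. In the snippet below (an assumed toy API, not the system's implementation), the interpolation weights are derived turn by turn from the reliability of each context source instead of being trained offline:

```python
def interpolation_weights(reliability):
    """reliability: dict source -> score in [0, 1], e.g. confidence of the
    semantic concepts, of the requested action, of the dialogue history."""
    total = sum(reliability.values()) or 1.0
    return {src: score / total for src, score in reliability.items()}

def interpolated_prob(word, history, models, weights):
    """P(w|h) = sum_i lambda_i * P_i(w|h); each model is a callable."""
    return sum(weights[src] * models[src](word, history) for src in models)

# Toy component models (stand-ins for n-gram LMs adapted to each source):
models = {
    "concepts": lambda w, h: 0.30 if w == "madrid" else 0.01,
    "action":   lambda w, h: 0.10,
    "history":  lambda w, h: 0.20 if w == "madrid" else 0.02,
}
weights = interpolation_weights({"concepts": 0.9, "action": 0.2, "history": 0.6})
print(interpolated_prob("madrid", ("flights", "to"), models, weights))
```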
Abstract:
Current development platforms for designing spoken dialog services feature different kinds of strategies to help designers build, test, and deploy their applications. In general, these platforms are made up of several assistants that handle the different design stages (e.g. definition of the dialog flow, prompt and grammar definition, database connection, or debugging and testing the running application). In spite of all the advances in this area, designing spoken dialog services is in general a time-consuming task that needs to be accelerated. In this paper we describe a complete development platform that reduces the design time by using different types of acceleration strategies based on information from the data model structure and database contents, as well as cumulative information obtained throughout the successive steps in the design. Thanks to these accelerations, the interaction with the platform is simplified and the design is reduced, in most cases, to simple confirmations of the “proposals” that the platform automatically provides at each stage. Different kinds of proposals are available to complete the application flow, such as the possibility of selecting which information slots should be requested from the user together, predefined templates for common dialogs, the most probable actions that make up each state defined in the flow, and different solutions to specific speech-modality problems such as the presentation of lists of retrieved results after querying the backend database. The platform also includes accelerations for creating speech grammars and prompts, and the SQL queries for accessing the database at runtime. Finally, we describe the setup and results of simultaneous summative, subjective and objective evaluations with different designers, carried out to test the usability of the proposed accelerations as well as their contribution to reducing the design time and interaction.
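As an example of the kind of acceleration described, the sketch below (hypothetical schema and field names, not the platform's code) proposes a parameterized SQL query for a dialog state directly from the data-model structure, so the designer only has to confirm it:

```python
def propose_query(table, slots, result_fields):
    """Build a parameterized SELECT from the data-model description:
    the slots already defined for the state become WHERE conditions."""
    where = " AND ".join(f"{s} = ?" for s in slots)
    return f"SELECT {', '.join(result_fields)} FROM {table} WHERE {where};"

# e.g. a flight-booking state that collects origin/destination/date:
print(propose_query("flights", ["origin", "destination", "date"],
                    ["flight_id", "departure_time", "price"]))
# SELECT flight_id, departure_time, price FROM flights
#   WHERE origin = ? AND destination = ? AND date = ?;
```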
Abstract:
As part of their development, the predictions of numerical wind flow models must be compared with measurements in order to estimate the uncertainty related to their use. The most rigorous such comparison is, of course, made under blind conditions. The following paper includes a detailed description of three different wind flow models, all based on a Reynolds-averaged Navier-Stokes approach with two-equation k-ε closure, that were tested as part of the Bolund blind comparison (itself based on the Bolund experiment, which measured the wind around a small coastal island). The models are evaluated in terms of predicted normalized wind speed and turbulent kinetic energy at 2 m and 5 m above ground level for a westerly wind direction. Results show that all models predict the mean velocity reasonably well; however, accurate prediction of the turbulent kinetic energy remains a challenge.
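The kind of scoring used in such a blind comparison can be illustrated as follows (placeholder numbers, not the Bolund data; the benchmark's exact normalization conventions may differ):

```python
import numpy as np

u_ref = 9.2                                  # upstream reference speed [m/s]
u_obs   = np.array([7.1, 8.4, 10.2, 9.0])    # mast measurements [m/s]
u_model = np.array([7.4, 8.1, 10.6, 9.3])    # RANS k-eps predictions [m/s]
k_obs   = np.array([1.9, 1.4, 0.8, 1.1])     # TKE at the masts [m^2/s^2]
k_model = np.array([1.2, 1.0, 0.6, 0.8])

# Errors expressed on the normalized quantities u/u_ref and k/u_ref^2:
err_speed = 100 * np.mean(np.abs(u_model - u_obs) / u_ref)
err_tke   = 100 * np.mean(np.abs(k_model - k_obs) / u_ref**2)
print(f"mean speed error {err_speed:.1f}%, TKE error {err_tke:.2f}% of u_ref^2")
```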
Abstract:
This paper presents a new verification procedure for sound source coverage according to ISO 140-5 requirements. The ISO 140-5 standard applies to the measurement of façade insulation and requires a sound source able to achieve a sufficiently uniform sound field, under free-field conditions, on the façade under study. The proposed method involves the electroacoustic characterisation of the sound source under laboratory free-field conditions (anechoic room) and the subsequent prediction, by computer simulation, of the free sound field radiated onto a rectangular surface equal in size to the façade being measured. The loudspeaker is characterised in an anechoic room under controlled laboratory conditions, carefully measuring its directivity, and a computer model is then designed to calculate the acoustic free-field coverage for different loudspeaker positions and façade sizes. For each sound source position, the method provides the maximum direct acoustic level difference on a façade specimen and therefore determines whether the loudspeaker satisfies the maximum allowed level difference of 5 dB (or 10 dB for façade dimensions greater than 5 m) required by the ISO standard. Additionally, the maximum horizontal dimension of the façade meeting the standard is calculated and provided for each sound source position, with both the 5 dB and 10 dB criteria. In the last section of the paper, the proposed procedure is compared with another method used by the authors in the past for the same purpose: in situ outdoor measurements attempting to recreate free-field conditions. From this comparison, it is concluded that the proposed method reproduces the actual measurements with high accuracy. Moreover, the ground reflection effect, which is difficult to avoid in the outdoor measurement method, at least at low frequencies, is fully eliminated with the proposed method, thus achieving the free-field requisite.
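The coverage check itself reduces to evaluating the direct field over a façade grid and comparing the level spread against the criterion. The sketch below uses an omnidirectional source for brevity (the actual method applies the measured loudspeaker directivity) and assumed geometry:

```python
import numpy as np

W, H = 4.0, 3.0                      # facade size [m] (assumed)
y, z = np.meshgrid(np.linspace(-W/2, W/2, 81), np.linspace(0, H, 61))

src = np.array([-5.0, 0.0, 1.5])     # loudspeaker position [m] (assumed)
# Distance from source to each point of the facade plane x = 0:
r = np.sqrt(src[0]**2 + (y - src[1])**2 + (z - src[2])**2)

level = -20 * np.log10(r)            # direct free-field level, 1/r spreading
delta = level.max() - level.min()    # level difference over the facade
limit = 5.0 if W <= 5.0 else 10.0    # ISO 140-5 uniformity criterion
print(f"max level difference {delta:.1f} dB -> "
      f"{'PASS' if delta <= limit else 'FAIL'}")
```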
Abstract:
An ED-tether mission to Jupiter is presented. A bare tether carrying cathodic devices at both ends but no power supply, and using no propellant, could move 'freely' among Jupiter's four great moons. The tour scheme would have the current naturally driven throughout by the motional electric field, the Lorentz force switching direction with the current around a 'drag' radius of 160,000 km, where the speed of the Jovian ionosphere equals the speed of a spacecraft in circular orbit. With plasma density and magnetic field decreasing rapidly with distance from Jupiter, drag/thrust would only be operated in the inner plasmasphere, the current being conveniently near shut off in orbit by disconnecting the cathodes or plugging in a very large resistance; the tether could serve as its own power supply by plugging in an electric load where convenient, with just some reduction in thrust or drag. The periapsis of the spacecraft in a heliocentric transfer orbit from Earth would lie inside the drag sphere; with the tether deployed and the current on around periapsis, magnetic drag allows Jupiter to capture the spacecraft into an elliptic orbit of high eccentricity. The current would be on at successive perijove passes and off elsewhere, reducing the eccentricity by lowering the apoapsis progressively to allow visits of the giant moons. In a second phase, the current is on around apoapsis outside the drag sphere, raising the periapsis until the full orbit lies outside that sphere. In a third phase, the current is on at periapsis, increasing the eccentricity until a last push makes the orbit hyperbolic to escape Jupiter. Dynamical issues such as the low gravity gradient at Jupiter and tether orientation in elliptic orbits of high eccentricity are discussed.
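The quoted drag radius follows from equating the co-rotation speed of the Jovian plasmasphere with the local circular orbital speed, which is easy to verify numerically:

```python
import numpy as np

# omega_J * r = sqrt(mu_J / r)  =>  r = (mu_J / omega_J**2)**(1/3)
mu_J    = 1.26687e17                  # Jupiter GM [m^3/s^2]
omega_J = 2 * np.pi / (9.925 * 3600)  # Jupiter's rotation rate [rad/s]

r_drag = (mu_J / omega_J**2) ** (1.0 / 3.0)
print(f"drag radius ~ {r_drag/1e3:,.0f} km")   # ~160,000 km (~2.24 R_J)
# Inside this radius the plasma lags a circular-orbit spacecraft, so the
# Lorentz force on the tether current brakes the orbit (drag); outside,
# it boosts it (thrust) -- the switch in force direction noted above.
```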
Abstract:
The efficiency of a power plant is affected by the distribution of the pulverized coal within the furnace. The coal, which is pulverized in the mills, is transported and distributed by the primary gas through the mill-ducts to the interior of the furnace. This serves a double function: to dry the coal and to feed it in at different levels, optimizing the combustion in the sense that complete combustion occurs with homogeneous heat fluxes to the walls. The mill-duct systems of a real power plant are very complex and not yet well understood. In particular, experimental data concerning the mass flows of coal to the different levels are very difficult to measure. CFD modeling can help to determine them. An Eulerian/Lagrangian approach is used due to the low solid-gas volume ratio.
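The Eulerian/Lagrangian approach can be sketched as one-way-coupled particle tracking in a prescribed gas field. The snippet below (toy duct velocity field and illustrative coal-particle parameters, not the paper's setup) integrates a particle with Stokes drag, the regime justified by the low solid-gas volume ratio:

```python
import numpy as np

rho_p, d_p, mu = 1300.0, 50e-6, 1.8e-5   # coal density, diameter, gas viscosity
tau_p = rho_p * d_p**2 / (18 * mu)       # particle response time [s]

def u_gas(x):
    """Toy Eulerian primary-gas velocity field in the mill-duct [m/s]."""
    return np.array([20.0, 2.0 * np.sin(0.5 * x[0])])

x = np.zeros(2)                 # particle position [m]
v = np.array([5.0, 0.0])        # injected slower than the gas
dt = 1e-4                       # time step, well below tau_p (~10 ms)
for _ in range(20000):          # 2 s of flight
    v += (u_gas(x) - v) / tau_p * dt     # Stokes drag acceleration
    x += v * dt
print(f"tau_p = {tau_p*1e3:.1f} ms, final position = {x.round(2)} m")
```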
Abstract:
An important issue related to future nuclear fusion reactors fueled with deuterium and tritium is the production of large amounts of dust by several mechanisms (disruptions, ELMs and VDEs). The dust size expected in nuclear fusion experiments (such as ITER) is on the order of microns (between 0.1 and 1000 μm). Almost all of this dust remains in the vacuum vessel (VV). This radiological dust can be re-suspended in case of a LOVA (loss of vacuum accident), and these phenomena can cause explosions and serious damage to the health of the operators and to the integrity of the device. The authors have developed a facility, STARDUST, in order to reproduce thermofluid-dynamic conditions comparable to those expected inside the VV of the next generation of experiments, such as ITER, in case of a LOVA. The dust used inside the STARDUST facility has particle sizes and physical characteristics comparable to those of the dust created inside the VV of nuclear fusion experiments. In this facility, an experimental campaign has been conducted with the purpose of tracking the dust re-suspended at low pressurization rates (comparable to those expected in case of LOVA in ITER and suggested by the General Safety and Security Report, ITER-GSSR) using a fast camera with a frame rate from 1,000 to 10,000 images per second. The velocity fields of the mobilized dust are derived from imaging of a two-dimensional slice of the flow illuminated by an optically adapted laser beam. The aim of this work is to demonstrate the possibility of dust tracking by means of image processing, with the objective of determining the velocity field values of the dust re-suspended during a LOVA.
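The velocity-field extraction is essentially a PIV-style cross-correlation of interrogation windows between consecutive frames. A minimal sketch follows (synthetic frames, assumed pixel calibration, and the 10,000 fps frame rate quoted above):

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Pixel shift of win_b relative to win_a via FFT cross-correlation."""
    A = np.fft.fft2(win_a - win_a.mean())
    B = np.fft.fft2(win_b - win_b.mean())
    corr = np.fft.ifft2(np.conj(A) * B).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map FFT indices to signed shifts:
    return np.array([p if p <= s // 2 else p - s
                     for p, s in zip(peak, corr.shape)])

# Synthetic test: a dust pattern shifted by (3, -2) pixels between frames.
rng = np.random.default_rng(2)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (3, -2), axis=(0, 1))

dpix  = window_displacement(frame_a, frame_b)
dt    = 1.0 / 10000.0              # frame interval at 10,000 fps
scale = 0.1e-3                     # m per pixel (assumed calibration)
print("dust velocity [m/s]:", dpix * scale / dt)
```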