22 results for Error Correction Models
at Universidad Politécnica de Madrid
Abstract:
This paper presents an alternative Forward Error Correction scheme based on Reed-Solomon codes, aimed at protecting the transmission of RTP multimedia streams: the inter-packet symbol approach. The scheme relies on an alternative bit structure that allocates each symbol of the Reed-Solomon code across several RTP media packets. This characteristic makes it possible to better exploit the recovery capability of Reed-Solomon codes against bursty packet losses. The performance of our approach has been studied in terms of encoding/decoding time versus recovery capability, and compared with other schemes proposed in the literature. The theoretical analysis shows that our approach allows the use of smaller Galois fields than other solutions, which in turn reduces the required encoding/decoding time while keeping a comparable recovery capability. Finally, experiments have been carried out to assess the performance of our approach against other schemes in a simulated environment that includes models of wireless and wireline channels.
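The complexity argument above (a smaller Galois field means cheaper encoding and decoding for a comparable recovery capability) can be illustrated with a rough sketch. The cost proxy below, codeword length times parity symbols times symbol width, is an illustrative assumption and not the paper's analysis.

```python
def rs_cost_estimate(m, rate=0.8):
    """Rough relative cost of an RS code over GF(2^m): codeword length n,
    parity symbols, and an n*(n-k)*m proxy for encoding/decoding work
    (wider symbols assumed to cost proportionally more per operation)."""
    n = 2 ** m - 1              # maximum codeword length in symbols
    k = int(rate * n)           # information symbols at the given code rate
    parity = n - k              # erasure-correction capability in symbols
    return {"m": m, "n": n, "parity_symbols": parity,
            "relative_cost": n * parity * m}

if __name__ == "__main__":
    for m in (4, 6, 8):         # e.g. GF(16), GF(64), GF(256)
        print(rs_cost_estimate(m))
```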
Abstract:
The world is in a state of rapid transition. Ongoing globalization, population growth, rising living standards and increasing urbanization, accompanied by changing dietary patterns throughout the world, are increasing the demand for food. Together with more open trade regimes, this has triggered growing international agricultural trade during the last decade. For many Latin American countries, which are endowed with relatively abundant natural resources, these trends have fueled rapid export growth of primary goods. In just 30 years, the Latin American agricultural market share has almost doubled, from 10% in 1980 to 18% in 2010. These market developments have given rise to a debate around a number of crucial issues related to the role of agricultural trade for global food security, the environment, and poverty reduction in developing countries. This thesis uses an integrated framework to analyze a broad array of possible impacts related to transforming agricultural and rural markets in light of globalization, and in particular of increasing trade activity.
Specifically, the following issues are approached: First, global food production will have to rise substantially by the year 2050 to meet the effective demand of a world population of nine billion people, which poses major challenges to food production systems. Doing so without compromising environmental integrity in exporting regions is an even greater challenge. In this context, the thesis explores the effects of future global trade liberalization on food security indicators in different world regions and on a variety of environmental indicators at different scales in Latin America and the Caribbean, in due consideration of different future agricultural production practices. The International Model for Policy Analysis of Agricultural Commodities and Trade (IMPACT), a global dynamic partial equilibrium model of the agricultural sector developed by the International Food Policy Research Institute (IFPRI), is applied to run different future production scenarios and agricultural trade regimes out to 2050. Model results are linked to biophysical models, used to assess changes in water footprints and water quality, as well as impacts on biodiversity and carbon stocks from land use change by 2050. Results indicate that further trade liberalization is crucial for improving food security globally, but that it would also lead to more environmental pressures in some regions across Latin America. Contrasting land expansion versus more intensified agriculture shows that productivity improvements are generally superior to agricultural land expansion from an economic and environmental point of view. Most promising for achieving food security and environmental goals in equal measure is the sustainable intensification scenario. However, the analysis shows that there are trade-offs between environmental and food security goals for all agricultural development paths. Second, in light of the recent food price crisis of 2007/08, the thesis looks at the impacts of increasing agricultural market integration on food price transmission from global to domestic markets in six Latin American countries, namely Argentina, Brazil, Chile, Colombia, Mexico and Peru. To identify possible cointegrating relationships between the domestic food consumer price indices and world food price levels, subject to different degrees of agricultural market integration in the six Latin American countries, a single-equation error correction model is used. Results suggest that global agricultural market integration has led to different levels of price pass-through in the studied countries. Especially in the short run, transmission rates depend on the degree of trade openness, while in the long run transmission rates are high, but largely independent of the country-specific trade regime. Hence, under world price shocks, more trade openness brings with it more short-term price instability, which then persists in the long term. However, these findings do not necessarily verify the usefulness of trade policies, often applied by governments to buffer such price shocks. First, because there is a considerable risk of price volatility due to domestic supply shocks if self-sufficiency is promoted. Second, protectionism bears the risk of excluding a country from participating in beneficial high-value agricultural supply chains, thereby hampering economic development.
Nevertheless, to reduce households' vulnerability to sudden and large increases in food prices, effective policies to buffer food price shocks should be put in place, but they must be carefully planned, with the required budget readily available. Third, globalization affects the structure of an economy and, by different means, the distribution of income in a country. Peru serves as an example to look more closely at questions related to changes in the income distribution in rural areas. Peru, a country increasingly integrated into global food markets, experienced large drops in extreme rural poverty, but persistently high rates of moderate rural poverty and rural income inequality between 2004 and 2012. The thesis aims at disentangling the driving forces behind these dynamics by using a microsimulation model based on rural household income generation models. Results provide evidence that the main force behind poverty reduction was the overall growth of the economy, due to generally favorable macroeconomic market conditions. These growth effects benefited almost all rural sectors, and led to declines in extreme rural poverty, especially among potato and maize farmers. In part, these farmers probably benefited from policy changes towards more open trade regimes and the resulting higher producer prices in times of elevated global food price levels. However, the results also suggest that entry barriers prevented the poorer part of the population from participating in well-paid wage employment outside of agriculture or in high-value crop production. This could be explained by a lack of sufficient access to important rural assets. For example, poor people's educational attainment was hardly better in 2012 than in 2004. Also, land and labor endowments, especially of (poor) maize and potato growers, decreased rather than increased over time. This leads to the conclusion that there is still scope for policy action to facilitate access to these assets, which could contribute to the eradication of rural poverty. The thesis concludes that agricultural trade can be one important means to provide a growing and richer world population with a sufficient amount of calories. To avoid adverse environmental effects and negative impacts for poor food consumers and producers, the focus should lie on agricultural productivity improvements that respect environmental limits and are socially inclusive. In this sense, it will be crucial to further develop technological solutions that guarantee resource-sparing agricultural production practices, and to remove entry barriers that keep small poor farmers out of export markets, which might allow for technological spill-over effects from high-value global agricultural supply chains.
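The single-equation error correction model mentioned in the abstract can be sketched as follows; the two-step Engle-Granger estimation, the synthetic price series and all parameter values are illustrative assumptions, not the thesis data or its exact estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 300
world = np.cumsum(rng.normal(size=T))                  # log world food price index (random walk)
dom = 0.8 * world + rng.normal(scale=0.5, size=T)      # domestic food CPI, cointegrated with world

def ols(y, X):
    """Ordinary least squares with an intercept added automatically."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Step 1: long-run (cointegrating) relation  dom_t = a + theta * world_t + u_t
a, theta = ols(dom, world)
ect = dom - a - theta * world                          # error-correction term u_t

# Step 2: short-run dynamics
# d(dom)_t = const + beta * d(world)_t + gamma * ect_{t-1} + e_t
d_dom, d_world = np.diff(dom), np.diff(world)
const, beta, gamma = ols(d_dom, np.column_stack([d_world, ect[:-1]]))

print(f"long-run pass-through theta = {theta:.2f}")
print(f"short-run pass-through beta = {beta:.2f}")
print(f"speed of adjustment gamma   = {gamma:.2f}  (expected to be negative)")
```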
Abstract:
A video-aware unequal loss protection (ULP) system for protecting RTP video streaming over bursty packet loss networks is proposed. Considering only the relevance of each frame, the state of the channel, and the bitrate constraints of the protection bitstream, our algorithm selects in real time the most suitable frames to be protected through forward error correction (FEC) techniques. It benefits from a careful RTP encapsulation that allows working at the frame level without requiring any processing beyond parsing RTP headers, so it is well suited for inclusion in commercial transmitters. The simulation results show how our proposed ULP technique outperforms non-smart schemes.
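A minimal sketch of the kind of real-time decision described above, assuming a greedy benefit-per-byte ranking: the per-frame relevance weights, the loss probability and the parity cost model are illustrative assumptions, not the paper's algorithm.

```python
def select_frames_to_protect(frames, loss_prob, fec_budget_bytes):
    """frames: list of dicts with 'relevance' (e.g. I > P > B) and 'size' in bytes.
    Returns indices of frames chosen for FEC protection."""
    # Expected benefit of protecting a frame ~ relevance * probability of loss;
    # cost ~ parity bytes, here assumed proportional to frame size.
    ranked = sorted(range(len(frames)),
                    key=lambda i: frames[i]["relevance"] * loss_prob / frames[i]["size"],
                    reverse=True)
    chosen, spent = [], 0
    for i in ranked:
        parity = frames[i]["size"]          # assume one parity packet per protected frame
        if spent + parity <= fec_budget_bytes:
            chosen.append(i)
            spent += parity
    return sorted(chosen)

gop = [{"relevance": 10, "size": 12000},    # I frame
       {"relevance": 4,  "size": 6000},     # P frame
       {"relevance": 1,  "size": 2500}]     # B frame
print(select_frames_to_protect(gop, loss_prob=0.05, fec_budget_bytes=15000))
```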
Abstract:
This document presents a study of the operation and applications of PEM (proton exchange membrane) fuel cells fed with pure hydrogen and with oxygen obtained from compressed air. After evaluating the process of these cells and the variables involved in it, such as pressure, humidity and temperature, a variety of methods is presented for instrumenting these variables, as well as methods and systems to stabilize and control them around the optimal values for greater efficiency of the process. Taking temperature as the primary process variable to be controlled, with its correct values lying around 80 degrees Celsius, a model of the heating process and of the temperature evolution as a function of the resistive heater power is derived in the complex frequency domain, and a measuring system based on type-K thermocouple sensors, which have an almost linear response, is implemented. The differential signal measured by the sensors is amplified with INA2126 instrumentation amplifiers, and a cold-junction error correction algorithm is developed (this error is produced by the additional metals of the connector entering the thermocouple effect). Test data for the temperature measurement system are included, together with the deviations or errors with respect to the ideal measurement values. For data acquisition and implementation of the control algorithms, a PC running National Instruments LabVIEW is used, which allows intuitive, versatile and visual programming and simple graphical user interfaces. The connection between the instrumentation and control hardware of the cell and the PC is made through an NI 6800 USB data acquisition interface with a large number of analog inputs and outputs. Once the samples of the measured signal have been digitized and the cold-junction error noted above corrected, a PID (proportional-integral-derivative) controller is implemented in the software; this is one of the most suitable methods because of its simplicity of programming and its effectiveness for controlling this type of variable. To evaluate the behavior of the system, simulations with Matlab and Simulink are presented, which determine the best strategies to develop the PID control as well as the possible outcomes of the process. As for the fluid heating system, a resistive heater element is used, whose power is controlled by an electronic circuit composed of a zero-crossing detector of the AC supply wave and a system formed by a TRIAC and its drive circuit. In the same way as for temperature, an instrumentation system is described for the gas pressure in the circuit, a variable that ranges around 3 atmospheres; a pressure sensor with a 4-20 mA current-loop output is used, together with a simple current-to-voltage converter to adapt the input to the data acquisition system. The scheme and components needed for the routing, heating and humidification of the gases used in the process are then presented, as well as the placement of the sensors and actuators. Finally, the document lists the algorithms developed and includes an appendix with information about the LabVIEW software.
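The two signal-processing pieces described above, cold-junction correction of the type-K thermocouple reading and a discrete PID controller for the heater, can be sketched as follows under simplifying assumptions (a linear thermocouple response of about 41 uV per degree Celsius and illustrative, untuned gains).

```python
K_SENSITIVITY_V_PER_C = 41e-6     # approximate type-K Seebeck coefficient (assumed linear)

def thermocouple_to_celsius(v_measured, t_cold_junction):
    """Cold-junction correction: the thermocouple only senses the hot/cold
    temperature difference, so the connector (cold junction) temperature,
    measured separately, is added back."""
    return v_measured / K_SENSITIVITY_V_PER_C + t_cold_junction

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, min(1.0, u))   # heater duty cycle clamped to [0, 1]

pid = PID(kp=0.05, ki=0.002, kd=0.01, dt=1.0)
temp = thermocouple_to_celsius(v_measured=2.3e-3, t_cold_junction=25.0)
print(f"stack temperature ~ {temp:.1f} C, heater duty = {pid.update(80.0, temp):.2f}")
```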
Abstract:
We present an adaptive unequal error protection (UEP) strategy built on the 1-D interleaved parity Application Layer Forward Error Correction (AL-FEC) code for protecting the transmission of stereoscopic 3D video content encoded with Multiview Video Coding (MVC) over IP-based networks. Our scheme targets the minimization of the quality degradation produced by packet losses during video transmission in time-sensitive application scenarios. To that end, based on a novel packet-level distortion model, it selects in real time the most suitable packets to protect within each Group of Pictures (GOP) and the most suitable FEC parameters, i.e., the size of the FEC generator matrix. To make these decisions, it considers the relevance of each packet, the behavior of the channel, and the bitrate available for protection purposes. Simulation results validate both the distortion model introduced to estimate the importance of packets and the optimization of the FEC parameter values.
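A minimal sketch of the 1-D interleaved (column) parity AL-FEC idea the scheme builds on: packets are arranged in an L x D matrix and one XOR parity packet is generated per column, so a single lost packet per column can be rebuilt. The toy payloads and the choice of (L, D), which plays the role of the FEC generator matrix size tuned per GOP, are illustrative assumptions.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_column_parity(packets, L, D):
    """packets: list of L*D equally sized payloads, in row-major order.
    Returns one XOR parity packet per column."""
    parity = []
    for col in range(L):
        p = packets[col]
        for row in range(1, D):
            p = xor_bytes(p, packets[row * L + col])
        parity.append(p)
    return parity

def recover(packets, parity, lost_index, L):
    """Rebuild a single lost packet from the other packets of its column."""
    col = lost_index % L
    p = parity[col]
    for i in range(col, len(packets), L):
        if i != lost_index:
            p = xor_bytes(p, packets[i])
    return p

media = [bytes([i] * 8) for i in range(12)]   # 12 toy RTP payloads (L=4, D=3)
fec = make_column_parity(media, L=4, D=3)
assert recover(media, fec, lost_index=6, L=4) == media[6]
print("packet 6 recovered from its column parity")
```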
Abstract:
An important step to assess water availability is to have monthly time series representative of the current situation. In this context, a simple methodology is presented for application in large-scale studies in regions where a properly calibrated hydrologic model is not available, using the output variables simulated by regional climate models (RCMs) of the European project PRUDENCE under current climate conditions (period 1961–1990). The methodology compares different interpolation methods and alternatives to generate annual time series that minimise the bias with respect to observed values. The objective is to identify the best alternative to obtain bias-corrected, monthly runoff time series from the output of RCM simulations. This study uses information from 338 basins in Spain that cover the entire mainland territory and whose observed values of natural runoff have been estimated by the distributed hydrological model SIMPA. Four interpolation methods for downscaling runoff to the basin scale from 10 RCMs are compared, with emphasis on the ability of each method to reproduce the observed behaviour of this variable. The alternatives consider the use of the direct runoff of the RCMs and the mean annual runoff calculated using five functional forms of the aridity index, defined as the ratio between potential evapotranspiration and precipitation. In addition, the comparison with respect to the global runoff reference of the UNH/GRDC dataset is evaluated, as a contrast of the “best estimator” of current runoff on a large scale. Results show that the bias is minimised using the direct original interpolation method, and that the best alternative for bias correction of the monthly direct runoff time series of the RCMs is the UNH/GRDC dataset, although the formula proposed by Schreiber (1904) also gives good results.
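As a worked illustration of two ingredients mentioned above, the sketch below applies Schreiber's (1904) formula, which estimates mean annual runoff from precipitation and the aridity index, and a simple multiplicative bias correction of a monthly RCM runoff series against a reference mean annual value; all numbers are illustrative, not results from the study.

```python
import numpy as np

def schreiber_runoff(p_annual, pet_annual):
    """Mean annual runoff R = P * exp(-PET/P) (Schreiber, 1904),
    with PET/P the aridity index."""
    aridity = pet_annual / p_annual
    return p_annual * np.exp(-aridity)

def bias_correct_monthly(simulated, reference_mean_annual):
    """Scale an RCM monthly runoff series so its mean annual total matches
    a reference 'best estimate' (e.g. an observed or UNH/GRDC value)."""
    mean_annual_sim = simulated.sum() / (len(simulated) / 12)
    return simulated * (reference_mean_annual / mean_annual_sim)

print(f"Schreiber runoff: {schreiber_runoff(650.0, 900.0):.0f} mm/yr")   # e.g. P=650, PET=900 mm
rcm_monthly = np.full(360, 20.0)                        # 30 years of monthly runoff, mm
corrected = bias_correct_monthly(rcm_monthly, reference_mean_annual=200.0)
print(f"corrected mean annual runoff: {corrected.reshape(30, 12).sum(axis=1).mean():.0f} mm")
```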
Abstract:
This paper proposes the use of Factored Translation Models (FTMs) to improve a speech-to-sign-language translation system. These FTMs allow syntactic-semantic information to be incorporated during the translation process, which significantly reduces the translation error rate. This paper also analyses different alternatives for dealing with non-relevant words. The speech-to-sign-language translation system has been developed and evaluated in a specific application domain: the renewal of Identity Documents and Driver's Licenses. The translation system uses a phrase-based translation system (Moses). The evaluation results show that the BLEU (BiLingual Evaluation Understudy) score has improved from 69.1% to 73.9% and the mSER (multiple references Sign Error Rate) has been reduced from 30.6% to 24.8%.
Abstract:
In this work we propose a method to accelerate time-dependent numerical solvers of systems of PDEs that require a high computational cost in time and memory. The method is based on the combined use of such a numerical solver with a proper orthogonal decomposition (POD), from which we identify modes, a Galerkin projection (which provides a reduced system of equations), and the integration of the reduced system, studying the evolution of the modal amplitudes. We integrate the reduced model until our a priori error estimator indicates that the approximation is no longer accurate. At that point we use the original numerical code again over a short time interval to adapt the POD manifold, and then continue with the integration of the reduced model. The method is applied to two model problems: the Ginzburg-Landau equation in transient chaos conditions and the two-dimensional pulsating cavity problem, which describes the motion of liquid in a box whose upper wall moves back and forth in a quasi-periodic fashion. Finally, we discuss a way of improving the performance of the method using experimental data or information from numerical simulations.
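A minimal numpy sketch of the POD step described above: snapshots from the full solver form a matrix, its SVD gives the POD modes, and the state is then represented by a few modal amplitudes whose reconstruction error can be monitored. The Galerkin projection of the governing equations and the actual a priori estimator are problem-specific and only hinted at here; the snapshot data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_snap = 400, 60
# Synthetic snapshot matrix dominated by a few coherent structures
x = np.linspace(0, 1, n_dof)[:, None]
t = np.linspace(0, 1, n_snap)[None, :]
snapshots = np.sin(2 * np.pi * x) @ np.cos(4 * np.pi * t) \
          + 0.1 * np.sin(6 * np.pi * x) @ np.sin(2 * np.pi * t)
snapshots += 1e-3 * rng.normal(size=snapshots.shape)

# POD modes = left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1          # keep 99.9% of the energy
modes = U[:, :r]
print(f"retained {r} POD modes")

# Modal amplitudes of one snapshot, and the reconstruction error monitored
# before deciding to return to the full solver
q = modes.T @ snapshots[:, -1]                        # reduced state (amplitudes)
reconstruction = modes @ q
err = np.linalg.norm(reconstruction - snapshots[:, -1]) / np.linalg.norm(snapshots[:, -1])
print(f"relative reconstruction error: {err:.2e}")
```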
Abstract:
Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed in the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy depend strongly on the fluid flow model equation and the spatial dimension of the problem considered. While simple models based on perfect flows, such as panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real (transonic, viscous) flows. On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions, but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be found for engineering applications. A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases simply guessing which regions of the computational domain affect the accuracy the most is impossible. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than with their structured counterparts. However, the adaptive methods currently in use still leave an open question: how to efficiently drive the adaptation? Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been devoted to developing numerical error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error that arises when discretising a partial differential equation; it consists of the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. The work is organized as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented.
Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predict them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as to a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted in the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
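The τ-estimation idea behind the sensor can be illustrated on a 1-D model problem: the truncation error of a coarse discretisation is estimated by inserting a more accurate solution (here the exact one; in practice a restricted fine-grid solution) into the coarse discrete operator. The Poisson problem below stands in for the flow equations and is not the thesis's finite-volume formulation.

```python
import numpy as np

def coarse_operator_residual(n):
    """Apply the standard 3-point Laplacian to the exact solution of
    u'' = f with u = sin(pi x) and subtract the source term; the result
    approximates the local truncation error on a grid with n cells."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    u = np.sin(np.pi * x)                      # stand-in for the restricted fine solution
    f = -np.pi**2 * np.sin(np.pi * x)          # exact right-hand side
    lap = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
    return np.abs(lap - f[1:-1])               # local truncation error estimate

for n in (20, 40, 80):
    tau = coarse_operator_residual(n)
    # Cells with the largest |tau| would be flagged for refinement;
    # the maximum should drop by roughly 4x per uniform refinement.
    print(f"n = {n:3d}  max |tau| = {tau.max():.3e}")
```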
Abstract:
In recent decades, there has been an increasing interest in systems comprised of several autonomous mobile robots, and as a result, there has been a substantial amount of development in the field of Artificial Intelligence, especially in Robotics. There are several studies in the literature by researchers from the scientific community that focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, to accomplish with a single robot. In the context of MRS, one of the main challenges is the need to control, coordinate and synchronize the operation of multiple robots to perform a specific task. This requires the development of new strategies and methods which allow us to obtain the desired system behavior in a formal and concise way. This PhD thesis aims to study the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest in these systems is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and in an individual manner, select a particular task so that all tasks are optimally distributed. In general, to perform the multi-task distribution among a team of robots, they have to synchronize their actions and exchange information. Under this approach we can speak of multi-task selection instead of multi-task assignment, which means that the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds. This means that each robot performs this estimate locally, depending on the load or the number of pending tasks to be performed. In addition, it is very interesting to evaluate the results of each approach, comparing the results obtained when noise is introduced in the number of pending loads, with the purpose of simulating the robot's error in estimating the real number of pending tasks. The main contribution of this thesis can be found in the approach based on self-organization and the division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches and the generation of dynamic tasks have been presented and discussed. The particular issues studied are the following. Threshold models: this part presents the experiments conducted to test the response threshold model, with the objective of analyzing the system performance index for the problem of the distribution of heterogeneous multi-tasks in multi-robot systems; additive noise has also been introduced in the number of pending loads, and dynamic tasks have been generated over time. Learning automata methods: this part describes the experiments to test the learning automata-based probabilistic algorithms. The approach was tested to evaluate the system performance index with additive noise and with dynamic task generation for the same problem of the distribution of heterogeneous multi-tasks in multi-robot systems.
Ant colony optimization: this part presents experiments to test the ant colony optimization-based deterministic algorithms for achieving the distribution of heterogeneous multi-tasks in multi-robot systems. In the experiments performed, the system performance index is evaluated by introducing additive noise and dynamic task generation over time.
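A hedged sketch of the response-threshold mechanism evaluated in these experiments, in which a robot engages in a task with probability s^2 / (s^2 + theta^2), the stimulus s growing with the pending load and the thresholds theta adapting with experience; the parameter values and the update rule below are illustrative assumptions.

```python
import random

N_ROBOTS, N_TASKS = 6, 3
pending = [12.0, 5.0, 2.0]                            # pending load per task type
theta = [[5.0] * N_TASKS for _ in range(N_ROBOTS)]    # response thresholds

def choose_task(i):
    """Robot i considers tasks by descending stimulus and engages
    stochastically according to the response-threshold rule."""
    for j in sorted(range(N_TASKS), key=lambda j: -pending[j]):
        s = pending[j]
        p_engage = s * s / (s * s + theta[i][j] ** 2)
        if random.random() < p_engage:
            return j
    return None                                       # stay idle this step

random.seed(0)
for step in range(20):
    for i in range(N_ROBOTS):
        j = choose_task(i)
        if j is not None and pending[j] > 0:
            pending[j] = max(0.0, pending[j] - 1.0)   # one unit of work done
            theta[i][j] = max(0.5, theta[i][j] - 0.5) # specialise: lower threshold
            for k in range(N_TASKS):                  # other thresholds slowly rise (forgetting)
                if k != j:
                    theta[i][k] = min(20.0, theta[i][k] + 0.1)
print("remaining load per task:", pending)
```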
Abstract:
We present two approaches to cluster dialogue-based information obtained from the speech understanding module and the dialogue manager of a spoken dialogue system. The purpose is to estimate a language model for each cluster and to use these models to dynamically modify the model of the speech recognizer at each dialogue turn. In the first approach we build the cluster tree using local decisions based on a Maximum Normalized Mutual Information criterion. In the second one we take global decisions, based on the optimization of the global perplexity of the combination of the cluster-related LMs. Our experiments show a relative reduction of the word error rate of 15.17%, which helps to improve the performance of the understanding and dialogue manager modules.
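In the spirit of the second (perplexity-driven) approach, the sketch below estimates a simple unigram language model per dialogue cluster and scores a new turn by perplexity under each, which is the kind of signal that could drive the dynamic choice of the recognizer's LM. Real systems use smoothed n-gram LMs; the toy corpora here are assumptions.

```python
import math
from collections import Counter

def unigram_lm(sentences, vocab):
    """Laplace-smoothed unigram probabilities over a fixed vocabulary."""
    counts = Counter(w for s in sentences for w in s.split())
    total = sum(counts.values())
    return {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

def perplexity(sentence, lm):
    words = sentence.split()
    logp = sum(math.log(lm[w]) for w in words)
    return math.exp(-logp / len(words))

cluster_a = ["i want to renew my identity document", "renew my document please"]
cluster_b = ["what time do you open", "when do you close today"]
vocab = set(w for s in cluster_a + cluster_b for w in s.split())
lm_a, lm_b = unigram_lm(cluster_a, vocab), unigram_lm(cluster_b, vocab)

turn = "i want to renew my document"
print("PPL under cluster A:", round(perplexity(turn, lm_a), 1))
print("PPL under cluster B:", round(perplexity(turn, lm_b), 1))
```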
Abstract:
ATM, SDH or satellite links were used in the last century as the contribution networks of broadcasters. However, the attractive price of IP networks has been changing this infrastructure over the last decade. Nowadays, IP networks are widely used, but their characteristics do not offer the level of performance required to carry high-quality video under certain circumstances. Data transmission is always subject to errors on the line. In the case of streaming, correction is attempted at the destination, while in file transfer, retransmissions of information are conducted and a reliable copy of the file is obtained. In the latter case, reception time is penalized because of the low priority this type of traffic usually has on the networks. While in streaming the image quality is adapted to the line speed and line errors result in a decrease of quality at the destination, in the file copy the difference between coding speed and line speed, together with transmission errors, is reflected in an increase of transmission time. The way news or audiovisual programs are transferred from a remote office to the production centre depends on the time window and the type of line available; in many cases, it must be done in real time (streaming), with the resulting image degradation. The main purpose of this work is workflow optimization and image quality maximization. For that reason, a transmission model for multimedia files adapted to JPEG2000 is described, based on combining the advantages of file transmission with those of streaming transmission while putting aside the disadvantages of each model. The method is based on two patents and consists of the safe transfer of the headers and of the data considered vital for reproduction. The rest of the data is sent by streaming, making it possible to carry out recovery operations and error concealment. Using this model, image quality is maximized according to the time window. In this paper, we first give a brief overview of broadcasters' requirements and the solutions offered by IP networks. We then focus on a different solution for video file transfer. We take the example of a broadcast center with mobile units (unidirectional video link) and regional headends (bidirectional link), and we also present a video file transfer method that satisfies the broadcaster requirements.
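A very simplified sketch of the split-transfer idea described above: the bytes considered vital for decoding travel over a reliable channel, while the rest is streamed and lost chunks are concealed at the receiver. The offsets, chunk size and zero-fill concealment below are illustrative assumptions, not the patented method.

```python
CHUNK = 1024

def split_file(data: bytes, vital_bytes: int):
    """Separate the vital part (headers, base quality layer) from the rest,
    which is cut into streaming chunks."""
    vital, rest = data[:vital_bytes], data[vital_bytes:]
    chunks = [rest[i:i + CHUNK] for i in range(0, len(rest), CHUNK)]
    return vital, chunks

def reassemble(vital: bytes, chunks, received_ok):
    """Rebuild the file, concealing lost streamed chunks (here: zero fill),
    so the decoder still finds a syntactically valid, lower-quality image."""
    body = b"".join(c if ok else bytes(len(c)) for c, ok in zip(chunks, received_ok))
    return vital + body

original = bytes(range(256)) * 40                         # stand-in for a JPEG2000 code-stream
vital, chunks = split_file(original, vital_bytes=512)
received_ok = [i % 7 != 0 for i in range(len(chunks))]    # simulate sporadic losses
rebuilt = reassemble(vital, chunks, received_ok)
print(len(original), len(rebuilt), "bytes; vital part intact:", rebuilt[:512] == original[:512])
```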
Abstract:
This paper presents a new methodology to build parametric models that estimate global solar irradiation adjusted to specific on-site characteristics, based on the evaluation of variable importance. Thus, those variables highly correlated to solar irradiation at a site are implemented in the model, and therefore different models might be proposed under different climates. This methodology is applied to a case study in the La Rioja region (northern Spain). A new model is proposed and evaluated for stability and accuracy against a review of twenty-two existing parametric models based on temperatures and rainfall at seventeen meteorological stations in La Rioja. The model evaluation methodology is based on bootstrapping, which achieves a high level of confidence in model calibration and validation from short time series (in this case five years, from 2007 to 2011). The proposed model improves on the estimates of the other twenty-two models, with an average mean absolute error (MAE) of 2.195 MJ/m2 day and an average confidence interval width (95% C.I., n=100) of 0.261 MJ/m2 day. 41.65% of the daily residuals in the case of SIAR and 20.12% in that of SOS Rioja fall within the uncertainty tolerance of the pyranometers of the two networks (10% and 5%, respectively). Relative differences between measured and estimated irradiation on an annual cumulative basis are below 4.82%. Thus, the proposed model might be useful to estimate annual sums of global solar irradiation, with insignificant differences with respect to pyranometer measurements.
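The bootstrap evaluation used to compare models from a short series can be sketched as follows: the daily (measured, estimated) pairs are resampled with replacement, the MAE is computed for each replicate, and the spread of the replicates gives the reported confidence interval; the data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
measured = rng.uniform(2.0, 30.0, size=5 * 365)                    # ~5 years of daily MJ/m2
estimated = measured + rng.normal(scale=2.2, size=measured.size)   # a parametric model's output

def bootstrap_mae(meas, est, n_boot=100):
    """Bootstrap distribution of the mean absolute error over daily pairs."""
    maes = []
    for _ in range(n_boot):
        idx = rng.integers(0, meas.size, size=meas.size)            # resample days with replacement
        maes.append(np.mean(np.abs(meas[idx] - est[idx])))
    maes = np.array(maes)
    low, high = np.percentile(maes, [2.5, 97.5])
    return maes.mean(), high - low

mae, ci_width = bootstrap_mae(measured, estimated)
print(f"average MAE = {mae:.3f} MJ/m2 day, 95% CI width = {ci_width:.3f}")
```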
Abstract:
This paper focuses on the general problem of coordinating multiple robots. More specifically, it addresses the self-selection of heterogeneous specialized tasks by autonomous robots. In this paper we focus on a truly distributed or decentralized approach, as we are particularly interested in solutions where the robots themselves, autonomously and in an individual manner, are responsible for selecting a particular task so that all the existing tasks are optimally distributed and executed. In this regard, we have established an experimental scenario to solve the corresponding multi-task distribution problem, and we propose a solution using two different approaches, applying Response Threshold Models as well as Learning Automata-based probabilistic algorithms. We have evaluated the robustness of the algorithms by perturbing the number of pending loads, to simulate the robots' error in estimating the real number of pending tasks, and also by generating loads dynamically over time. The paper ends with a critical discussion of the experimental results.
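For the Learning Automata side of the comparison, a hedged sketch of a linear reward-inaction update is given below: each robot keeps a probability vector over task types and reinforces choices that found pending work. The reward definition and learning rate are illustrative assumptions, not the paper's exact algorithm.

```python
import random

N_TASKS = 3
LAMBDA = 0.1                                   # learning rate

def update_reward_inaction(p, chosen, rewarded):
    """L_R-I update: move probability mass toward the chosen task only when
    the action was rewarded; do nothing otherwise."""
    if not rewarded:
        return p
    return [pi + LAMBDA * (1 - pi) if j == chosen else pi * (1 - LAMBDA)
            for j, pi in enumerate(p)]

random.seed(1)
pending = [8, 3, 0]                            # pending loads per task type
p = [1.0 / N_TASKS] * N_TASKS                  # one robot's action probabilities
for _ in range(50):
    chosen = random.choices(range(N_TASKS), weights=p)[0]
    rewarded = pending[chosen] > 0             # noisy load estimates could be injected here
    if rewarded:
        pending[chosen] -= 1
    p = update_reward_inaction(p, chosen, rewarded)
print("final selection probabilities:", [round(x, 2) for x in p])
```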