926 results for Maximum available power


Relevance:

30.00%

Publisher:

Abstract:

The use of modular or ‘micro’ maximum power point tracking (MPPT) converters at module level in series association, commercially known as “power optimizers”, allows each panel to be individually adapted to the load, mitigating some of the problems caused by partial shading and by different tilt and/or orientation angles of the photovoltaic (PV) modules. This is particularly relevant in building-integrated PV systems. This paper presents useful analytical studies of the behaviour of cascaded MPPT converters, together with evaluation test results of a prototype developed under a Spanish national research project. On the one hand, this work focuses on the development of new expressions that can be used to characterize the behaviour of individual MPPT converters applied to each module and connected in series in a typical grid-connected PV system. On the other hand, a novel characterization method for MPPT converters is developed, and experimental results are obtained for the prototype, both when individual partial shading is applied and when the converters are connected in a typical grid-connected PV array.
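Many power optimizers implement some variant of maximum power point tracking. As an illustrative sketch only (the paper does not specify the converters' algorithm), the following Python snippet shows a basic perturb-and-observe MPPT step against a hypothetical toy module curve:

```python
# Illustrative perturb-and-observe MPPT step; the PV curve below is a
# hypothetical placeholder with a maximum near 14.7 V.
def pv_power(v):
    # Toy module curve: linear up to 14 V, then rolling off.
    return max(v * (5.0 - 0.25 * max(v - 14.0, 0.0) ** 2), 0.0)

def perturb_and_observe(v, p_prev, step=0.1, direction=1):
    """Return the next operating voltage, last power, and perturbation sign."""
    p = pv_power(v)
    if p < p_prev:              # power dropped: reverse the perturbation
        direction = -direction
    return v + direction * step, p, direction

v, p, d = 15.0, 0.0, 1
for _ in range(200):            # the operating point oscillates around the MPP
    v, p, d = perturb_and_observe(v, p, direction=d)
print(f"Converged near V = {v:.2f} V, P = {p:.2f} W")
```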

Relevance:

30.00%

Publisher:

Abstract:

The variable nature of irradiance can produce significant fluctuations in the power generated by large grid-connected photovoltaic (PV) plants. Experimental 1-s data were collected throughout a year from six PV plants, 18 MWp in total. The dependence of short (below 10 min) power fluctuations on PV plant size was then investigated. The analysis focuses on the frequency of fluctuations as well as on the maximum fluctuation value registered. An analytic model able to describe the frequency of a given fluctuation for a certain day is proposed.
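As a minimal sketch of this kind of fluctuation analysis (the paper's exact metric definition may differ), the snippet below scans hypothetical 1-s power samples for the largest power change over windows of a given length, and for the frequency of fluctuations above a threshold:

```python
# Sliding-window fluctuation metric on hypothetical 1-s PV power data:
# fluctuation = |P(t) - P(t - dt)| normalized by rated power.
import numpy as np

rng = np.random.default_rng(3)
p_rated = 1.0e6                                    # plant rated power (W)
power = p_rated * np.clip(
    0.6 + np.cumsum(rng.normal(scale=2e-3, size=36_000)), 0, 1)

for dt in (10, 60, 600):                           # window lengths (s)
    fluct = np.abs(power[dt:] - power[:-dt]) / p_rated
    print(f"dt={dt:4d} s: max fluctuation {100 * fluct.max():5.1f} %, "
          f"freq(>10%) = {np.mean(fluct > 0.10):.4f}")
```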

Relevance:

30.00%

Publisher:

Abstract:

In the present uncertain global context of pursuing social stability and a steadily thriving economy, power demand is expected to grow, and global electricity generation could nearly double from 2005 to 2030. Fossil fuels will remain a significant component of this energy mix up to 2050, with an expected share of around 70% of global and ca. 60% of European electricity generation. Coal will remain a key player. Hence, a direct effect on CO2 emissions is expected under the business-as-usual scenario, which forecasts three times the present CO2 concentration, up to 1,200 ppm, by the end of this century. The Kyoto Protocol was the first attempt to assume global responsibility for CO2 emissions, with monitoring and cap targets for 2012 relative to 1990 levels. Some of the principal CO2 emitters did not ratify the reduction targets, although the USA and China are taking their own actions and parallel reduction measures. More efficient combustion processes that consume less fuel, while a significant contribution from the electricity generation sector towards dwindling CO2 concentration levels, might not be sufficient. Carbon Capture and Storage (CCS) technologies have gained importance since the beginning of the decade, with research and funding emerging to bring them into practical use. After the first research projects and initial scale testing, three principal capture processes are available today, with first figures showing up to 90% CO2 removal in standard applications at coal-fired power stations. Regarding the last part of the CO2 reduction chain, two options can be considered worthwhile: reuse (EOR and EGR) and storage. The study evaluates the state of CO2 capture technology development, as well as the availability and investment cost of the different technologies, with few operating-cost analyses possible at the time. The main findings and the abatement potential for coal applications are presented. DOE, NETL, MIT, European universities and research institutions, key technology enterprises and utilities, and key technology suppliers are the main sources of this study. A vision of the technology deployment is presented.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation, whose research has been conducted at the Group of Electronic and Microelectronic Design (GDEM) within the framework of the project Power Consumption Control in Multimedia Terminals (PCCMUTE), focuses on the development of an energy estimation model for a battery-powered embedded processor board. The main objectives and contributions of the work are summarized as follows. A model is proposed to obtain accurate energy estimates based on the linear correlation between performance monitoring counters (PMCs) and energy consumption. Given that the appropriate PMCs are unique to each system, the modeling methodology is improved to obtain stable accuracy, with only slight variations among multiple scenarios, and to be repeatable on other systems. It includes two steps: the first, a PMC filter, identifies the most suitable set among the PMCs available on a system; the second, k-fold cross-validation, avoids bias during the model training stage. The methodology is implemented on a commercial embedded board running the 2.6.34 Linux kernel and PAPI, a cross-platform interface to configure and access PMCs. The results show that the methodology maintains good stability across different scenarios and provides robust estimates, with an average relative error below 5%.
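A minimal sketch of the two-step methodology, assuming PMC samples and per-interval energy measurements have already been collected (the arrays below are hypothetical placeholders, and scikit-learn is an assumed tool choice), could look as follows:

```python
# Step 1: filter PMCs by linear correlation with energy.
# Step 2: validate a linear energy model with k-fold cross-validation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
pmcs = rng.random((200, 8))                        # hypothetical PMC samples
energy = (3 * pmcs[:, 0] + 2 * pmcs[:, 3] + 0.5 * pmcs[:, 5]
          + rng.normal(scale=0.05, size=200))      # hypothetical energy (J)

# PMC filter: keep the counters strongly correlated with energy.
corr = np.array([abs(np.corrcoef(pmcs[:, j], energy)[0, 1])
                 for j in range(pmcs.shape[1])])
selected = corr > 0.3

# k-fold cross-validation of the linear energy model.
scores = cross_val_score(LinearRegression(), pmcs[:, selected], energy,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0),
                         scoring="neg_mean_absolute_error")
print("selected PMCs:", np.flatnonzero(selected))
print("mean absolute error:", -scores.mean())
```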

Relevance:

30.00%

Publisher:

Abstract:

Single-core processors have reached their maximum clock speeds; new multicore architectures provide an alternative way to tackle this issue. The design of decoding applications running on top of these multicore platforms, and their optimization to exploit all of the system's computational power, is crucial to obtain the best results. Since development at the integration level of printed circuit boards is increasingly difficult to optimize, due to physical constraints and the inherent increase in power consumption, the development of multiprocessor architectures is becoming the new Holy Grail. In this sense, it is crucial to develop applications that can run on the new multicore architectures and to find distributions that maximize the potential use of the system. Today most commercial electronic devices on the market are built around embedded systems, and these devices have recently begun to incorporate multicore processors. Task management on multiple cores/processors is not a trivial issue, and good task/actor scheduling can yield significant improvements in terms of efficiency and processor power consumption. Scheduling the data flows between the actors that implement an application aims to open multicore architectures to more types of applications, with an explicit expression of parallelism within the application. On the other hand, the recent development of the MPEG Reconfigurable Video Coding (RVC) standard allows the reconfiguration of video decoders. RVC is a flexible standard compatible with MPEG-developed codecs, making it the ideal tool to integrate into new multimedia terminals to decode video sequences. With the new versions of the Open RVC-CAL Compiler (Orcc), a static mapping of the actors that implement the functionality of the application can be performed once the application executable has been generated. This static mapping must be done for each of the different cores available on the working platform. An embedded system with a dual-core ARMv7 processor has been chosen. This platform allows the desired tests to be carried out, the improvement over execution on a single core to be measured, and both to be contrasted with a PC-based multiprocessor system.
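As an illustrative sketch only (not Orcc's actual mapping algorithm), a greedy static mapping of dataflow actors to cores based on estimated actor workloads could look like this; the actor names and costs are hypothetical:

```python
# Greedy static actor-to-core mapping: assign the heaviest actors first
# to the currently least-loaded core. Workloads are hypothetical.
actors = {"parser": 3.0, "idct": 5.0, "motion": 4.0, "merger": 2.0}
num_cores = 2

loads = [0.0] * num_cores
mapping = {}
for actor, cost in sorted(actors.items(), key=lambda kv: -kv[1]):
    core = loads.index(min(loads))   # least-loaded core so far
    mapping[actor] = core
    loads[core] += cost

print(mapping)   # static assignment of each actor to a core
print(loads)     # resulting per-core load estimate
```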

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an envelope amplifier solution for envelope elimination and restoration (EER) that consists of a series combination of a switch-mode power supply (SMPS), based on three-level voltage cells, and a linear regulator. This cell topology offers several advantages over a previously presented envelope amplifier based on a different multilevel topology (two-level voltage cells). The topology of the multilevel converter affects the whole design of the envelope amplifier, and a comparison between both design alternatives regarding the size, complexity and efficiency of the solution is presented. Both envelope amplifier solutions have a bandwidth of 2 MHz with an instantaneous maximum power of 50 W. The linearity of the three-level cell solution, which is of critical importance in the implementation of the EER technique, is also analyzed. Additionally, considerations to optimize the design of the envelope amplifier and an experimental comparison between both cell topologies are included.
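A simplified numeric sketch of the series multilevel-SMPS plus linear-regulator concept, under idealized assumptions (lossless SMPS stage, resistive load, hypothetical envelope and level values):

```python
# The SMPS outputs, per sample, the lowest discrete level above the
# instantaneous envelope; the linear regulator drops the remaining
# voltage, dissipating the difference. All values are hypothetical.
import numpy as np

t = np.linspace(0, 1e-3, 10_000)
envelope = 12 + 10 * np.abs(np.sin(2 * np.pi * 2e3 * t))  # envelope (V)
levels = np.array([8.0, 16.0, 24.0])                      # cell levels (V)

smps = np.where(envelope <= levels[0], levels[0],
        np.where(envelope <= levels[1], levels[1], levels[2]))

i_load = envelope / 50.0           # assume a 50-ohm resistive load
p_out = np.mean(envelope * i_load) # power delivered to the load
p_in = np.mean(smps * i_load)      # power drawn through the (ideal) SMPS
print(f"Linear-regulator stage efficiency: {p_out / p_in:.2%}")
```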

Relevance:

30.00%

Publisher:

Abstract:

In this paper, calculus of variations and combined blade element and momentum theory (BEMT) are used to demonstrate that, in hover, when neither root nor tip losses are considered, the rotor that minimizes the total power (MPR) generates an induced velocity that varies linearly along the blade span, and the angle of attack of every blade element is constant and equal to its optimum value. The traditional ideal twist (ITR) and optimum (OR) rotors are revisited in the context of this variational framework. Two more optimum rotors are obtained by considering root and tip losses: the ORL and the MPRL. A comparison between these five rotors is presented and discussed. The MPR and MPRL show a remarkable saving of power for low values of both the thrust coefficient and the maximum aerodynamic efficiency. The results obtained can be exploited to improve the aerodynamic behaviour of rotary-wing micro air vehicles (MAVs). A comparison with experimental results from the literature is presented.
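As an illustrative numeric sketch (not the paper's variational derivation), the momentum-theory annulus integrals for a hovering rotor with a linearly varying induced velocity can be evaluated as follows; all values are hypothetical:

```python
# Hover momentum theory per annulus: dT = 4*pi*rho*v_i^2*r*dr and the
# induced power dP_i = v_i*dT, with v_i(r) = v_tip * r / R.
import numpy as np

rho, R = 1.225, 0.15           # air density (kg/m^3), MAV rotor radius (m)
v_tip = 6.0                    # induced velocity at the tip (m/s)
r = np.linspace(1e-3, R, 1000)
dr = r[1] - r[0]
v_i = v_tip * r / R            # linear induced-velocity distribution

dT = 4 * np.pi * rho * v_i**2 * r
thrust = np.sum(dT) * dr
p_induced = np.sum(v_i * dT) * dr
print(f"Thrust: {thrust:.3f} N, induced power: {p_induced:.3f} W")
```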

Relevance:

30.00%

Publisher:

Abstract:

The quality and reliability of the power generated by large grid-connected photovoltaic (PV) plants are negatively affected by the variability of the solar resource. This paper deals with the smoothing of power fluctuations due to the geographical dispersion of PV systems. The fluctuation frequency and the maximum fluctuation registered at a PV plant ensemble are analyzed to study these effects. We propose an empirical expression to compare the fluctuation attenuation due to both the size and the number of PV plants grouped. The convolution of the frequency distribution functions of single PV plants has turned out to be a successful tool to statistically describe the behavior of an ensemble of PV plants and to determine their maximum output fluctuation. Our work is based on experimental 1-s data collected throughout 2009 from seven PV plants, 20 MWp in total, separated by between 6 and 360 km.
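A minimal sketch of the convolution idea: if the single-plant fluctuation distributions are treated as independent, the ensemble distribution is their convolution. The discrete distributions below are hypothetical placeholders:

```python
# Convolve one plant's fluctuation PMF with itself to approximate the
# fluctuation distribution of an ensemble of seven plants.
import numpy as np

bins = np.arange(-5, 6)                  # fluctuation bins (% of rated power)
p_single = np.exp(-0.5 * (bins / 1.5)**2)
p_single /= p_single.sum()               # single-plant fluctuation PMF

p_ensemble = p_single.copy()
for _ in range(6):                       # seven plants in total
    p_ensemble = np.convolve(p_ensemble, p_single)

# The support widens with each convolution; rescale to per-ensemble percent.
support = np.arange(len(p_ensemble)) + 7 * bins[0]
print("Max ensemble fluctuation with P > 1e-6:",
      support[p_ensemble > 1e-6].max() / 7, "%")
```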

Relevance:

30.00%

Publisher:

Abstract:

The boundary element method (BEM) has been applied successfully to many engineering problems during the last decades. Compared with domain-type methods like the finite element method (FEM) or the finite difference method (FDM), the BEM can handle problems where the medium extends to infinity much more easily, as there is no need to develop special boundary conditions (quiet or absorbing boundaries) or infinite elements at the boundaries introduced to limit the domain studied. The determination of the dynamic stiffness of arbitrarily shaped footings is just one of the fields where the BEM has been the method of choice, especially in the 1980s. With the continuous development of computer technology and the available hardware, the size of the problems under study grew and, since the flop count for solving the resulting linear system of equations grows with the third power of the number of equations, there was a need for iterative methods with better performance. The GMRES algorithm presented in [1] is now widely used in implementations of the collocation BEM. While the FEM results in sparsely populated coefficient matrices, the BEM leads, in general, to fully or densely populated ones, depending on the number of subregions, posing a serious memory problem even for today's computers. If the geometry of the problem permits the surface of the domain to be meshed with equally shaped elements, many of the resulting coefficients will be calculated and stored repeatedly. The present paper shows how these unnecessary operations can be avoided, reducing the calculation time as well as the storage requirement. To this end, a similar coefficient identification algorithm (SCIA) has been developed and implemented in a program written in Fortran 90. The vertical dynamic stiffness of a single pile in layered soil has been chosen to test the performance of the implementation. The results obtained with the 3-D model may be compared with those obtained with an axisymmetric formulation, which are considered the reference values since the mesh quality is much better. The entire 3-D model comprises more than 35000 dofs, the biggest single region being a soil region with 21168 dofs. Note that the memory necessary to store all coefficients of this single region is about 6.8 GB, an amount which is usually not available on personal computers. In the problem under study, the interface zone between the two adjacent soil regions, as well as the surface of the top layer, may be meshed with equally sized elements. In this case the application of the SCIA leads to an important reduction in memory requirements: the maximum memory used during the calculation has been reduced to 1.2 GB. The application of the SCIA thus permits problems to be solved on personal computers which would otherwise require much more powerful hardware.
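The SCIA itself is implemented in Fortran 90; the Python sketch below only illustrates the underlying idea with a hypothetical placeholder kernel: element pairs with identical relative geometry share the same influence coefficient, so each unique coefficient is computed once and reused:

```python
# Cache BEM influence coefficients by relative source-field geometry,
# so equally shaped and spaced element pairs reuse one computation.
import numpy as np

def influence(rel):
    # Placeholder kernel: decays with source-field distance.
    return 1.0 / (4 * np.pi * max(np.linalg.norm(rel), 1e-9))

centers = [np.array([i * 0.5, j * 0.5, 0.0])
           for i in range(20) for j in range(20)]   # regular element grid
cache = {}
coeffs = np.empty((len(centers), len(centers)))
for a, xa in enumerate(centers):
    for b, xb in enumerate(centers):
        key = tuple(np.round(xb - xa, 9))  # same relative geometry -> same key
        if key not in cache:
            cache[key] = influence(xb - xa)
        coeffs[a, b] = cache[key]

print(f"{len(centers)**2} coefficients, {len(cache)} actually computed")
```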

Relevance:

30.00%

Publisher:

Abstract:

This paper analyzes the correlation between the fluctuations of the electrical power generated by the ensemble of 70 DC/AC inverters of a 45.6 MW PV plant. The use of real electrical power time series from a large collection of photovoltaic inverters of the same plant is an important contribution in the context of models built upon simplified assumptions to overcome the absence of such data. This data set is divided into three different fluctuation categories with a clustering procedure, which performs correctly with the clearness index and the wavelet variances. Afterwards, the time-dependent correlation between the electrical power time series of the inverters is estimated with the wavelet transform. The wavelet correlation depends on the distance between the inverters, the wavelet time scales and the daily fluctuation level. Correlation values for time scales below one minute are low, without dependence on the daily fluctuation level. For time scales above 20 minutes, high positive correlation values are obtained, and the decay rate with distance depends on the daily fluctuation level. At intermediate time scales the correlation depends strongly on the daily fluctuation level. The proposed methods have been implemented using free software; source code is available as supplementary material.
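A minimal sketch of scale-wise wavelet correlation, using the stationary wavelet transform from the PyWavelets package (an assumed tool choice; the paper's supplementary code may differ) on two hypothetical stand-in power series:

```python
# Correlation between two power series, computed per wavelet scale from
# the detail coefficients of a stationary wavelet transform.
import numpy as np
import pywt

rng = np.random.default_rng(1)
n = 2**12                                    # ~68 min of 1-s samples
common = np.cumsum(rng.normal(size=n))       # shared slow component
p1 = common + rng.normal(scale=5, size=n)    # inverter 1 (hypothetical)
p2 = common + rng.normal(scale=5, size=n)    # inverter 2 (hypothetical)

levels = 8                                   # pywt.swt lists levels high-to-low
for (_, d1), (_, d2), lvl in zip(pywt.swt(p1, "db4", level=levels),
                                 pywt.swt(p2, "db4", level=levels),
                                 range(levels, 0, -1)):
    rho = np.corrcoef(d1, d2)[0, 1]
    print(f"scale ~{2**lvl:4d} s: correlation {rho:+.2f}")
```

With the shared slow component dominating the coarse scales, the printed correlations rise with the time scale, mirroring the behaviour reported above.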

Relevance:

30.00%

Publisher:

Abstract:

Deorbit, power generation, and thrusting performances of a bare thin-tape tether and an insulated tether with a spherical electron collector are compared for typical conditions in low-Earth orbit and common values of tether length L = 4–20 km and cross-sectional area A = 1–5 mm². The relative performance of moderately large spheres, as compared with bare tapes, improves but still lags as one moves from deorbiting to power generation and to thrusting: maximum drag in deorbiting requires maximum current and thus fully reflects on anodic collection capability, whereas extracting power at a load, or using a supply to push current against the motional field, requires reduced currents. The relative performance also improves as one moves to smaller A, which makes the sphere approach the limiting short-circuit current, and at greater L, with the higher bias only moderately affecting the already large bare-tape current. For a 4-m-diameter sphere, relative performances range from a 0.09 sphere-to-bare-tape drag ratio for L = 4 km and A = 5 mm² to a 0.82 thrust-efficiency ratio for L = 20 km and A = 1 mm². Extremely large spheres collecting the short-circuit current at zero bias in daytime (diameters of about 14 m for A = 1 mm² and 31 m for A = 5 mm²) barely outperform the bare tape for L = 4 km and are still outperformed by the bare tape for L = 20 km in both deorbiting and power generation; these large spheres perform like the bare tape in thrusting. In no case was the sphere or sphere-related hardware taken into account in evaluating system mass, which would have reduced the sphere performances even further.

Relevance:

30.00%

Publisher:

Abstract:

The modal analysis of a structural system consists of computing its vibration modes. The experimental way to estimate these modes requires exciting the system with a measured or known input and then measuring the system output at different points using sensors. Finally, system inputs and outputs are used to compute the modes of vibration. When the system is a large structure like a building or a bridge, the tests have to be performed in situ, so it is not possible to measure system inputs such as wind or traffic. Even if a known input is applied, the procedure is usually difficult and expensive, and there are still uncontrolled disturbances acting at the time of the test. These facts led to the idea of computing the modes of vibration using only the measured vibrations, regardless of the inputs that originated them, whether ambient loads (wind, earthquakes, etc.) or operational loads (traffic, human loading, etc.). This procedure is usually called Operational Modal Analysis (OMA) and, in general, consists of fitting a mathematical model to the measured data under the assumption that the unobserved excitations are realizations of a stationary stochastic process (usually white noise). The modes of vibration are then computed from the estimated model. The first issue investigated in this thesis is the performance of the Expectation-Maximization (EM) algorithm for the maximum likelihood estimation of the state space model in the field of OMA. The algorithm is described in detail, and how to apply it to vibration data is analysed. It is then compared to another well-known method, the Stochastic Subspace Identification algorithm. The maximum likelihood estimate enjoys some optimal statistical properties that make it very attractive in practice, but the most remarkable property of the EM algorithm is that it can be used to address a wide range of situations in OMA. In this work, three additional state space models are proposed and estimated using the EM algorithm, as illustrated after the list below:
• The first model is proposed to estimate the modes of vibration when several tests are performed on the same structural system. Instead of analysing record by record and then computing averages, the EM algorithm is extended for the joint estimation of the proposed state space model using all the available data.
• The second state space model is used to estimate the modes of vibration when the number of available sensors is lower than the number of points to be tested. In these cases it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple sensor setups). Here, the proposed state space model and the EM algorithm are used to estimate the modal parameters taking into account the data of all setups.
• Finally, a state space model is proposed to estimate the modes of vibration in the presence of unmeasured inputs that cannot be modelled as white noise processes. In these cases, the frequency components of the inputs cannot be separated from the eigenfrequencies of the system, and spurious modes are obtained in the identification process. The idea is to measure the response of the structure to different inputs; it is then assumed that the parameters common to all the data correspond to the structure (modes of vibration), while the parameters found only in a specific test correspond to the input in that test. The problem is solved using the proposed state space model and the EM algorithm.
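As a minimal sketch of the final step common to all these models, the modal parameters follow from the eigenvalues of the estimated discrete-time state matrix; the matrix below is a hypothetical stand-in for one estimated with the EM algorithm:

```python
# Natural frequencies and damping ratios from the eigenvalues of a
# discrete-time state matrix A: lambda = exp(mu * dt), f = |mu| / (2*pi),
# zeta = -Re(mu) / |mu|. The A used here is a hypothetical stand-in.
import numpy as np

dt = 0.01                                    # sampling period (s)
wn = np.array([2*np.pi*1.5, 2*np.pi*3.2])    # "true" natural freqs (rad/s)
zeta = np.array([0.01, 0.02])                # "true" damping ratios
mu = -zeta*wn + 1j*wn*np.sqrt(1 - zeta**2)   # continuous-time poles
lam = np.exp(np.concatenate([mu, mu.conj()]) * dt)
A = np.diag(lam)                             # stand-in for the estimated A

mu_hat = np.log(np.linalg.eigvals(A)) / dt   # back to continuous poles
freqs = np.abs(mu_hat) / (2*np.pi)           # natural frequencies (Hz)
damping = -mu_hat.real / np.abs(mu_hat)      # damping ratios
for f, z in sorted(set(zip(np.round(freqs, 4), np.round(damping, 4)))):
    print(f"f = {f:.3f} Hz, zeta = {z:.4f}")
```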

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a new approach for mapping air quality, so that this variable of the physical environment can be taken into account in physical or territorial planning. Ambient air quality is not normally considered in territorial planning, mainly due to the complexity of its composition and behaviour and the difficulty of obtaining reliable and verified information. In addition, the wide spatial and temporal variability of air quality measurements makes their territorial consideration difficult and requires georeferenced information, which involves predicting measurements at places in the territory where there are no data. This thesis develops a geostatistical model for predicting air quality values in a territory. The proposed model is based on the interpolation of pollutant measurements from the monitoring stations, using ordinary kriging, after a detrending process that removes the local character of the sampled values. With the detrending process, the trends in the time series of sampled data, due to the temporal and spatial variations of air quality, are removed. The transformation of the air quality values into site-independent quantities is performed using land use parameters and other characteristic parameters of the local scale. This detrending of the monitoring data results in a spatially homogeneous input set, which is a prerequisite for the correct use of any interpolation algorithm and, in particular, of ordinary kriging. After the interpolation step, a retrending or retransformation is applied in order to restore the local character to the final map at places where no monitoring data are available. For the development of this model, the Community of Madrid was chosen as the study area because of the availability of actual data. These data, air quality values and local parameters, are used at two stages: first, to optimize the selection of the most suitable indicators for the detrending process and to develop each of the model stages; and second, to fully implement the developed model and to evaluate its predictive power. The model is applied to estimate the average and maximum NO2 values in the study territory. With the implementation of the proposed model, the territorialization of air quality data is undertaken with a reduction in three key factors for the effective integration of this parameter in territorial planning or in the associated decision-making process: uncertainty, the time taken to generate the prediction, and the associated resources (data and costs). This model allows pollutant values to be predicted within hours, compared with the modelling or analysis periods required by other methodologies. The required resources are also minimal: only data from the monitoring stations in the territory are needed, and these are normally available on the institutional websites of the authorities responsible for the control and management of air quality measurement networks. With regard to the prediction uncertainties, the results of the proposed model are statistically very accurate, and the mean errors are generally similar to or lower than those found with the application of existing methodologies.
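A minimal sketch of the interpolation step, using ordinary kriging from the PyKrige package (an assumed tool choice; the thesis does not prescribe an implementation) on hypothetical detrended NO2 station values:

```python
# Ordinary kriging of detrended (site-independent) NO2 values; the
# station coordinates and values below are hypothetical placeholders.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(2)
x = rng.uniform(0, 100, 24)             # station easting (km)
y = rng.uniform(0, 100, 24)             # station northing (km)
no2 = 30 + 8 * rng.normal(size=24)      # detrended NO2 values (ug/m3)

ok = OrdinaryKriging(x, y, no2, variogram_model="spherical")
gridx = np.linspace(0, 100, 50)
gridy = np.linspace(0, 100, 50)
z_pred, z_var = ok.execute("grid", gridx, gridy)

# z_pred would then be retrended with the local-scale parameters before
# producing the final air quality map.
print(z_pred.shape, float(z_var.max()))
```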

Relevance:

30.00%

Publisher:

Abstract:

Solar thermal power plants are usually installed in locations with high yearly average solar radiation, often deserts. In such conditions, the cooling water required for thermodynamic cycles is rarely available. Moreover, when solar radiation is high, the ambient temperature is very high as well; this leads to excessive condensation temperatures, especially when air condensers are used, and decreases the plant efficiency. However, the temperature variation in deserts is often very large, which leads to relatively low temperatures during the night. This fact can be exploited with a closed cooling system, in which the coolant (water) is chilled during the night and stored. The chilled water is then used during peak-temperature hours to cool the condenser (dry cooling), thus enhancing power output and efficiency. The present work analyzes the performance improvement achieved by night-time cool thermal storage, compared with an equivalent air-cooled power plant. Dry cooling proves to be energy-effective for moderately high day-night temperature differences (20 °C), often found in desert locations. The storage volume required for different power plant efficiencies has also been studied, revealing an asymptotic tendency.
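A back-of-the-envelope comparison (Carnot bound only, not the paper's cycle model) illustrates why lowering the condensation temperature at peak hours helps; all temperatures are hypothetical:

```python
# Ideal-efficiency comparison between daytime air cooling and cooling
# with night-chilled water, using the Carnot bound 1 - T_cold / T_hot.
t_hot = 550 + 273.15          # hypothetical turbine inlet temperature (K)
for label, t_cond in [("air-cooled at 45 C (day)", 45 + 273.15),
                      ("night-chilled water at 25 C", 25 + 273.15)]:
    eta = 1 - t_cond / t_hot
    print(f"{label}: Carnot efficiency {eta:.1%}")
```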