925 results for Data uncertainty


Relevance:

30.00%

Publisher:

Abstract:

An uncertainty propagation methodology based on the Monte Carlo method is applied to PWR nuclear design analysis to assess the impact of nuclear data uncertainties in 235U, 238U, 239Pu, and the thermal scattering library for hydrogen in water. This uncertainty analysis is compared with the design and acceptance criteria to ensure the adequacy of bounding estimates in safety margins.
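A minimal sketch of the Monte Carlo sampling loop behind such a methodology, assuming a toy stand-in for the actual lattice/core calculation and illustrative cross-section values and covariances (none of these numbers come from the abstract): perturbed nuclear data are drawn from a multivariate normal built from the covariance matrix, the model is evaluated per sample, and output statistics are collected.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical best-estimate parameters: nu*Sigma_f, Sigma_a, leakage term (illustrative)
mu = np.array([1.50, 0.39, 0.01])
rel_cov = np.array([[0.0004, 0.0001, 0.0],    # assumed relative covariance matrix
                    [0.0001, 0.0009, 0.0],
                    [0.0,    0.0,    0.0025]])
cov = rel_cov * np.outer(mu, mu)              # absolute covariance

def k_eff(p):
    """Toy stand-in for the neutronics calculation: k = production / losses."""
    nu_sf, sig_a, sig_l = p
    return nu_sf / (sig_a + sig_l)

samples = rng.multivariate_normal(mu, cov, size=1000)
k = np.array([k_eff(p) for p in samples])
print(f"k-eff = {k.mean():.5f} +/- {k.std(ddof=1):.5f}")
```

The resulting output distribution is what gets compared against design and acceptance criteria.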

Relevance:

30.00%

Publisher:

Abstract:

Fission product yields are fundamental parameters for several nuclear engineering calculations and in particular for burn-up/activation problems. The impact of their uncertainties was widely studied in the past and evaluations were released, although still incomplete. Recently, the nuclear community expressed the need for full fission yield covariance matrices to produce inventory calculation results that take into account the complete uncertainty data. In this work, we studied and applied a Bayesian/generalised least-squares method for covariance generation, and compared the generated uncertainties to the original data stored in the JEFF-3.1.2 library. We then focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of 235U. Calculations were carried out with different codes (ACAB and ALEPH-2) after introducing the new covariance values, and the results were compared with those obtained with the uncertainty data currently provided by the library. The uncertainty quantification was performed with the Monte Carlo sampling technique. The results show that correlations between fission yields strongly affect the statistics of decay heat.

Introduction: Nowadays, any engineering calculation performed in the nuclear field should be accompanied by an uncertainty analysis in which different sources of uncertainty are taken into account. Works such as those performed under the UAM project (Ivanov et al., 2013) treat nuclear data as a source of uncertainty, in particular cross-section data, for which uncertainties given in the form of covariance matrices are already provided in the major nuclear data libraries. Meanwhile, fission yield uncertainties were often neglected or treated only superficially, because their effects were considered of second order compared to cross sections (Garcia-Herranz et al., 2010). However, the Working Party on International Nuclear Data Evaluation Co-operation (WPEC)
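The effect of yield correlations on decay heat statistics can be illustrated with a deliberately small sketch (two hypothetical fission products with made-up yields, decay constants and energies, not JEFF data): sampling the yields from a multivariate normal with and without correlation changes the spread of the computed decay heat even though the marginal uncertainties are identical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical fission products: yield, decay constant [1/s], energy per decay [MeV]
y   = np.array([0.06, 0.03])
lam = np.array([1e-3, 5e-4])
E   = np.array([1.2, 0.8])
sig = 0.05 * y                         # assumed 5 % yield uncertainties

def decay_heat(yields, t=600.0):
    # Fission pulse at t=0: each product contributes y * lam * E * exp(-lam*t)
    return np.sum(yields * lam * E * np.exp(-lam * t), axis=-1)

for rho in (0.0, -0.9):                # uncorrelated vs strongly anti-correlated yields
    cov = np.array([[sig[0]**2,          rho * sig[0] * sig[1]],
                    [rho * sig[0] * sig[1], sig[1]**2]])
    h = decay_heat(rng.multivariate_normal(y, cov, size=5000))
    print(f"rho = {rho:+.1f}: decay heat = {h.mean():.3e} +/- {h.std(ddof=1):.1e} MeV/s per fission")
```

With anti-correlated yields the decay heat variance shrinks noticeably, which is the qualitative effect the abstract reports.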

Relevance:

30.00%

Publisher:

Abstract:

The aim of this work is to test the present status of evaluated nuclear decay and fission yield data libraries in predicting the decay heat, the delayed neutron emission rate, the average neutron energy and the delayed neutron spectra after a fission pulse. Calculations are performed with JEFF-3.1.1 and ENDF/B-VII.1 and compared with experimental values. An uncertainty propagation assessment of the current nuclear data uncertainties is performed.
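Such pulse calculations are summation calculations of the form f(t) = Σᵢ yᵢ λᵢ Ēᵢ e^(−λᵢt). A minimal sketch with a hypothetical three-nuclide set (invented yields, decay constants and mean energies) that neglects parent-to-daughter feeding, which real library-driven calculations of course include:

```python
import numpy as np

# Hypothetical nuclide set: independent yield, decay constant [1/s], mean beta+gamma energy [MeV]
nuclides = [
    (0.0600, 2.9e-2, 2.0),
    (0.0350, 1.1e-3, 1.4),
    (0.0210, 7.6e-5, 0.9),
]

def pulse_decay_heat(t):
    """Summation decay heat after a fission pulse, neglecting decay chains."""
    return sum(y * lam * np.exp(-lam * t) * E for y, lam, E in nuclides)

for t in (1.0, 10.0, 100.0, 1000.0):
    print(f"t = {t:7.1f} s : f(t) = {pulse_decay_heat(t):.3e} MeV/(fission s)")
```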

Relevance:

30.00%

Publisher:

Abstract:

The uncertainties on the isotopic composition throughout the burnup due to nuclear data uncertainties are analysed. The different sources of uncertainty (decay data, fission yields and cross sections) are propagated individually and their effects assessed. Two applications are studied: EFIT (an ADS-like reactor) and ESFR (a sodium fast reactor). The impacts of the cross-section uncertainties provided by the EAF-2010, SCALE6.1 and COMMARA-2.0 libraries are compared. These Uncertainty Quantification (UQ) studies have been carried out with a Monte Carlo sampling approach implemented in the depletion/activation code ACAB. This implementation has been improved to overcome depletion/activation problems with variations of the neutron spectrum.
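A minimal sketch of Monte Carlo sampling through depletion, assuming a two-nuclide chain with an analytic Bateman solution and invented cross-section and decay-constant uncertainties (the actual ACAB studies solve full depletion chains):

```python
import numpy as np

rng = np.random.default_rng(1)
phi = 3e14             # neutron flux [n/cm^2/s], assumed constant
t = 360 * 24 * 3600    # 360 days of irradiation [s]

# Chain A --(n,gamma)--> B --(beta decay)--> C, illustrative values with ~5 % uncertainties
sigma_mu, sigma_sd = 2.0e-24, 0.1e-24   # capture cross section [cm^2]
lam_mu,   lam_sd   = 1.0e-8,  0.5e-9    # decay constant of B [1/s]

def n_b(sigma, lam, NA0=1.0):
    """Analytic Bateman solution for N_B(t), production rate sigma*phi*N_A."""
    a = sigma * phi
    return NA0 * a / (lam - a) * (np.exp(-a * t) - np.exp(-lam * t))

NB = n_b(rng.normal(sigma_mu, sigma_sd, 2000), rng.normal(lam_mu, lam_sd, 2000))
print(f"N_B/N_A0 = {NB.mean():.4e} +/- {NB.std(ddof=1):.1e} "
      f"({100 * NB.std(ddof=1) / NB.mean():.1f} %)")
```

Sampling each data type separately, as the abstract describes, is a matter of holding the other inputs at their nominal values.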

Relevance:

30.00%

Publisher:

Abstract:

Propagation of nuclear data uncertainties in reactor calculations is of interest for design purposes and for library evaluation. Previous versions of the GRS XSUSA library propagated only neutron cross-section uncertainties. We have extended the XSUSA uncertainty assessment capabilities by including propagation of fission yield and decay data uncertainties, because of their relevance in depletion simulations. We apply this extended methodology to the UAM6 PWR Pin-Cell Burnup Benchmark, which involves uncertainty propagation through burnup.

Relevance:

30.00%

Publisher:

Abstract:

Propagation of nuclear data uncertainties to calculated values is of interest for design purposes and for library evaluation. XSUSA, developed at GRS, propagates cross-section uncertainties to nuclear calculations. In depletion simulations, fission yields and decay data are also involved and constitute an additional source of uncertainty that must be taken into account. We have developed tools to generate varied fission yield and decay libraries and to propagate uncertainties through depletion, in order to complete the XSUSA uncertainty assessment capabilities. A simple test case to exercise the methodology is defined and discussed.
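A hedged sketch of what "generating varied libraries" can look like: sample perturbation factors from the evaluated relative uncertainties and apply them to the nominal yields, one library per sample. The isotopes and uncertainty values below are illustrative, and normalization constraints on real yield sets (e.g., yields summing to two per fission) are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

# Nominal independent fission yields with relative uncertainties (illustrative subset)
nominal = {"Sr-90": 0.0578, "Cs-137": 0.0622, "I-131": 0.0289}
rel_unc = {"Sr-90": 0.02,   "Cs-137": 0.015,  "I-131": 0.04}

def sample_libraries(n_samples):
    """One varied library per sample: nominal yields times log-normal factors
    (log-normal keeps the perturbed yields positive)."""
    return [{iso: y * rng.lognormal(mean=0.0, sigma=rel_unc[iso])
             for iso, y in nominal.items()}
            for _ in range(n_samples)]

for i, lib in enumerate(sample_libraries(3)):
    print(f"library {i}: " + ", ".join(f"{k}={v:.4f}" for k, v in lib.items()))
```

Each varied library is then fed to a depletion run, and the spread of the results over all runs gives the propagated uncertainty.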

Relevance:

30.00%

Publisher:

Abstract:

In this paper we present a global overview of the recent study carried out in Spain for the new seismic hazard map, whose final goal is the revision of the Building Code in our country (NCSE-02). The study was carried out by a working group joining experts from the Instituto Geografico Nacional (IGN) and the Technical University of Madrid (UPM), with the different phases of the work supervised by a committee of national experts from the public institutions involved in seismic hazard. The PSHA method (Probabilistic Seismic Hazard Assessment) has been followed, quantifying the epistemic uncertainties through a logic tree and the aleatory ones, linked to the variability of parameters, by means of probability density functions and Monte Carlo simulations.

In a first phase, the inputs have been prepared, which essentially are: 1) an update of the project catalogue and its homogenization to Mw; 2) a proposal of zoning models and source characterization; 3) calibration of Ground Motion Prediction Equations (GMPEs) with actual data and development of a local model with data collected in Spain for Mw < 5.5. In a second phase, a sensitivity analysis of the different input options on hazard results has been carried out in order to establish criteria for defining the branches of the logic tree and their weights. Finally, the hazard estimation was done with the logic tree shown in figure 1, which includes nodes quantifying the uncertainties corresponding to: 1) the method for hazard estimation (zoning and zoneless); 2) the zoning models; 3) the GMPE combinations used; and 4) the regression method for the estimation of source parameters. In addition, the aleatory uncertainties corresponding to the magnitude of the events, the recurrence parameters and the maximum magnitude for each zone have also been considered by means of probability density functions and Monte Carlo simulations.

The main conclusions of the study are presented here, together with the results obtained in terms of PGA and other spectral accelerations SA(T) for return periods of 475, 975 and 2475 years. Maps of the coefficient of variation (COV) are also presented, to give an idea of the zones where the dispersion among results is highest and of the zones where the results are robust.
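A minimal sketch of how epistemic branches of a logic tree are combined, with entirely invented hazard curves and weights (not the study's): each branch contributes an annual exceedance-rate curve, the weighted mean curve is formed, and the design ground motion is read off at the target rate (1/475 per year for the 475-year return period).

```python
import numpy as np

# Hypothetical hazard curves from two logic-tree branches: annual rate of exceeding each PGA
pga = np.array([0.05, 0.10, 0.20, 0.40])                     # PGA levels [g]
branches = {                                                  # weight, exceedance rates [1/yr]
    "zoned":    (0.6, np.array([2e-2, 6e-3, 1e-3, 1e-4])),
    "zoneless": (0.4, np.array([3e-2, 8e-3, 2e-3, 3e-4])),
}

# Weighted-mean hazard curve over the logic tree
mean_rate = sum(w * r for w, r in branches.values())

# Interpolate (in log rate) the PGA with annual exceedance rate 1/475
target = 1.0 / 475.0
pga_475 = np.interp(np.log(target), np.log(mean_rate[::-1]), pga[::-1])
print(f"PGA(475 yr) ~= {pga_475:.3f} g")
```

In the actual study the aleatory part (event magnitudes, recurrence parameters, maximum magnitude) is additionally sampled by Monte Carlo within each branch, producing the dispersion summarized by the COV maps.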

Relevance:

30.00%

Publisher:

Abstract:

Prediction at ungauged sites is essential for water resources planning and management. Ungauged sites have no observations of flood magnitudes, but some site and basin characteristics are known. Regression models relate physiographic and climatic basin characteristics to flood quantiles, which can be estimated from observed data at gauged sites. However, these models assume linear relationships between variables, and prediction intervals are estimated only from the variance of the residuals of the fitted model; furthermore, the effect of the uncertainties in the explanatory variables on the dependent variable cannot be assessed. This paper presents a methodology to propagate the uncertainties that arise in the process of predicting flood quantiles at ungauged basins with a regression model. In addition, Bayesian networks were explored as a feasible tool for predicting flood quantiles at ungauged sites. Owing to their probabilistic nature, Bayesian networks naturally take uncertainties into account; they are able to capture non-linear relationships between variables and they yield a probability distribution of discharges as a result. The methodology was applied to a case study in the Tagus basin in Spain.
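A sketch of the classical regression baseline the paper improves upon, on synthetic data (basin characteristics and the Q100 relation below are invented for illustration): a log-linear regression of the flood quantile on area and precipitation, with the standard prediction interval derived from the residual variance, which is exactly the interval the abstract notes cannot reflect input uncertainty.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic gauged basins: area [km^2], mean annual precipitation [mm], Q100 [m^3/s]
area = rng.uniform(50, 2000, 40)
precip = rng.uniform(400, 1200, 40)
q100 = 0.1 * area**0.8 * precip**0.5 * rng.lognormal(0.0, 0.25, 40)

# Log-linear regression: log Q = b0 + b1 log A + b2 log P + eps
X = np.column_stack([np.ones_like(area), np.log(area), np.log(precip)])
y = np.log(q100)
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
s2 = res[0] / (len(y) - X.shape[1])                 # residual variance

# Prediction at a hypothetical ungauged basin
x0 = np.array([1.0, np.log(800.0), np.log(900.0)])
var_pred = s2 * (1.0 + x0 @ np.linalg.inv(X.T @ X) @ x0)
q_hat = np.exp(x0 @ beta)
lo = np.exp(x0 @ beta - 1.96 * np.sqrt(var_pred))
hi = np.exp(x0 @ beta + 1.96 * np.sqrt(var_pred))
print(f"Q100 ~= {q_hat:.0f} m^3/s, 95 % prediction interval [{lo:.0f}, {hi:.0f}]")
```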

Relevance:

30.00%

Publisher:

Abstract:

An uncertainty propagation methodology based on the Monte Carlo method is applied to PWR nuclear design analysis to assess the impact of nuclear data uncertainties. The importance of the nuclear data uncertainties for 235U, 238U, 239Pu, and the thermal scattering library for hydrogen in water is analyzed. This uncertainty analysis is compared with the design and acceptance criteria to ensure the adequacy of bounding estimates in safety margins.
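When Monte Carlo results are compared against acceptance criteria, non-parametric tolerance limits are often used to decide how many runs are needed; Wilks' formula is the standard tool in this context, although the abstract does not say which sample-size criterion this particular study used. A short sketch:

```python
import math

def wilks_n(coverage=0.95, confidence=0.95, order=1):
    """Smallest sample size n for a one-sided non-parametric tolerance limit:
    the order-th largest of n runs bounds `coverage` of the output population
    with probability >= `confidence` (Wilks' formula)."""
    n = order
    while True:
        # P(fewer than `order` of n samples exceed the coverage quantile)
        p_below = sum(math.comb(n, k) * (1 - coverage)**k * coverage**(n - k)
                      for k in range(order))
        if 1.0 - p_below >= confidence:
            return n
        n += 1

print(wilks_n())            # 59 runs for a 95 %/95 % first-order limit
print(wilks_n(order=2))     # 93 runs if the second-largest value is used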

Relevance:

30.00%

Publisher:

Abstract:

The author has worked as part of the wind measurement research team at the National Renewable Energy Centre (CENER), Spain, in cooperation with the Technical University of Madrid and the Technical University of Denmark. The present report summarizes the research work carried out during the last 4.5 years on the sources of error of remote wind sensing systems based on lidar technology, focusing on the error caused by complex terrain effects. This work corresponds to one task of the work package devoted to remote wind sensing systems within the European FP7 research project WAUDIT. Additionally, the real wind data were obtained during measurement campaigns in flat and complex terrain belonging to the European FP7 research project SAFEWIND.

The main objective of this research work is to determine the effects of complex terrain on the error of the wind speed measured by remote sensing lidar systems. With this knowledge, it is possible to propose a methodology to correct the error of the lidar measurements. This methodology is based on the estimation of the variations of the non-uniform wind field within the lidar measurement volume. The average wind field variations are predicted from the results of RANS computational wind simulations performed for the Alaiz test site. The correction methodology is verified against the RANS simulation results and validated with the real measurements acquired during the complex terrain measurement campaign.

At the beginning of this report, the theoretical framework describing the measurement principle of the lidar technology used is presented, in order to familiarize the reader with the main concepts used throughout this work. The state of the art is then presented, describing the advances made in the development of lidar technology applied to the wind energy sector.

In the experimental part of this research work, the data acquired during the two measurement campaigns have been studied. These campaigns were performed in flat and complex terrain, in order to complement the knowledge gained in each of them and to compare the terrain effects on wind measurements performed with remote sensing lidar systems. The first experimental campaign took place in flat terrain, at the Høvsøre wind turbine test site owned by DTU Wind Energy (formerly Risø). The second experimental campaign was carried out at the Alaiz wind turbine test site owned by CENER. Exactly the same two lidar devices were used in both campaigns, making these experiments highly relevant in the context of wind resource assessment. One lidar is based on continuous-wave technology, while the other is based on pulsed-wave technology. In addition to the lidars, the wind speed was measured with cup anemometers, wind vanes and vertical anemometers installed on meteorological masts. The met mast sensors are considered the reference measurements in the present study.

First, the ten-minute averages of the wind measurements have been analysed. The objective is to identify the main sources of error in the lidar measurements caused by different atmospheric conditions and by the non-uniform wind flow induced by the complex terrain. The lidar error has been studied as a function of several statistical properties of the wind, such as the tilt angle, the turbulence intensity, the vertical velocity, the atmospheric stability and the terrain characteristics. The purpose is to use this knowledge to define data filtering criteria.

Next, a methodology is proposed to correct the lidar error caused by the non-uniform wind field produced by the presence of complex terrain. This methodology is based on an initial mathematical analysis of the wind speed calculation process of continuous-wave lidars. The proposed correction methodology makes use of the wind field variations calculated from the RANS simulations performed for the Alaiz test site. An important advantage of this methodology is that the properties of the real wind field contained in the instantaneous measurements of the continuous-wave lidar can enable additional analyses as part of future work.

Within the project framework, the daily work was carried out at the CENER facilities, under close supervision by UPM, including a 1.5-month stay at the university. During this stay, the mathematical analysis of the wind measurements performed by the continuous-wave lidar was defined, and the effects of the non-uniform wind field on the lidar measurement error were analytically derived after adopting some simplifying assumptions. Additionally, during the initial stage of this project an important cooperation with DTU Wind Energy was developed, thanks to which the author spent 1.5 months in Denmark. During this stay, the author visited the flat terrain measurement campaign in order to learn the basic aspects of the design of experimental measurement campaigns, the assessment of the terrain and surroundings, the instrumentation of the meteorological mast, the data acquisition and storage system, and the study and reporting of measurement analyses.

ABSTRACT

The present report summarizes the research work performed during the last 4.5 years on the sources of lidar bias due to complex terrain. This work corresponds to one task of the remote sensing work package belonging to the FP7 WAUDIT project. Furthermore, the field data from the wind velocity measurement campaigns of the FP7 SafeWind project have been used in this report. The main objective of this research work is to determine the terrain effects on the lidar bias in the measured wind velocity. With this knowledge, it is possible to propose a lidar bias correction methodology. This methodology is based on an estimation of the wind field variations within the lidar scan volume. The wind field variations are calculated from RANS simulations performed for the Alaiz test site. The methodology is validated against real-scale measurements recorded during an eight-month measurement campaign at the Alaiz test site. Firstly, the mathematical framework of the lidar sensing principle is introduced and an overview of the state of the art is presented. The experimental part includes the study of two different but complementary experiments. The first experiment was a measurement campaign performed in flat terrain, at the DTU Wind Energy Høvsøre test site, while the second experiment was performed in complex terrain at the CENER Alaiz test site. Exactly the same two lidar devices, based on continuous-wave and pulsed-wave systems, were used in the two consecutive measurement campaigns, making this a relevant experiment in the context of wind resource assessment. The wind velocity was sensed by the lidars and by standard cup anemometry and wind vanes (installed on a met mast). The met mast sensors are considered the reference wind velocity measurements. The first analysis of the experimental data is dedicated to identifying the main sources of lidar bias present in the 10-minute average values. The purpose is to identify the bias magnitude introduced by different atmospheric conditions and by the non-uniform wind flow resulting from the terrain irregularities. The lidar bias has been studied as a function of several statistical properties of the wind flow, such as the tilt angle, turbulence intensity, vertical velocity, atmospheric stability and the terrain characteristics. The aim of this exercise is to use this knowledge to define useful lidar bias data filters. Then, a methodology to correct the lidar bias caused by non-uniform wind flow is proposed, based on the initial mathematical analysis of the lidar measurements. The proposed lidar bias correction methodology has been developed focusing on the continuous-wave lidar system. In a last step, the proposed lidar bias correction methodology is validated with the data of the complex terrain measurement campaign. The methodology makes use of the wind field variations obtained from the RANS analysis. The results are presented and discussed. The advantage of this methodology is that the wind field properties at the Alaiz test site can be studied in more detail, based on the instantaneous measurements of the CW lidar. Within the project framework, the daily work has been done at CENER, with close guidance and support from UPM, including an exchange period of 1.5 months. During this exchange period, the mathematical analysis of the lidar sensing of the wind velocity was defined. Furthermore, the effects of non-uniform wind fields on the lidar bias were analytically derived, after making some assumptions for the sake of simplification. Moreover, there has been an important cooperation with DTU Wind Energy, where a secondment period of 1.5 months took place as well. During the secondment at DTU Wind Energy, an important introductory learning took place, including the design of an experimental measurement campaign in flat terrain, the site assessment study of obstacles and terrain conditions, data acquisition and processing, and the study and reporting of the measurement analysis.
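A sketch of the volume-averaging effect at the heart of the complex-terrain bias, under common modelling assumptions (a focused CW lidar is often modelled as measuring a Lorentzian-weighted average of the radial velocity along the beam; the flow profile and all parameter values below are invented for illustration, not the thesis' RANS fields):

```python
import numpy as np

def cw_lidar_radial_speed(v_r, focus=100.0, rayleigh_len=10.0, s_max=500.0, n=20001):
    """Lorentzian-weighted average of the radial velocity v_r(s) along the beam,
    a common model of the probe volume of a focused continuous-wave lidar."""
    s = np.linspace(0.0, s_max, n)
    w = rayleigh_len / (np.pi * (rayleigh_len**2 + (s - focus)**2))   # Lorentzian weights
    w /= np.trapz(w, s)                                              # normalize on [0, s_max]
    return np.trapz(w * v_r(s), s)

# Assumed non-uniform flow: linear speed-up along the beam, as over a hill crest
v_point = 8.0                                    # true radial speed at the focus [m/s]
v_r = lambda s: v_point + 0.01 * (s - 100.0)     # +1 cm/s per metre along the beam

v_meas = cw_lidar_radial_speed(v_r)
print(f"measured {v_meas:.3f} m/s vs {v_point:.3f} m/s at the focus "
      f"-> bias {v_meas - v_point:+.3f} m/s")
```

A correction of the kind the thesis proposes amounts to estimating this within-volume flow variation (from RANS simulations) and removing its contribution from the retrieved speed.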

Relevance:

30.00%

Publisher:

Abstract:

Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely the interneuron type and four features of axonal morphology. From this data set we learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained for each interneuron from the neuroscientists' classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels. Moreover, the introduced morphometric parameters are good predictors of the interneuron type and the four features of axonal morphology, and may thus serve as objective counterparts to the subjective, categorical axonal features.
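A deliberately simplified sketch of the consensus idea, not the authors' exact method: here each neuron's LBN is collapsed to a single distribution over the type variable (a real LBN encodes the joint distribution over all five variables), and the prediction for a new neuron is the similarity-weighted average of the distributions of its k nearest neighbours in morphometric space. All data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic training set: 50 neurons x 18 morphometric parameters,
# each labelled with a probability distribution over 4 interneuron types
X = rng.normal(size=(50, 18))
labels = rng.dirichlet(np.ones(4), size=50)      # per-neuron type distributions

def predict_label_distribution(x, k=5):
    """Similarity-weighted consensus of the k nearest neighbours' distributions."""
    d = np.linalg.norm(X - x, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + 1e-9)                     # inverse-distance weights
    return (w[:, None] * labels[nn]).sum(axis=0) / w.sum()

p = predict_label_distribution(rng.normal(size=18))
print("P(type) =", np.round(p, 3), "-> crisp prediction: type", int(np.argmax(p)))
```

Extracting the crisp label as the mode of the predicted distribution mirrors the abstract's "crisp (i.e., non-probabilistic) predictions".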

Relevance:

30.00%

Publisher:

Abstract:

Traffic flow time series data are usually high-dimensional and very complex. They are also sometimes imprecise and distorted due to malfunctions of the data collection sensors. Additionally, events like congestion caused by traffic accidents add more uncertainty to real-time traffic conditions, making traffic flow forecasting a complicated task. This article presents a new data preprocessing method targeting multidimensional time series with a very high number of dimensions and shows its application to real traffic flow time series from the California Department of Transportation (PEMS web site). The proposed method consists of three main steps. First, based on mTESL, a language for defining events in multidimensional time series, we identify a number of event types in the time series that correspond to either incorrect data or data with interference. Second, each event type is restored using an original method that combines real observations, locally forecasted values and historical data. Third, an exponential smoothing procedure is applied globally to eliminate noise interference and other random errors, so as to provide good-quality source data for future work.
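A hedged sketch of the three-step pipeline on a synthetic one-detector series: the mTESL event detection is replaced here by a simple range check, restoration blends the last valid local value with a historical profile, and a global exponential smoothing pass follows. The thresholds, blend weights and smoothing factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic 5-minute flow series [veh/5 min] with a sensor dropout and a spike
flow = 300 + 60 * np.sin(np.linspace(0, 6 * np.pi, 288)) + rng.normal(0, 10, 288)
hist = flow.copy()                    # stand-in for the historical daily profile
flow[100:104] = 0.0                   # dropout (incorrect data)
flow[200] = 2000.0                    # interference spike

# Step 1 (simplified event detection; the paper defines events with mTESL)
bad = (flow <= 0) | (flow > 1200)

# Step 2: restoration blending the last valid local value and the historical profile
for i in np.flatnonzero(bad):
    local = flow[i - 1] if not bad[i - 1] else hist[i]
    flow[i] = 0.5 * local + 0.5 * hist[i]

# Step 3: global exponential smoothing to suppress noise and random errors
alpha, smoothed = 0.3, flow.copy()
for i in range(1, len(flow)):
    smoothed[i] = alpha * flow[i] + (1 - alpha) * smoothed[i - 1]

print(f"restored {bad.sum()} samples; std after smoothing {smoothed.std():.1f} "
      f"vs before {flow.std():.1f}")
```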

Relevance:

30.00%

Publisher:

Abstract:

In the last decade, multi-sensor data fusion has become a broadly demanded discipline for achieving advanced solutions that can be applied in many real-world situations, either civil or military. In defence, accurate detection of all target objects is fundamental to maintaining situational awareness, locating threats in the battlefield, and identifying and protecting strategic own forces. Civil applications, such as traffic monitoring, have similar requirements in terms of object detection and reliable identification of incidents in order to ensure the safety of road users. Thanks to appropriate data fusion techniques, such systems can automatically exploit all relevant information from multiple sources to meet, for instance, mission needs or to support daily supervision operations. This paper focuses on the application of data fusion to active vehicle monitoring in a particular area of high-density traffic, and on how it is redirecting the research activities being carried out in the computer vision, signal processing and machine learning fields to improve the effectiveness of detection and tracking in ground surveillance scenarios in general. Specifically, our system fuses data at the feature level, extracted from a video camera and a laser scanner. In addition, a stochastic tracking stage, which introduces particle filters into the model to deal with uncertainty due to occlusions and to improve the previous detection output, is presented in this paper. It has been shown that this computer vision tracker contributes to detecting objects even under poor visual information. Finally, in the same way that humans are able to analyze both temporal and spatial relations among items in a scene in order to assign them a meaning, once the target objects have been correctly detected and tracked it is desirable that machines provide a trustworthy description of what is happening in the scene under surveillance. Accomplishing such an ambitious task requires a machine-learning-based hierarchical architecture able to extract and analyse behaviours at different abstraction levels. A real experimental testbed has been implemented for the evaluation of the proposed modular system. The scenario is a closed circuit where real traffic situations can be simulated. First results have shown the strength of the proposed system.
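A minimal bootstrap particle filter sketch for 2D position tracking with occlusion handling, in the spirit of (but much simpler than) the tracker described above: when the detector returns no measurement, the update and resampling steps are skipped and the particles propagate by the motion model alone, so the posterior widens during the occlusion. Motion and noise parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500                                        # number of particles
particles = rng.normal([0.0, 0.0], 1.0, (N, 2))
weights = np.full(N, 1.0 / N)

def step(measurement, q=0.5, r=1.0):
    """One predict/update/resample cycle; measurement=None during occlusion."""
    global particles, weights
    particles += rng.normal(0.0, q, particles.shape)       # motion model (random walk)
    if measurement is not None:                            # skip update when occluded
        d2 = np.sum((particles - measurement) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / r**2)       # Gaussian likelihood
        weights /= weights.sum()
        idx = rng.choice(N, N, p=weights)                  # multinomial resampling
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return weights @ particles                             # posterior mean estimate

truth = np.cumsum(rng.normal(0, 0.3, (30, 2)), axis=0)     # simulated vehicle path
for t, pos in enumerate(truth):
    z = None if 10 <= t < 15 else pos + rng.normal(0, 1.0, 2)   # frames 10-14 occluded
    est = step(z)
print("final estimate", np.round(est, 2), "vs truth", np.round(truth[-1], 2))
```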

Relevance:

30.00%

Publisher:

Abstract:

This doctoral thesis presents a comprehensive quality control procedure for photovoltaic plants, covering from the initial phase of estimating production expectations to the surveillance of the installation performance once in operation, which makes it possible to reduce the uncertainty associated with the plant behaviour, to increase its long-term reliability and to optimize its performance. The situation of photovoltaic technology has evolved enormously in recent years, making photovoltaic plants capable of producing energy at prices fully competitive with other energy sources. This increases the requirements on the performance and reliability of these installations. Meeting these requirements calls for adapting the quality control procedures currently applied, as well as developing new methods that provide a more complete knowledge of the state of the plants and allow them to be kept under surveillance over time. In addition, today's tight operating margins require production estimation methods with the lowest possible uncertainty at the design stage.

The quality control proposal presented in this work starts from previous protocols oriented to the commissioning phase of a photovoltaic installation and complements them with methods applicable to the operation phase, paying particular attention to the main problems that appear in the plants throughout their lifetime (hot spots, soiling impact, ageing...). It also incorporates a protocol for monitoring and analysing the performance of the installations from their monitoring data, ranging from checking the validity of the recorded data themselves to fault detection and diagnosis, which provides automated and detailed knowledge of the plants. This procedure is oriented towards facilitating operation and maintenance tasks, so that a high operational availability of the installation is guaranteed. Returning to the initial phase of calculating production expectations, the data recorded at the plants are used to improve the irradiation estimation methods, since irradiation is the component that adds the most uncertainty to the modelling process. The development and application of this quality control procedure have been carried out in 39 large photovoltaic plants, totalling 250 MW, distributed over several countries in Europe and Latin America.

ABSTRACT

This thesis presents a comprehensive quality control procedure to be applied in photovoltaic plants, covering from the initial phase of energy production estimation to the monitoring of the installation performance once it is in operation. This protocol allows reducing the uncertainty associated with the behaviour of photovoltaic plants and increases their long-term reliability, thereby optimizing their performance. The situation of photovoltaic technology has drastically evolved in recent years, making photovoltaic plants capable of producing energy at fully competitive prices in relation to other energy sources. This fact increases the requirements on the performance and reliability of these facilities. To meet this demand, it is necessary to adapt the quality control procedures and to develop new methods able to provide a more complete knowledge of the state of health of the plants, and able to maintain surveillance of them over time. In addition, the current meagre margins under which these installations operate require procedures capable of estimating energy production with the lowest possible uncertainty during the design phase. The quality control procedure presented in this work starts from previous protocols oriented to the commissioning phase of a photovoltaic system and completes them with procedures for the operation phase, paying particular attention to the major problems that arise in photovoltaic plants during their lifetime (hot spots, dust impact, ageing...). It also incorporates a protocol to control and analyse the installation performance directly from its monitoring data, ranging from checking the validity of the recorded data themselves to the detection and diagnosis of failures, which allows an automated and detailed knowledge of PV plant performance, oriented to facilitate the operation and maintenance of the installation and thus ensure a high operational availability of the system. Back at the initial stage of calculating production expectations, the data recorded in the photovoltaic plants are used to improve the methods for estimating the incident irradiation, which is the component that adds the most uncertainty to the modelling process. The development and implementation of the presented quality control procedure have been carried out in 39 large photovoltaic plants, with a total power of 250 MW, located in different European and Latin-American countries.
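A sketch of the kind of automated check such a monitoring protocol can apply, under common PV-monitoring conventions (the performance ratio PR = (E_AC / P_STC) / (H_POA / G_STC) is a standard figure of merit, but the thesis' actual checks, thresholds and data layout are not specified in the abstract; all values below are invented):

```python
import numpy as np

P_STC = 1000.0   # plant nominal power [kWp] (assumed)
G_STC = 1.0      # reference irradiance [kW/m^2]

# Hypothetical daily monitoring records: (AC energy [kWh], plane-of-array irradiation [kWh/m^2])
records = np.array([[4800.0, 6.1],
                    [4950.0, 6.3],
                    [ -10.0, 5.9],    # invalid record (negative energy)
                    [3100.0, 5.8],    # underperforming day
                    [5020.0, 6.4]])

# Data validity check: discard physically impossible records
valid = (records[:, 0] >= 0) & (records[:, 1] > 0)
energy, irr = records[valid, 0], records[valid, 1]

pr = (energy / P_STC) / (irr / G_STC)          # daily performance ratio
print("daily PR:", np.round(pr, 3))
print("flagged days (PR < 0.70):", np.flatnonzero(pr < 0.70))
```

Chaining simple validity filters and threshold-based flags like these is one way the surveillance of many plants can be automated and fed into fault diagnosis.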