993 results for Systematic errors


Relevance: 60.00%

Abstract:

The purpose of this Master's thesis is to develop a computer program for calculating the shell-side pressure drop of a shell-and-tube heat exchanger. The program can be used during the sizing stage to verify that the shell-side pressure drop does not exceed the allowed limits, and it complements existing sizing programs. The thesis deals only with shell-and-tube heat exchangers used in steam power plant processes. The literature part explains the principle of the steam power plant process and the shell-and-tube heat exchangers used in it, and presents the construction, general design, and thermal-hydraulic sizing of such exchangers. The equations used in the pressure drop calculation, presented in the chapter on thermal-hydraulic sizing, are based on the Bell-Delaware method. The pressure drop program is implemented with Microsoft Excel spreadsheets and the Visual Basic programming language. The calculation is based on single-phase flow on the shell side of a shell-and-tube heat exchanger equipped with segmental baffles. The pressure drop in the condenser section of the exchanger is assumed to be negligible, so the total pressure drop arises in the steam cooler and the condensate cooler. The developed program is designed especially for calculating the pressure drop formed in the condensate cooler. Pressure drop values calculated with the program have been compared with values measured from a real heat exchanger. The calculated values agree well with the measured ones, and no systematic error appears in the results. The program is ready to be used as a sizing tool for shell-and-tube heat exchangers. Based on the thesis, proposals have been made for further development of the program.
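
The thesis program itself is an Excel/Visual Basic tool and is not reproduced here; the sketch below only illustrates, in Python, the overall structure of a Bell-Delaware shell-side pressure-drop calculation for a segmentally baffled exchanger (crossflow, window, and end-zone terms). The ideal friction factor and the bypass/leakage correction factors are assumed to come from the usual Bell-Delaware correlations, and all names are illustrative rather than taken from the thesis.

```python
def shell_side_pressure_drop(m_dot, rho, S_m, S_w, N_b, N_tcc, N_tcw,
                             f_ideal, R_b, R_l, R_s):
    """Sketch of the Bell-Delaware shell-side pressure drop (Pa), single-phase flow.

    m_dot   shell-side mass flow rate (kg/s)
    rho     fluid density (kg/m^3)
    S_m     crossflow area at the shell centreline (m^2)
    S_w     net window flow area (m^2)
    N_b     number of baffles
    N_tcc   tube rows crossed in one central baffle compartment
    N_tcw   effective tube rows crossed in one baffle window
    f_ideal ideal tube-bank friction factor (from the Bell-Delaware correlations;
            the numeric prefactor below depends on how f is defined)
    R_b, R_l, R_s  bypass, leakage and unequal-end-spacing correction factors
    """
    G = m_dot / S_m                                   # crossflow mass velocity
    dp_ideal = 2.0 * f_ideal * N_tcc * G**2 / rho     # one ideal crossflow section

    G_w = m_dot / (S_m * S_w) ** 0.5                  # window mass velocity
    dp_window = (2.0 + 0.6 * N_tcw) * G_w**2 / (2.0 * rho)  # turbulent window loss

    dp_cross = (N_b - 1) * dp_ideal * R_b * R_l                     # interior crossflow zones
    dp_windows = N_b * dp_window * R_l                              # all baffle windows
    dp_ends = 2.0 * dp_ideal * (1.0 + N_tcw / N_tcc) * R_b * R_s    # inlet/outlet zones

    return dp_cross + dp_windows + dp_ends
```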

Relevance: 60.00%

Abstract:

We present a detailed evaluation of the seasonal performance of the Community Multiscale Air Quality (CMAQ) modelling system and the PSU/NCAR meteorological model coupled to a new Numerical Emission Model for Air Quality (MNEQA). The combined system simulates air quality at fine resolution (3 km horizontal, 1 h temporal) in north-eastern Spain, where ozone pollution problems are frequent. An extensive database compiled over two periods, May to September of 2009 and 2010, is used to evaluate the meteorological simulations and the chemical outputs. Our results indicate that the model accurately reproduces the hourly ozone surface concentrations, as well as the 1-h and 8-h maxima, measured at the air quality stations, as the statistical values fall within the EPA and EU recommendations. However, to further improve forecast accuracy, three simple bias-adjustment techniques, mean subtraction (MS), ratio adjustment (RA), and hybrid forecast (HF), each based on 10 days of available comparisons, are applied. The results show that the MS technique performed better than RA or HF, although all the bias-adjustment techniques significantly reduce the systematic errors in ozone forecasts.
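
As an illustration of how such running bias adjustments work, the sketch below applies MS and RA using the previous 10 days of paired forecasts and observations; the hybrid forecast is shown in one plausible additive form, which may differ from the exact formulation used in the study. All names are illustrative.

```python
import numpy as np

def mean_subtraction(f_today, f_past, o_past):
    """MS: remove the mean forecast bias of the previous N days (here N = 10)."""
    return f_today - np.mean(np.asarray(f_past) - np.asarray(o_past))

def ratio_adjustment(f_today, f_past, o_past):
    """RA: rescale the forecast by the observed/forecast ratio of the previous N days."""
    return f_today * np.mean(o_past) / np.mean(f_past)

def hybrid_forecast(f_today, f_yesterday, o_yesterday):
    """HF, assumed form only: yesterday's observation plus the forecast change
    since yesterday. The paper's definition may differ."""
    return o_yesterday + (f_today - f_yesterday)
```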

Relevance: 60.00%

Abstract:

The visual angle projected by an object (e.g. a ball) on the retina depends on the object's size and distance. Without further information, however, the visual angle is ambiguous with respect to size and distance, because equal visual angles can be produced by a big ball at a longer distance and a smaller one at a correspondingly shorter distance. Failure to recover the true 3D structure of the object (e.g. a ball's physical size) causing the ambiguous retinal image can lead to a timing error when catching the ball. Two opposing views currently prevail on how people resolve this ambiguity when estimating time to contact. One explanation disputes any inference about what causes the retinal image (i.e. the necessity to recover this 3D structure) and instead favors a direct analysis of optic flow. In contrast, the second view suggests that action timing could rather be based on an estimate of the 3D structure of the scene. With the latter, systematic errors are predicted whenever the inferred 3D structure fails to reveal the underlying cause of the retinal image. Here we show that hand closure in catching virtual balls is triggered by visual angle, using an assumption of a constant ball size. As a consequence of this assumption, hand closure starts when the ball is at a similar distance across trials. From that distance on, the remaining arrival time therefore depends on the ball's speed. In order to time the catch successfully, closing time was coupled with the ball's speed during the motor phase. This strategy led to increased precision in catching, but at the cost of committing systematic errors.
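
A small worked sketch of the geometry behind this account: if hand closure is triggered when the ball's visual angle reaches a threshold chosen for an assumed ball size, then the triggering distance, and hence the time remaining at the trigger, depends on the ball's true size and approach speed. The numbers below are purely illustrative, not the experiment's parameters.

```python
import math

def visual_angle(diameter, distance):
    """Angle (rad) subtended at the eye by a ball of given diameter and distance."""
    return 2.0 * math.atan(diameter / (2.0 * distance))

def trigger_distance(true_diameter, threshold_angle):
    """Distance at which the ball's actual visual angle reaches the threshold."""
    return true_diameter / (2.0 * math.tan(threshold_angle / 2.0))

# Threshold calibrated for an assumed 6 cm ball so that closure starts 200 ms
# before contact at 5 m/s (illustrative values only).
assumed_d, v_ref, t_close = 0.06, 5.0, 0.2
theta_thr = visual_angle(assumed_d, v_ref * t_close)

for true_d in (0.04, 0.06, 0.08):          # smaller, assumed, larger ball
    for v in (4.0, 5.0, 6.0):              # approach speed (m/s)
        d_trig = trigger_distance(true_d, theta_thr)
        time_left = d_trig / v             # actual time remaining at hand closure
        print(f"size {true_d:.2f} m, speed {v:.1f} m/s -> "
              f"closure starts {time_left*1000:.0f} ms before contact")
```

For balls of the assumed size the trigger distance is the same on every trial, so the remaining time varies only with speed; a ball of a different physical size shifts the trigger distance and produces a systematic timing error.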

Relevance: 60.00%

Abstract:

The conventional curriculum of undergraduate Analytical Chemistry courses emphasizes the introduction of techniques, methods, and procedures used for instrumental analysis. All these concepts must be integrated into a sound conceptual framework to allow students to make appropriate decisions. Method calibration is one of the most critical concepts to be grasped, since most analytical techniques depend on it for quantitative analysis. The conceptual understanding of calibration is not trivial for undergraduate students. External calibration is widely discussed during instrumental analysis courses; however, an understanding of its limitations in correcting certain systematic errors does not follow directly from typical laboratory examples. A conceptual understanding of the other calibration methods (standard addition, matrix matching, and internal standard) is therefore imperative. The aim of this work is to present a simple experiment using grains (beans, corn, and chickpeas) to explore the different types of calibration methods.
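
To illustrate why the choice of calibration method matters, the sketch below contrasts external calibration with the method of standard additions, which compensates a proportional (multiplicative) matrix effect by extrapolating the spiked sample's own response line to zero signal; neither approach, however, corrects a constant additive interference. Function names are illustrative.

```python
import numpy as np

def external_calibration(std_conc, std_signal, sample_signal):
    """Interpolate the sample signal on a line fitted to external standards."""
    slope, intercept = np.polyfit(std_conc, std_signal, 1)
    return (sample_signal - intercept) / slope

def standard_addition(added_conc, signal):
    """Fit signal vs. concentration added to aliquots of the sample itself.
    The sample concentration is the magnitude of the x-intercept."""
    slope, intercept = np.polyfit(added_conc, signal, 1)
    return intercept / slope
```

If the sample matrix changes the sensitivity (the slope), the external-calibration result is biased by the ratio of the two sensitivities, while the standard-addition estimate is not, because the fit is performed in the sample matrix itself.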

Relevance: 60.00%

Abstract:

The correct quantification of blast, caused by the fungus Magnaporthe oryzae, on wheat (Triticum aestivum) spikes is an important component in understanding the development of this disease with a view to its control. Visual quantification based on a diagrammatic scale can be a practical and efficient strategy, one that has already proven useful for several plant pathosystems, including diseases affecting wheat spikes such as glume blotch and Fusarium head blight. Spikes showing different disease severity values were collected from a wheat field with the aim of developing a diagrammatic scale to quantify blast severity on wheat spikes. The spikes were photographed, and blast severity was determined using the software ImageJ. A diagrammatic scale was developed with the following disease severity values: 3.7, 7.5, 21.4, 30.5, 43.8, 57.3, 68.1, 86.0, and 100.0%. An asymptomatic spike was added to the scale. Scale validation was performed by eight people who estimated blast severity using digitalized images of 40 wheat spikes. The precision and the accuracy of the evaluations varied according to the rater (0.82 ...). Systematic errors of overestimation or underestimation of the disease were not found among the raters, demonstrating that the developed scale is suitable for evaluating blast on wheat spikes.
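
Validation of a diagrammatic scale is commonly summarised by regressing each rater's estimates on the image-derived severities: an intercept different from 0 indicates a constant systematic error, a slope different from 1 a proportional one, and the correlation reflects precision. A minimal sketch of such a check, not the authors' actual analysis script:

```python
from scipy import stats

def rater_check(actual, estimated):
    """Accuracy/precision summary for one rater.

    actual:    severities (%) measured on the images (e.g. with ImageJ)
    estimated: the rater's visual estimates (%) for the same spikes
    """
    fit = stats.linregress(actual, estimated)
    return {
        "intercept": fit.intercept,   # ~0 -> no constant over/underestimation
        "slope": fit.slope,           # ~1 -> no proportional error
        "r2": fit.rvalue ** 2,        # precision of the estimates
    }
```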

Relevance: 60.00%

Abstract:

Affiliation: Département de Biochimie, Université de Montréal

Relevance: 60.00%

Abstract:

This study tests the suitability of SAR images, at medium and high resolution, for characterizing land-cover types in an urban environment. It is based on textural approaches using second-order statistics. More specifically, we look for the texture parameters that are most relevant for discriminating urban objects. To this end, Radarsat-1 images in fine mode with HH polarization and Radarsat-2 images in fine mode with dual and quad polarization and in ultra-fine mode with HH polarization were used. The land-cover classes sought were dense built-up areas, medium-density built-up areas, low-density built-up areas, industrial and institutional buildings, low-density vegetation, dense vegetation, and water. The nine texture parameters analysed were grouped into families according to their mathematical definition. The similarity/dissimilarity parameters comprise Homogeneity, Contrast, Similarity, and Dissimilarity. The disorder parameters are Entropy and Angular Second Moment. Standard Deviation and Correlation are dispersion parameters, and the Mean forms a family of its own. The experiments show that some combinations of texture parameters from different families give very good classification results, whereas other combinations of texture parameters with similar mathematical definitions produce poorer results. Moreover, although using several texture parameters improves the classifications, performance levels off beyond three parameters. Despite the good performance of this approach based on the complementarity of texture parameters, systematic errors due to cardinal effects remain in the classifications. To address this problem, a radiometric compensation model based on the radar cross-section (RCS) was developed. A radar simulation based on a digital surface model of the area made it possible to extract the backscattering zones of buildings and to analyse the corresponding backscatter. A rule for compensating cardinal effects, based solely on the responses of objects as a function of their orientation with respect to the plane of illumination of the radar beam, was devised. Applying this algorithm to RADARSAT-1 and RADARSAT-2 images in HH, HV, VH, and VV polarizations yielded considerable gains and eliminated most of the classification errors due to cardinal effects.
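
The texture parameters listed above are the standard grey-level co-occurrence matrix (GLCM) statistics. A minimal sketch of extracting them from one image window, assuming scikit-image and a window already quantized to a small number of grey levels (function and parameter names are illustrative, not the study's code):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # spelled 'greycomatrix' in older scikit-image

def glcm_features(window, levels=32, distance=1, angle=0.0):
    """Second-order (GLCM) texture features for one image window.

    `window` is a 2-D integer array of backscatter values already quantized
    to `levels` grey levels (e.g. with np.digitize).
    """
    glcm = graycomatrix(window, distances=[distance], angles=[angle],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                      # normalized co-occurrence matrix
    i, _ = np.indices(p.shape)

    feats = {name: graycoprops(glcm, name)[0, 0]
             for name in ("homogeneity", "contrast", "dissimilarity",
                          "energy", "correlation", "ASM")}
    feats["entropy"] = -np.sum(p[p > 0] * np.log(p[p > 0]))      # disorder family
    feats["mean"] = np.sum(i * p)                                 # GLCM mean
    feats["std"] = np.sqrt(np.sum((i - feats["mean"]) ** 2 * p))  # dispersion family
    return feats
```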

Relevance: 60.00%

Abstract:

Multiconfiguration relativistic Dirac-Fock (MCDF) values have been computed for the first four ionization potentials (IPs) of element 104 (unnilquadium) and of the other group 4 elements (Ti, Zr, and Hf). Factors were calculated that allowed correction of the systematic errors between the MCDF IPs and the experimental IPs. Single "experimental" IPs evaluated in eV (to ±0.1 eV) for element 104 are: [104(0), 6.5]; [104(1+), 14.8]; [104(2+), 23.8]; [104(3+), 31.9]. Multiple experimental IPs evaluated in eV for element 104 are: [(0-2+), 21.2±0.2]; [(0-3+), 45.1±0.2]; [(0-4+), 76.8±0.3]. Our MCDF results track 11 of the 12 experimental single IPs studied for group 4 atoms and ions. The exception is Hf(2+). We submit our calculated IP of 22.4 ± 0.2 eV as much more accurate than the value of 23.3 eV derived from experiment.
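
The correction-factor idea can be sketched as follows: the systematic offset between MCDF and experimental IPs for the lighter group 4 homologues at a given ionization stage is used to correct the MCDF value for element 104. A constant-offset correction is only one plausible form of the factors mentioned in the abstract; the paper's exact scheme may differ, and no data are hard-coded here.

```python
import numpy as np

def corrected_ip(mcdf_lighter, exp_lighter, mcdf_element104):
    """Correct an MCDF ionization potential for systematic error.

    mcdf_lighter, exp_lighter: calculated and experimental IPs (eV) for the
    lighter group 4 homologues (e.g. Ti, Zr, Hf) at the same ionization stage.
    mcdf_element104: the MCDF IP of element 104 at that stage.
    Returns the corrected IP and a rough spread of the correction.
    """
    diff = np.asarray(exp_lighter) - np.asarray(mcdf_lighter)
    return mcdf_element104 + np.mean(diff), np.std(diff, ddof=1)
```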

Relevance: 60.00%

Abstract:

The impact of systematic model errors on a coupled simulation of the Asian Summer monsoon and its interannual variability is studied. Although the mean monsoon climate is reasonably well captured, systematic errors in the equatorial Pacific mean that the monsoon-ENSO teleconnection is rather poorly represented in the GCM. A system of ocean-surface heat flux adjustments is implemented in the tropical Pacific and Indian Oceans in order to reduce the systematic biases. In this version of the GCM, the monsoon-ENSO teleconnection is better simulated, particularly the lag-lead relationships in which weak monsoons precede the peak of El Nino. In part this is related to changes in the characteristics of El Nino, which has a more realistic evolution in its developing phase. A stronger ENSO amplitude in the new model version also feeds back to further strengthen the teleconnection. These results have important implications for the use of coupled models for seasonal prediction of systems such as the monsoon, and suggest that some form of flux correction may have significant benefits where model systematic error compromises important teleconnections and modes of interannual variability.

Relevance: 60.00%

Abstract:

In the Radiative Atmospheric Divergence Using ARM Mobile Facility GERB and AMMA Stations (RADAGAST) project we calculate the divergence of radiative flux across the atmosphere by comparing fluxes measured at each end of an atmospheric column above Niamey, in the African Sahel region. The combination of broadband flux measurements from geostationary orbit and the deployment for over 12 months of a comprehensive suite of active and passive instrumentation at the surface eliminates a number of sampling issues that could otherwise affect divergence calculations of this sort. However, one sampling issue that challenges the project is the fact that the surface flux data are essentially measurements made at a point, while the top-of-atmosphere values are taken over a solid angle that corresponds to an area at the surface of some 2500 km². Variability of cloud cover and aerosol loading in the atmosphere means that the downwelling fluxes, even when averaged over a day, will not be an exact match to the area-averaged value over that larger area, although we might expect them to be an unbiased estimate thereof. The heterogeneity of the surface, for example fixed variations in albedo, further means that there is a likely systematic difference in the corresponding upwelling fluxes. In this paper we characterize and quantify this spatial sampling problem. We bound the root-mean-square error in the downwelling fluxes by exploiting a second set of surface flux measurements from a site that was run in parallel with the main deployment. The differences between the two sets of fluxes lead us to an upper bound on the sampling uncertainty, and their correlation leads to another, which is probably optimistic as it requires certain other conditions to be met. For the upwelling fluxes we use data products from a number of satellite instruments to characterize the relevant heterogeneities and so estimate the systematic effects that arise from the flux measurements having to be taken at a single point. The sampling uncertainties vary with the season, being higher during the monsoon period. We find that the sampling errors for the daily average flux are small for the shortwave irradiance, generally less than 5 W m−2, under relatively clear skies, but these increase to about 10 W m−2 during the monsoon. For the upwelling fluxes, again taking daily averages, systematic errors are of order 10 W m−2 as a result of albedo variability. The uncertainty on the longwave component of the surface radiation budget is smaller than that on the shortwave component, in all conditions, but a bias of 4 W m−2 is calculated to exist in the surface-leaving longwave flux.
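
As a rough sketch of the two-site argument, if the two stations are assumed to have uncorrelated point-sampling errors of equal variance with respect to the area mean, the RMS of their daily-mean flux differences scaled by 1/sqrt(2) gives a simple estimate of the single-site sampling error; the bounds in the paper are derived more carefully, so the code below is only illustrative.

```python
import numpy as np

def sampling_error_estimate(flux_site1, flux_site2):
    """Single-site sampling error estimated from two co-deployed stations.

    flux_site1, flux_site2: daily-mean downwelling fluxes (W m-2) from the
    main and secondary sites. Assumes the two point-sampling errors about
    the area-mean flux are uncorrelated and of equal variance, in which case
    Var(F1 - F2) = 2 * sigma^2.
    """
    diff = np.asarray(flux_site1) - np.asarray(flux_site2)
    return np.sqrt(np.mean(diff ** 2) / 2.0)
```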

Relevance: 60.00%

Abstract:

We describe a new methodology for comparing satellite radiation budget data with a numerical weather prediction (NWP) model. This is applied to data from the Geostationary Earth Radiation Budget (GERB) instrument on Meteosat-8. The methodology brings together, in near-real time, GERB broadband shortwave and longwave fluxes with simulations based on analyses produced by the Met Office global NWP model. Results for the period May 2003 to February 2005 illustrate the progressive improvements in the data products as various initial problems were resolved. In most areas the comparisons reveal systematic errors in the model's representation of surface properties and clouds, which are discussed elsewhere. However, for clear-sky regions over the oceans the model simulations are believed to be sufficiently accurate to allow the quality of the GERB fluxes themselves to be assessed and any changes in time of the performance of the instrument to be identified. Using model and radiosonde profiles of temperature and humidity as input to a single-column version of the model's radiation code, we conduct sensitivity experiments which provide estimates of the expected model errors over the ocean of about ±5–10 W m−2 in clear-sky outgoing longwave radiation (OLR) and ±0.01 in clear-sky albedo. For the more recent data the differences between the observed and modeled OLR and albedo are well within these error estimates. The close agreement between the observed and modeled values, particularly for the most recent period, illustrates the value of the methodology. It also contributes to the validation of the GERB products and increases confidence in the quality of the data, prior to their release.

Relevance: 60.00%

Abstract:

The overly diverse representation of ENSO in coupled GCMs limits our ability to describe future changes in its properties. Several studies have pointed to the key role of atmosphere feedbacks in contributing to this diversity. These feedbacks are analyzed here in two simulations of a coupled GCM that differ only in the parameterization of deep atmospheric convection and the associated clouds. Using the Kerry Emanuel (KE) scheme in L'Institut Pierre-Simon Laplace Coupled Model, version 4 (IPSL CM4; KE simulation), ENSO has about the right amplitude, whereas it is almost suppressed when using the Tiedtke (TI) scheme. Quantifying both the dynamical Bjerknes feedback and the heat flux feedback in KE, TI, and the corresponding Atmospheric Model Intercomparison Project (AMIP) atmosphere-only simulations, it is shown that the suppression of ENSO in TI is due to a doubling of the damping via the heat flux feedback. Because the Bjerknes positive feedback is weak in both simulations, the KE simulation exhibits the right ENSO amplitude owing to an error compensation between a too weak heat flux feedback and a too weak Bjerknes feedback. In TI, the heat flux feedback strength is closer to estimates from observations and reanalysis, leading to ENSO suppression. The shortwave heat flux feedback and, to a lesser extent, the latent heat flux feedback are the dominant contributors to the change between TI and KE. The shortwave heat flux feedback differences are traced back to a modified distribution of the large-scale regimes of deep convection (negative feedback) and subsidence (positive feedback) in the east Pacific. These are further associated with the model's systematic errors. It is argued that a systematic and detailed evaluation of atmosphere feedbacks during ENSO is a necessary step to fully understand its simulation in coupled GCMs.
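
The two feedbacks discussed here are conventionally quantified as regression slopes over an eastern-Pacific box: the heat flux feedback as the slope of net surface heat flux anomalies against SST anomalies (negative for damping), and the Bjerknes feedback via the zonal wind stress response to SST anomalies. A minimal sketch of such a slope estimate, assuming monthly anomaly time series are already available (names illustrative):

```python
import numpy as np

def feedback_slope(sst_anom, response_anom):
    """Least-squares regression slope of a response field on SST anomalies.

    sst_anom:      monthly SST anomalies averaged over, e.g., the Nino-3 box (K)
    response_anom: anomalies of the responding field, e.g. net surface heat flux
                   (W m-2) for the heat flux feedback, or zonal wind stress
                   (N m-2) for the wind (Bjerknes) component
    """
    slope, _ = np.polyfit(np.asarray(sst_anom), np.asarray(response_anom), 1)
    return slope   # W m-2 K-1 or N m-2 K-1
```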

Relevance: 60.00%

Abstract:

The climatology of the OPA/ARPEGE-T21 coupled general circulation model (GCM) is presented. The atmosphere GCM has a T21 spectral truncation and the ocean GCM has a 2°×1.5° average resolution. A 50-year climatic simulation is performed using the OASIS coupler, without flux correction techniques. The mean state and seasonal cycle for the last 10 years of the experiment are described and compared to the corresponding uncoupled experiments and to climatology when available. The model reasonably simulates most of the basic features of the observed climate. Energy budgets and transports in the coupled system, of importance for climate studies, are assessed and prove to be within available estimates. After an adjustment phase of a few years, the model stabilizes around a mean state in which the tropics are warm and resemble a permanent ENSO, the Southern Ocean warms, and almost no sea ice is left in the Southern Hemisphere. The atmospheric circulation becomes more zonal and symmetric with respect to the equator. Once those systematic errors are established, the model shows little secular drift, the small remaining trends being mainly associated with horizontal physics in the ocean GCM. The stability of the model is shown to be related to qualities already present in the uncoupled GCMs used, namely a balanced radiation budget at the top of the atmosphere and a tight ocean thermocline.

Relevance: 60.00%

Abstract:

In this paper we consider bilinear forms of matrix polynomials and show that these polynomials can be used to construct solutions for the problems of solving systems of linear algebraic equations, matrix inversion, and finding extremal eigenvalues. An Almost Optimal Monte Carlo (MAO) algorithm for computing bilinear forms of matrix polynomials is presented. Results for the computational cost of a balanced algorithm for computing the bilinear form of a matrix power are presented, i.e., an algorithm for which the probability and systematic errors are of the same order, and this cost is compared with that of a corresponding deterministic method.
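
As a sketch of the general technique (not the paper's exact algorithm or its cost analysis), the bilinear form v^T A^k h can be estimated by random walks whose initial and transition probabilities are proportional to |v_i| and |a_ij|, the "almost optimal" permissible densities:

```python
import numpy as np

def mc_bilinear_form(v, A, h, k, n_chains=10_000, rng=None):
    """Monte Carlo estimate of the bilinear form v^T A^k h (MAO-style sampling).

    Initial states are drawn with probability proportional to |v_i| and
    transitions with probability proportional to |a_ij|; the walk weight
    carries the signs and normalizations so the estimator is unbiased.
    Assumes A has no all-zero rows. A sketch under stated assumptions.
    """
    rng = np.random.default_rng(rng)
    v, A, h = map(np.asarray, (v, A, h))
    n = len(v)

    p0 = np.abs(v) / np.abs(v).sum()        # initial density ~ |v_i|
    row_abs = np.abs(A)
    P = row_abs / row_abs.sum(axis=1)[:, None]   # transition density ~ |a_ij|

    total = 0.0
    for _ in range(n_chains):
        i = rng.choice(n, p=p0)
        w = v[i] / p0[i]                    # = sign(v_i) * ||v||_1
        for _ in range(k):
            j = rng.choice(n, p=P[i])
            w *= A[i, j] / P[i, j]          # = sign(a_ij) * (row i absolute sum)
            i = j
        total += w * h[i]
    return total / n_chains
```

For a small test matrix the estimate can be checked against the direct product `v @ np.linalg.matrix_power(A, k) @ h`; the balanced algorithm of the abstract additionally chooses the sample size so that this statistical error and the systematic (truncation) error of the polynomial approximation are of the same order.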

Relevance: 60.00%

Abstract:

We describe the HadGEM2 family of climate configurations of the Met Office Unified Model, MetUM. The concept of a model "family" comprises a range of specific model configurations incorporating different levels of complexity but with a common physical framework. The HadGEM2 family of configurations includes atmosphere and ocean components, with and without a vertical extension to include a well-resolved stratosphere, and an Earth-System (ES) component which includes dynamic vegetation, ocean biology and atmospheric chemistry. The HadGEM2 physical model includes improvements designed to address specific systematic errors encountered in the previous climate configuration, HadGEM1, namely Northern Hemisphere continental temperature biases and tropical sea surface temperature biases and poor variability. Targeting these biases was crucial in order that the ES configuration could represent important biogeochemical climate feedbacks. Detailed descriptions and evaluations of particular HadGEM2 family members are included in a number of other publications, and the discussion here is limited to a summary of the overall performance using a set of model metrics which compare the way in which the various configurations simulate present-day climate and its variability.