971 results for Round Robin Database Measurement Archive
Abstract:
Numerical predictions produced by the SMARTFIRE fire field model are compared with experimental data. The predictions consist of gas temperatures at several locations within the compartment over a 60 min period. The test fire, produced by a burning wood crib, attained a maximum heat release rate of approximately 11 MW. The fire is intended to represent a non-spreading fire (i.e. a single fuel source) in a moderately sized ventilated room. The experimental data formed part of the CIB Round Robin test series. Two simulations are produced, one involving a relatively coarse mesh and the other a finer mesh. While the SMARTFIRE simulations made use of a simple volumetric heat release rate model, both simulations were found capable of reproducing the overall qualitative results. Both simulations tended to overpredict the measured temperatures. However, the finer mesh simulation was better able to reproduce the qualitative features of the experimental data. The maximum recorded experimental temperature (1214 °C after 39 min) was over-predicted in the fine mesh simulation by 12%. © 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
This document provides details of the transfer of the Norman Holme archive data held in the National Marine Biological Library onto a modern database, specifically Marine Recorder. A key part of the creation of the database was the retrieval of a large amount of information recorded in field notebooks and on loosely-bound sheets of paper. As this work involved amending, interpreting and updating the available information, it was felt that an accurate record of this process should exist to allow scientists of the future to clearly link the modern database to the archive material. This document also provides details of external information sources that were used to enhance and qualify the historical interpretation, such as estimating volumes and species abundances.
Abstract:
Satellite-derived remote-sensing reflectance (Rrs) can be used for mapping biogeochemically relevant variables, such as the chlorophyll concentration and the Inherent Optical Properties (IOPs) of the water, at global scale for use in climate-change studies. Prior to generating such products, suitable algorithms have to be selected that are appropriate for the purpose. Algorithm selection needs to account for both qualitative and quantitative requirements. In this paper we develop an objective methodology designed to rank the quantitative performance of a suite of bio-optical models. The objective classification is applied using the NASA bio-Optical Marine Algorithm Dataset (NOMAD). Using in situ Rrs as input to the models, the performance of eleven semi-analytical models, as well as five empirical chlorophyll algorithms and an empirical diffuse attenuation coefficient algorithm, is ranked for spectrally-resolved IOPs, chlorophyll concentration and the diffuse attenuation coefficient at 489 nm. The sensitivity of the objective classification and the uncertainty in the ranking are tested using a Monte-Carlo approach (bootstrapping). Results indicate that the performance of the semi-analytical models varies depending on the product and wavelength of interest. For chlorophyll retrieval, empirical algorithms perform better than semi-analytical models, in general. The performance of these empirical models reflects either their immunity to scale errors or instrument noise in Rrs data, or simply that the data used for model parameterisation were not independent of NOMAD. Nonetheless, uncertainty in the classification suggests that the performance of some semi-analytical algorithms at retrieving chlorophyll is comparable with the empirical algorithms. For phytoplankton absorption at 443 nm, some semi-analytical models also perform with similar accuracy to an empirical model. We discuss the potential biases, limitations and uncertainty in the approach, as well as additional qualitative considerations for algorithm selection for climate-change studies. Our classification has the potential to be routinely implemented, such that the performance of emerging algorithms can be compared with existing algorithms as they become available. In the long-term, such an approach will further aid algorithm development for ocean-colour studies.
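As a rough illustration of the kind of objective, bootstrap-based ranking described above, the sketch below scores a set of candidate algorithms by the RMSE of their log10 predictions against in situ match-ups and resamples the match-ups to estimate how stable the resulting ranks are. The function name, the RMSE metric and the data layout are illustrative assumptions, not the paper's actual scoring scheme.

```python
import numpy as np

def bootstrap_rank(model_predictions, in_situ, n_boot=1000, seed=0):
    """Rank candidate algorithms by RMSE of log10 values against in situ data,
    bootstrapping the match-ups to quantify uncertainty in the ranking.
    model_predictions: dict mapping model name -> array of predicted values
    in_situ: array of co-located in situ values (positive, e.g. chlorophyll)."""
    rng = np.random.default_rng(seed)
    names = list(model_predictions)
    y = np.log10(in_situ)
    n = len(y)
    rank_counts = {name: np.zeros(len(names)) for name in names}
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # resample match-ups with replacement
        scores = {name: np.sqrt(np.mean((np.log10(pred[idx]) - y[idx]) ** 2))
                  for name, pred in model_predictions.items()}
        for rank, name in enumerate(sorted(scores, key=scores.get)):
            rank_counts[name][rank] += 1         # tally how often each model attains each rank
    # return, for each model, the fraction of bootstrap replicates at each rank
    return {name: counts / n_boot for name, counts in rank_counts.items()}
```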
Abstract:
The university course timetabling problem involves assigning a given number of events into a limited number of timeslots and rooms under a given set of constraints; the objective is to satisfy the hard constraints (essential requirements) and minimize the violation of soft constraints (desirable requirements). In this study we employed a Dual-sequence Simulated Annealing (DSA) algorithm as an improvement algorithm. The Round Robin (RR) algorithm is used to control the selection of neighbourhood structures within DSA. The performance of our approach is tested over eleven benchmark datasets. Experimental results show that our approach is able to generate competitive results when compared with other state-of-the-art techniques.
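To illustrate how a round-robin rule can drive the choice of neighbourhood structure inside an annealing-based improvement loop, consider the minimal sketch below. It is a generic single-sequence simulated annealing skeleton, not the paper's Dual-sequence variant; the neighbourhood moves, cost function and cooling parameters are placeholders.

```python
import math
import random
from itertools import cycle

def round_robin_sa(initial, neighbourhoods, cost, iters=10000, t0=10.0, alpha=0.9995):
    """Generic annealing loop in which the neighbourhood structure applied at
    each iteration is chosen in round-robin (cyclic) order.
    neighbourhoods: list of callables, each mapping a solution to a neighbour."""
    current = best = initial
    temp = t0
    moves = cycle(neighbourhoods)              # round-robin rotation over the move operators
    for _ in range(iters):
        move = next(moves)                     # next neighbourhood structure in the rotation
        candidate = move(current)              # generate a neighbour with that structure
        delta = cost(candidate) - cost(current)
        # always accept improving moves; accept worsening moves with Boltzmann probability
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if cost(current) < cost(best):
                best = current
        temp *= alpha                          # geometric cooling schedule
    return best
```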
Abstract:
"SAND87-0891, R1 and RD."
Abstract:
An international round robin study of the viscosity measurements and aging of fast pyrolysis bio-oil has been undertaken recently, and this work is an outgrowth from that effort. Two bio-oil samples were distributed to two laboratories for accelerated aging tests and to three laboratories for long-term aging studies. The accelerated aging test was defined as the change in viscosity of a sealed sample of bio-oil held for 24 h at 80 °C. The test was repeated 10 times over consecutive days to determine the intra-laboratory repeatability of the method. Other bio-oil samples were placed in storage at three temperatures (21, 5 and −17 °C) for a period of up to 1 year to evaluate the change in viscosity. The variation in the results of the accelerated aging test was shown to be low within a given laboratory. The long-term aging studies showed that storage of a filtered bio-oil under refrigeration can minimize the change in viscosity. The accelerated aging test gave a measure of change similar to that of 6-12 months of storage at room temperature for a filtered bio-oil. Filtration of solids was identified as a key contributor to improving the stability of the bio-oil, as expressed by the viscosity, based on the results of the accelerated aging tests as well as the long-term aging studies. Only the filtered bio-oil consistently gave useful results in the accelerated aging and long-term aging studies. The inconsistency suggests that better protocols need to be developed for sampling bio-oils. These results can be helpful in setting standards for the use of bio-oil, which is just coming into the marketplace. © 2012 American Chemical Society.
Abstract:
Nanoindentation has become a common technique for measuring the hardness and elastic-plastic properties of materials, including coatings and thin films. In recent years, different nanoindenter instruments have been commercialised and used for this purpose. Each instrument is equipped with its own analysis software for the derivation of the hardness and reduced Young's modulus from the raw data. These data are mostly analysed through the Oliver and Pharr method. In all cases, the calibration of compliance and area function is mandatory. The present work illustrates and describes a calibration procedure and an approach to raw data analysis carried out for six different nanoindentation instruments through several round-robin experiments. Three different indenters (Berkovich, cube corner and spherical) were used, and three standardised reference samples (hard fused quartz, soft polycarbonate and sapphire) were chosen. It was clearly shown that the use of these common procedures consistently limited the spread in the hardness and reduced Young's modulus data compared to the same measurements performed using instrument-specific procedures. The following recommendations for nanoindentation calibration must be followed: (a) use only sharp indenters, (b) set an upper cut-off value for the penetration depth below which measurements must be considered unreliable, (c) perform nanoindentation measurements with limited thermal drift, (d) ensure that the load-displacement curves are as smooth as possible, (e) perform stiffness measurements specific to each instrument/indenter couple, (f) use Fq and Sa as calibration reference samples for stiffness and area function determination, (g) use a function, rather than a single value, for the stiffness and (h) adopt a unique protocol and software for raw data analysis in order to limit the data spread related to the instruments (i.e. the level of drift or noise, defects of a given probe) and to make the H and Er data intercomparable. © 2011 Elsevier Ltd.
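For readers unfamiliar with the Oliver and Pharr analysis mentioned above, the following minimal sketch shows how hardness and reduced modulus are typically derived from the peak load, the depth at peak load and the unloading stiffness, after correcting for frame compliance and evaluating a calibrated area function. Parameter names and the truncated area-function polynomial are illustrative assumptions, not the round-robin protocol itself.

```python
import numpy as np

def oliver_pharr(P_max, h_max, S, C_frame=0.0, epsilon=0.75, area_coeffs=(24.5,)):
    """Hardness H and reduced modulus Er from peak load P_max, depth at peak
    load h_max and unloading stiffness S (all in consistent units).
    C_frame is the machine/frame compliance; area_coeffs are calibrated
    area-function coefficients A(hc) = C0*hc^2 + C1*hc + C2*hc^0.5 + ..."""
    S_corr = 1.0 / (1.0 / S - C_frame)          # remove frame compliance from the stiffness
    h_c = h_max - epsilon * P_max / S_corr      # contact depth (epsilon ~ 0.75 for Berkovich)
    powers = (2.0, 1.0, 0.5, 0.25, 0.125, 0.0625)
    A = sum(c * h_c ** p for c, p in zip(area_coeffs, powers))  # projected contact area
    H = P_max / A                               # hardness
    E_r = np.sqrt(np.pi) * S_corr / (2.0 * np.sqrt(A))          # reduced Young's modulus
    return H, E_r
```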
Abstract:
One key step of the industrial development of a tidal energy device is the testing of scale prototype devices within a controlled laboratory environment. At present, there is no available experimental protocol which addresses in a quantitative manner the differences which can be expected between results obtained from the different types of facilities currently employed for this type of testing. As a consequence, where differences between results are found, it has been difficult to confirm the extent to which these differences relate to the device performance or to the test facility type. In the present study, a comparative "Round Robin" testing programme has been conducted as part of the EC FP VII MaRINET programme in order to evaluate the impact of different experimental facilities on the test results. The aim of the trials was to test the same model tidal turbine in four different test facilities to explore the sensitivity of the results to the choice of facility. The facilities comprised two towing tanks, of very different size, and two circulating water channels. Performance assessments in terms of torque, drag and inflow speed showed very similar results in all facilities. However, expected differences between the tank types (circulating and towing) were observed in the fluctuations of the torque and drag measurements. The main facility parameters which can influence the behaviour of the turbine were identified; in particular, the effect of blockage was shown to be significant in cases yielding high thrust coefficients, even at relatively small blockage ratios.
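The measured quantities compared across facilities (torque, drag/thrust and inflow speed) are conventionally reduced to non-dimensional coefficients before comparison. The sketch below shows one common way of doing so, together with the blockage ratio discussed above; the variable names and the fresh-water density are assumptions, and no blockage correction is applied.

```python
import math

RHO_WATER = 1000.0  # kg/m^3 (fresh water assumed)

def turbine_coefficients(torque, thrust, omega, u_inf, rotor_diameter,
                         tank_section_area=None):
    """Non-dimensional performance metrics from measured torque [N*m],
    thrust/drag [N], rotational speed omega [rad/s] and inflow speed u_inf [m/s]."""
    area = math.pi * (rotor_diameter / 2.0) ** 2      # rotor swept area
    q = 0.5 * RHO_WATER * area * u_inf ** 2           # reference dynamic load
    cp = torque * omega / (q * u_inf)                 # power coefficient
    ct = thrust / q                                   # thrust coefficient
    tsr = omega * rotor_diameter / (2.0 * u_inf)      # tip-speed ratio
    blockage = area / tank_section_area if tank_section_area else None
    return {"Cp": cp, "Ct": ct, "TSR": tsr, "blockage_ratio": blockage}
```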
Abstract:
A round-robin exercise was conducted within the CALEIDOS LIFE project. The participants were invited to assess the hazard posed by a substance, applying in silico methods and read-across approaches. The exercise was based on three endpoints: mutagenicity, bioconcentration factor and fish acute toxicity. Nine chemicals were assigned for each endpoint and the participants were invited to complete a specific questionnaire communicating their conclusions. The interesting aspect of this exercise is the justification behind the answers more than the final prediction itself. Which tools were used? How did the selected approach affect the final answer?
Abstract:
Black et al. (2004) identified a systematic difference between LA–ICP–MS and TIMS measurements of 206Pb/238U in zircons, which they correlated with the incompatible trace element content of the zircon. We show that the offset between the LA–ICP–MS and TIMS measured 206Pb/238U correlates more strongly with the total radiogenic Pb than with any incompatible trace element. This suggests that the cause of the 206Pb/238U offset is related to differences in the radiation damage (alpha dose) between the reference and unknowns. We test this hypothesis in two ways. First, we show that there is a strong correlation between the difference in the LA–ICP–MS and TIMS measured 206Pb/238U and the difference in the alpha dose received by the unknown and reference zircons. The LA–ICP–MS ages for the zircons we have dated range from as much as 5.1% younger than their TIMS age to 2.1% older, depending on whether the unknown or reference received the higher alpha dose. Second, we show that by annealing both reference and unknown zircons at 850 °C for 48 h in air we can eliminate the alpha-dose-induced differences in measured 206Pb/238U. This was achieved by analyzing six reference zircons a minimum of 16 times in two round robin experiments: the first consisting of unannealed zircons and the second of annealed grains. The maximum offset between the LA–ICP–MS and TIMS measured 206Pb/238U for the unannealed zircons was 2.3%, which reduced to 0.5% for the annealed grains, as predicted by within-session precision based on counting statistics. Annealing unknown zircons and references to the same state prior to analysis holds the promise of reducing the 3% external error for the measurement of 206Pb/238U of zircon by LA–ICP–MS, indicated by Klötzli et al. (2009), to better than 1%, but more analyses of annealed zircons by other laboratories are required to evaluate the true potential of the annealing method.
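The alpha dose invoked above is commonly estimated from the U and Th concentrations and the crystallisation age via the standard radioactive-decay relation. The sketch below implements that commonly used expression as an illustration only; the decay constants and isotopic abundances are standard literature values, and the function is not the authors' own calculation.

```python
import math

# decay constants [1/yr] and Avogadro's number
LAMBDA_238 = 1.55125e-10
LAMBDA_235 = 9.8485e-10
LAMBDA_232 = 4.9475e-11
AVOGADRO = 6.02214e23

def alpha_dose(u_ppm, th_ppm, age_ma):
    """Approximate time-integrated alpha dose [alpha events per gram of zircon]
    from U and Th concentrations [ppm by weight] and age [Ma]."""
    t = age_ma * 1.0e6                                        # age in years
    # parent atoms per gram of zircon (natural U isotopic abundances assumed)
    n238 = u_ppm * 1.0e-6 * AVOGADRO * 0.992745 / 238.04
    n235 = u_ppm * 1.0e-6 * AVOGADRO * 0.007200 / 235.04
    n232 = th_ppm * 1.0e-6 * AVOGADRO / 232.04
    # 8, 7 and 6 alpha particles emitted per completed decay chain, respectively
    return (8.0 * n238 * (math.exp(LAMBDA_238 * t) - 1.0)
            + 7.0 * n235 * (math.exp(LAMBDA_235 * t) - 1.0)
            + 6.0 * n232 * (math.exp(LAMBDA_232 * t) - 1.0))
```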
Abstract:
This paper considers the problem of identifying the footprints of communication of multiple transmitters in a given geographical area. To do this, a number of sensors are deployed at arbitrary but known locations in the area, and their individual decisions regarding the presence or absence of the transmitters' signal are combined at a fusion center to reconstruct the spatial spectral usage map. One straightforward scheme to construct this map is to query each of the sensors and cluster the sensors that detect the primary's signal. However, using the fact that a typical transmitter footprint map is a sparse image, two novel compressive sensing based schemes are proposed, which require significantly fewer transmissions compared to the querying scheme. A key feature of the proposed schemes is that the measurement matrix is constructed from a pseudo-random binary phase shift applied to the decision of each sensor prior to transmission. The measurement matrix is thus a binary ensemble which satisfies the restricted isometry property. The number of measurements needed for accurate footprint reconstruction is determined using compressive sampling theory. The three schemes are compared through simulations in terms of a performance measure that quantifies the accuracy of the reconstructed spatial spectral usage map. It is found that the proposed sparse reconstruction technique-based schemes significantly outperform the round-robin scheme.
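To make the measurement model concrete, the sketch below builds a pseudo-random ±1 (binary phase) measurement matrix, fuses the sensors' binary decisions into a small number of measurements, and recovers the sparse decision vector. The abstract does not state which reconstruction algorithm is used; orthogonal matching pursuit is substituted here purely as a simple, self-contained example, and the sizes in the toy usage are arbitrary.

```python
import numpy as np

def binary_phase_matrix(m, n, seed=0):
    """Pseudo-random ±1 measurement matrix: each sensor's binary decision is
    multiplied by a random phase (+1 or -1) before being fused."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(m, n))

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover a 'sparsity'-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))   # column most correlated with residual
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # refit on current support
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

# toy usage: 100 sensors, 10 of which detect the transmitter, 40 fused measurements
rng = np.random.default_rng(1)
n, m, k = 100, 40, 10
decisions = np.zeros(n)
decisions[rng.choice(n, size=k, replace=False)] = 1.0
A = binary_phase_matrix(m, n)
y = A @ decisions                                      # fusion-centre measurements
estimate = omp(A, y, k)                                # reconstructed spatial usage map
```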
Abstract:
Within the ESA Climate Change Initiative (CCI) project Aerosol_cci (2010–2013), algorithms for the production of long-term total column aerosol optical depth (AOD) datasets from European Earth Observation sensors are developed. Starting with eight existing precursor algorithms, three analysis steps are conducted to improve and qualify the algorithms: (1) a series of experiments applied to one month of global data to understand several major sensitivities to assumptions needed due to the ill-posed nature of the underlying inversion problem, (2) a round robin exercise of "best" versions of each of these algorithms (defined using the step 1 outcome) applied to four months of global data to identify mature algorithms, and (3) a comprehensive validation exercise applied to one complete year of global data produced by the algorithms selected as mature based on the round robin exercise. The algorithms tested included four using AATSR, three using MERIS and one using PARASOL. This paper summarizes the first step. Three experiments were conducted to assess the potential impact of major assumptions in the various aerosol retrieval algorithms. In the first experiment a common set of four aerosol components was used to provide all algorithms with the same assumptions. The second experiment introduced an aerosol property climatology, derived from a combination of model and sun photometer observations, as a priori information in the retrievals on the occurrence of the common aerosol components. The third experiment assessed the impact of using a common nadir cloud mask for the AATSR and MERIS algorithms in order to characterize the sensitivity to remaining cloud contamination in the retrievals against the baseline dataset versions. The impact of the algorithm changes was assessed for one month (September 2008) of data: qualitatively by inspection of monthly mean AOD maps and quantitatively by comparing daily gridded satellite data against daily averaged AERONET sun photometer observations for the different versions of each algorithm, globally (land and coastal) and for three regions with different aerosol regimes. The analysis allowed for an assessment of the sensitivities of all algorithms, which helped define the best algorithm versions for the subsequent round robin exercise; all algorithms (except for MERIS) showed some improvement, in parts significant. In particular, using common aerosol components, and partly also the a priori aerosol-type climatology, is beneficial. On the other hand, the use of an AATSR-based common cloud mask brought a clear improvement (though with a significant reduction of coverage) for the MERIS standard product, but not for the algorithms using AATSR. It is noted that all these observations are mostly consistent across all five analyses (global land, global coastal, and three regional), which is well understood, since the set of aerosol components defined in Sect. 3.1 was explicitly designed to cover different global aerosol regimes (with low- and high-absorption fine mode, sea salt and dust).
Abstract:
This work introduces the lines of research that the NGCPV project is pursuing and some of the first results obtained. Sponsored by the European Commission under the 7th Framework Programme and by NEDO (Japan) within the first collaborative call launched by both bodies in the field of energy, the NGCPV project aims to bring the cost of the photovoltaic kWh down to competitive prices in the framework of high concentration photovoltaics (CPV) by exploring the development and assessment of concentrator photovoltaic solar cells and modules, novel materials and new solar cell structures, as well as methods and procedures to standardize measurement technology for concentrator photovoltaic cells and modules. The more specific objectives we are pursuing are: (1) to manufacture a cell prototype with an efficiency of at least 45% and to undertake an experimental activity, (2) to manufacture a 35% module prototype and elaborate the roadmap towards the achievement of 40%, (3) to develop reliable characterization techniques for III-V materials and quantum structures, (4) to achieve an agreement within 5% in the characterization of CPV cells and modules in a round robin scheme, and (5) to evaluate the potential of new materials, device technologies and quantum nanostructures to improve the efficiency of solar cells for CPV.