985 results for Round Robin Database Measurement Archive
Abstract:
C. W. Gümbel received nodules from J. Murray that had been collected at a depth of 2,740 fathoms, between Japan and the Sandwich Islands, by the "Challenger" Expedition. They were either round or elongated, with a dull, dirty-brown surface, and enclosed fragments of pumice stone and, more rarely, shark teeth or fragments of mussels. They were analysed by A. Schwager.
Abstract:
Semi-empirical models to estimate the flow boiling heat transfer coefficient, saturated CHF and pressure drop in micro-scale channels have recently been proposed. Most of these models were developed for elongated-bubble and annular flows, given that these flow patterns predominate in smaller channels. In these models the liquid film thickness plays an important role, which makes its accurate measurement a key point in validating them. Several techniques have been successfully applied to measure liquid film thickness during condensation and evaporation under macro-scale conditions. However, although this subject has been targeted by several leading laboratories around the world, there appears to be no conclusive result describing a successful technique capable of measuring dynamic liquid film thickness during evaporation inside micro-scale round channels. This work presents a comprehensive literature review of the methods used to measure liquid film thickness in macro- and micro-scale systems. The methods are described, the main difficulties related to their use in micro-scale systems are identified and, based on this discussion, the most promising methods for measuring dynamic liquid film thickness in micro-scale channels are singled out.
Abstract:
Purpose: Several attempts to determine the transit time of a high dose rate (HDR) brachytherapy unit have been reported in the literature, with controversial results. The determination of the source speed is necessary to accurately calculate the transient dose in brachytherapy treatments. In those studies, only the average speed of the source was measured as a parameter for transit dose calculation, which does not account for the realistic movement of the source and is therefore inaccurate for numerical simulations. The purpose of this work is to report the implementation and technical design of an optical-fiber-based detector that directly measures the instantaneous speed profile of a 192Ir source in a Nucletron HDR brachytherapy unit. Methods: To accomplish this task, we have developed a setup that uses the Cerenkov light induced in optical fibers as the detection signal for the radiation source moving inside the HDR catheter. As the 192Ir source travels between two optical fibers a known distance apart, the thresholds of the induced signals are used to extract the transit time and thus the velocity. The high resolution of the detector enables measurement of the transit time over a short fiber separation, providing the instantaneous speed. Results: Accurate, high-resolution speed profiles of the 192Ir radiation source traveling from the safe to the end of the catheter and between dwell positions are presented. The maximum and minimum velocities of the source were found to be 52.0 ± 1.0 and 17.3 ± 1.2 cm/s. The authors demonstrate that the radiation source follows uniformly accelerated linear motion with acceleration |a| = 113 cm/s². In addition, the authors compare the average speed measured using the optical fiber detector with values from the literature, showing deviations of up to 265%. Conclusions: To the best of the authors' knowledge, this is the first direct measurement of the instantaneous speed profile of a radiation source in an HDR brachytherapy unit traveling from the unit safe to the end of the catheter and between interdwell distances. The method is feasible and accurate enough to implement in quality assurance tests and provides a unique database for efficient computational simulations of the transient dose. [DOI: 10.1118/1.3483780]
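The speed extraction described in the Methods lends itself to a short worked example. The sketch below (Python; the fiber separation, the timestamps and the function names are illustrative assumptions, not the authors' acquisition code) computes speeds from threshold-crossing times at fiber pairs of known separation and fits a line through speed versus time to estimate the acceleration of a uniformly accelerated source.

# A minimal sketch of the two-fiber transit-time idea, assuming the
# threshold-crossing timestamps have already been extracted from the
# Cerenkov signals. All numbers are fabricated placeholders.

import numpy as np

FIBER_SEPARATION_CM = 1.0   # assumed known distance between the two fibers

def speed_between_fibers(t1_s, t2_s, d_cm=FIBER_SEPARATION_CM):
    """Average speed of the source over the fiber gap (cm/s).
    For a small gap this approximates the instantaneous speed."""
    return d_cm / (t2_s - t1_s)

# Hypothetical threshold-crossing times at successive fiber pairs.
t_first  = np.array([0.000, 0.050, 0.095])
t_second = np.array([0.040, 0.085, 0.128])
v = speed_between_fibers(t_first, t_second)

# Uniformly accelerated motion implies v(t) is linear in t; a least-squares
# line through (t, v) gives an estimate of |a| (the slope, in cm/s^2).
a_est, v0_est = np.polyfit(0.5 * (t_first + t_second), v, 1)
print(f"speeds: {v} cm/s, estimated |a| = {abs(a_est):.1f} cm/s^2")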
Abstract:
BACKGROUND: Straylight gives the appearance of a veil of light thrown over a person's retinal image when a strong light source is present. We examined the reproducibility of measurements with the C-Quant and assessed their correlation with characteristics of the eye and subjects' age. PARTICIPANTS AND METHODS: Five repeated straylight measurements were taken with the dominant eye of 45 healthy subjects (age 21-59) with a BCVA of 20/20: 14 emmetropic, 16 myopic, eight hyperopic and seven with astigmatism. We assessed the reproducibility of the straylight measures using the intraclass correlation coefficient. RESULTS: The mean straylight value of all measurements was 1.01 (SD 0.23, median 0.97, interquartile range 0.85-1.1). Per 10 years of age, straylight increased on average by 0.10 (95%CI 0.04 to 0.16, p < 0.01). We found no independent association of refraction (range -5.25 dpt to +2 dpt) with straylight values (0.001; 95%CI -0.022 to 0.024, p = 0.92). Compared to emmetropic subjects, myopia reduced straylight (-0.011; -0.024 to 0.02, p = 0.11), whereas higher straylight values (0.09; -0.01 to 0.20, p = 0.09) were observed in subjects with blue irises compared to dark-colored irises when correcting for age. The intraclass correlation coefficient (ICC) of repeated measurements was 0.83 (95%CI 0.76 to 0.90). CONCLUSIONS: Our study showed that straylight measurements with the C-Quant have high reproducibility, i.e. no large intra-observer variability, making the instrument suitable for long-term follow-up studies assessing the effect of surgical procedures on the quality of vision.
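As a concrete illustration of the reproducibility statistic used here, the sketch below (Python, with fabricated measurements standing in for the 45 subjects with five repeats each; not the study's analysis code) computes a one-way, single-measure intraclass correlation coefficient.

# A minimal sketch of ICC(1,1) from a one-way ANOVA decomposition.
# The data are fabricated placeholders (subjects x repeats).

import numpy as np

def icc_oneway(x):
    """ICC(1,1) for an (n_subjects, k_repeats) array."""
    n, k = x.shape
    grand = x.mean()
    ms_between = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_within = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(0)
subject_means = rng.normal(1.0, 0.2, size=(45, 1))           # true straylight levels
measurements = subject_means + rng.normal(0, 0.08, (45, 5))  # 5 repeats each
print(f"ICC(1,1) = {icc_oneway(measurements):.2f}")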
Abstract:
Background: The COSMIN checklist is a tool for evaluating the methodological quality of studies on measurement properties of health-related patient-reported outcomes. The aim of this study was to determine the inter-rater agreement and reliability of each item score of the COSMIN checklist (n = 114). Methods: 75 articles evaluating measurement properties were randomly selected from the bibliographic database compiled by the Patient-Reported Outcome Measurement Group, Oxford, UK. Raters were asked to assess the methodological quality of three articles using the COSMIN checklist. In a one-way design, percentage agreement and intraclass kappa coefficients or quadratic-weighted kappa coefficients were calculated for each item. Results: 88 raters participated. Of the 75 selected articles, 26 were rated by four to six participants and 49 by two or three participants. Overall, percentage agreement was appropriate (68% of items were above 80% agreement), but the kappa coefficients for the COSMIN items were low (61% were below 0.40, 6% were above 0.75). Reasons for low inter-rater agreement were the need for subjective judgement and raters being accustomed to different standards, terminology and definitions. Conclusions: The results indicate that raters often choose the same response option, but that at item level it is difficult to distinguish between articles. When using the COSMIN checklist in a systematic review, we recommend obtaining training and experience, having the checklist completed by two independent raters, and reaching consensus on one final rating. The instructions for using the checklist have been improved.
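The contrast between high raw agreement and low kappa can be made concrete. The sketch below (Python; the ratings are fabricated placeholders, not COSMIN study data) computes percentage agreement and a quadratic-weighted kappa for one checklist item scored by two raters.

# A minimal sketch of the two agreement statistics named above.

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical 4-point item scores (e.g., poor/fair/good/excellent -> 0..3)
# from two raters over a set of articles.
rater_a = np.array([3, 2, 2, 1, 3, 0, 2, 3, 1, 2])
rater_b = np.array([3, 2, 1, 1, 3, 1, 2, 3, 2, 2])

pct_agreement = np.mean(rater_a == rater_b)           # raw percentage agreement
kappa_qw = cohen_kappa_score(rater_a, rater_b, weights="quadratic")

print(f"agreement = {pct_agreement:.0%}, weighted kappa = {kappa_qw:.2f}")

When ratings cluster on a few response options, percentage agreement can stay high while chance-corrected kappa drops, which is exactly the pattern reported above.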
Abstract:
HTPSELEX is a public database providing access to primary and derived data from high-throughput SELEX experiments aimed at characterizing the binding specificity of transcription factors. The resource is primarily intended to serve computational biologists interested in building models of transcription factor binding sites from large sets of binding sequences. The guiding principle is to make available all information that is relevant for this purpose. For each experiment, we try to provide accurate information about the protein material used, details of the wet-lab protocol, an archive of sequencing trace files, assembled clone sequences (concatemers) and complete sets of in vitro selected protein-binding tags. In addition, we offer in-house derived binding-site models. HTPSELEX also offers reasonably large SELEX libraries obtained with conventional low-throughput protocols. The FTP site contains the trace archives and database flatfiles. The web server offers user-friendly interfaces for viewing individual entries and quality-controlled download of SELEX sequence libraries according to a user-defined sequencing quality threshold. HTPSELEX is available from ftp://ftp.isrec.isb-sib.ch/pub/databases/htpselex/ and http://www.isrec.isb-sib.ch/htpselex.
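For readers who want the flatfiles programmatically, a minimal sketch of anonymous FTP access to the address given above follows (Python; only the host and base path come from the entry, the server may no longer be reachable, and the file names are unknown, so treat this purely as an illustration).

# A minimal sketch of listing the HTPSELEX FTP area with the standard
# library. Everything beyond the host and base path is an assumption.

from ftplib import FTP

HOST = "ftp.isrec.isb-sib.ch"
BASE = "/pub/databases/htpselex/"

ftp = FTP(HOST)
ftp.login()                 # anonymous login
ftp.cwd(BASE)
for name in ftp.nlst():     # trace archives and database flatfiles
    print(name)
ftp.quit()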
Abstract:
BACKGROUND: Soft tissue sarcomas of the trunk wall (STS-TW) are usually studied together with soft tissue sarcomas of other locations. We report a study of STS-TW forming part of the French Sarcoma Group database. PATIENTS AND METHODS: Three hundred and forty-three adults were included. We carried out univariate and multivariate analyses for overall survival (OS), metastasis-free survival (MFS) and local recurrence-free survival (LRFS). RESULTS: Tumor locations were as follows: thoracic wall, 82.5%; abdominal wall, 12.3%; and pelvic wall, 5.2%. Median tumor size was 6.0 cm. The most frequent tumor types were unclassified sarcoma (27.7%) and myogenic sarcoma (19.2%). A total of 44.6% of cases were grade 3. In all, 21.9% of patients had a previous medical history of radiotherapy (PHR). Median follow-up was 7.6 years. The 5-year OS, MFS and LRFS rates were 60.4%, 68.9% and 58.4%, respectively. Multivariate analysis retained PHR and grade as predictors of LRFS, and PHR, size and grade as prognostic factors for MFS. Factors influencing OS were age, size, PHR, depth, grade and surgical margins. The predictive factors of incomplete response were PHR, size and T3. CONCLUSIONS: Our results suggest classical prognostic factors similar to those of sarcomas of other locations. However, a separate analysis of STS-TW revealed a subgroup of patients with PHR who had a significantly poorer prognosis.
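As an illustration of the kind of multivariate survival modelling reported here, the sketch below fits a Cox proportional-hazards model (Python with the lifelines library; the data frame is a fabricated placeholder and the column names are assumptions, not the French Sarcoma Group database schema).

# A minimal sketch: hazard ratios for prognostic factors from a tiny
# fabricated cohort. Real analyses would use hundreds of patients.

import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years":    [2.1, 7.6, 1.3, 5.0, 3.2, 8.4, 4.4, 6.1, 0.9, 9.3],  # follow-up
    "event":    [1, 0, 1, 0, 1, 0, 1, 1, 1, 0],                      # death observed?
    "age":      [55, 43, 67, 38, 72, 50, 61, 47, 69, 41],
    "size_cm":  [6.0, 3.5, 9.0, 4.0, 11.0, 5.5, 7.2, 6.8, 10.0, 3.0],
    "grade3":   [1, 1, 1, 0, 1, 0, 0, 1, 1, 0],
    "prior_rt": [1, 0, 0, 0, 1, 1, 1, 0, 1, 0],                      # PHR
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="event")
cph.print_summary()   # hazard ratio per covariate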
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Data management consists of collecting, storing, and processing data into a format that provides value-adding information for the decision-making process. The development of data management has enabled the design of increasingly effective database management systems to support business needs. As a result, not only are advanced systems designed for reporting purposes, but operational systems also support reporting and data analysis. The research method used in the theory part is qualitative research, and the empirical part is a case study. The objective of this paper is to examine database management system requirements from the reporting management and data management perspectives. In the theory part these requirements are identified and the appropriateness of the relational data model is evaluated. In addition, key performance indicators applied to the operational monitoring of production are studied. The study reveals that appropriate operational key performance indicators for production take into account time, quality, flexibility and cost aspects; manufacturing efficiency in particular is highlighted. In this paper, reporting management is defined as continuous monitoring of given performance measures. According to the literature review, the data management tool should cover performance, usability, reliability, scalability, and data privacy aspects in order to fulfill reporting management's demands. A framework for the system development phase is created based on these requirements and is used in the empirical part of the thesis, where such a system is designed and built for reporting management purposes for a company operating in the manufacturing industry. Relational data modeling and database architectures are utilized in building the system on a relational database platform.
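A small sketch can make the relational-reporting idea concrete: one production table on a relational platform and a single operational KPI query over it (Python with the standard sqlite3 module; the schema, table and column names, and the figures are illustrative assumptions, not the case company's system).

# A minimal sketch of KPI reporting on a relational database.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE production_run (
        run_id     INTEGER PRIMARY KEY,
        line       TEXT NOT NULL,
        good_units INTEGER NOT NULL,
        all_units  INTEGER NOT NULL,
        run_hours  REAL NOT NULL
    );
    INSERT INTO production_run VALUES
        (1, 'A', 950, 1000, 8.0),
        (2, 'A', 470,  500, 4.5),
        (3, 'B', 880, 1000, 8.0);
""")

# KPI query: quality yield and throughput per line (quality and time views).
for row in conn.execute("""
    SELECT line,
           SUM(good_units) * 1.0 / SUM(all_units) AS yield,
           SUM(good_units) / SUM(run_hours)       AS units_per_hour
    FROM production_run GROUP BY line
"""):
    print(row)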
Abstract:
In the past decade, customer loyalty programs have become very popular, and almost every retail chain seems to have one. Through loyalty programs, companies are able to collect information about customer behavior and to use this information in business and marketing management to guide decision making and resource allocation. The benefits for the loyalty program member are often monetary, which affects the profitability of the loyalty program. Not all loyalty program members are equally profitable: some purchase products at the recommended retail price, while others buy only discounted products. If the company spends a similar amount of resources on all members, the customer margin is lower for those who buy only discounted products. It is vital for a company to measure the profitability of its members in order to calculate customer value. Several different customer value metrics can be used for this. In recent years, customer lifetime value in particular has received a lot of attention and is seen as superior to other customer value metrics. In this master's thesis, customer lifetime value is applied to the case company's customer loyalty program. The data were collected from the loyalty program's database and represent the year 2012 on the Finnish market. The data were not complete enough to take full advantage of customer lifetime value, and it is concluded that a new key performance indicator, customer margin, should be adopted in order to run the loyalty program's business profitably. With the customer margin, the company would be able to compute customer lifetime value on a regular basis, enabling efficient resource allocation in marketing.
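For reference, a standard discounted-cash-flow form of customer lifetime value is easy to state in code. The sketch below (Python; the margin, retention and discount figures are fabricated, and this is not the thesis's model) sums per-period customer margins weighted by retention probability and discounted to present value.

# A minimal sketch of a textbook CLV calculation:
# CLV = sum over t of margin * retention^t / (1 + discount)^t

def customer_lifetime_value(margin_per_period, retention, discount, periods):
    return sum(
        margin_per_period * retention**t / (1 + discount) ** t
        for t in range(periods)
    )

# A member yielding 40 EUR margin/year, 80% retention, 10% discount rate:
print(f"CLV ~ {customer_lifetime_value(40.0, 0.80, 0.10, 10):.2f} EUR")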
Abstract:
In its periodic declarations of domestic support to the WTO, the EU has progressively reduced its amber-box declarations in line with its changing system of farm support. Surprisingly, however, in 2007/08 it managed to more than halve its amber box compared with that of the previous year, easily achieving the reduction targets being touted in the Doha Round. This was largely due to a change in the calculations for fresh fruits and vegetables. These had been linked to the entry price system, which was not affected by the 2008 fruit and vegetables reform. Why the EU chose to make this change during the ongoing Doha Round negotiations remains unclear.
Abstract:
Extratropical cyclone lifecycles have been studied extensively with the aim of understanding the dynamical mechanisms involved in their development. Previous work has often been based on subjective analysis of individual case studies. Such case studies have contributed heavily to the conceptual models of extratropical cyclones that provide a framework for understanding the dynamical evolution of cyclones. These conceptual models are widely used in meteorology courses throughout the world to illustrate the basic structure and evolution of extratropical cyclones. This article presents a database of extratropical cyclone composites that highlights the average structure and evolution of 20 years of extratropical cyclones, as opposed to individual case studies. The composite fields are constructed by combining a database of cyclone tracks from the ERA-Interim reanalysis (1989-2009, 6-hourly) with the full 3D ERA-Interim reanalysis fields. Vertical and horizontal composites of cyclone structure are shown for cyclones generated in the Atlantic and Pacific regions, identifying features such as the relative positions of cold, warm and occluded fronts and their associated wind and cloud patterns. In addition, the evolution of cyclonic flows such as the warm and cold conveyor belts and the dry intrusion is illustrated. A webpage containing an archive of the composited data is freely available for educational purposes.
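The compositing step can be sketched compactly: extract a fixed-size box of a gridded field around each tracked cyclone centre and average over all cyclones. The Python below is an illustrative toy (the arrays, indices and helper are assumptions, not the article's processing chain).

# A minimal sketch of cyclone-centred compositing on a (time, lat, lon) grid.

import numpy as np

def composite(field, centres, half_width):
    """Average (2w+1)x(2w+1) boxes of field[time, lat, lon] around
    each (time_idx, lat_idx, lon_idx) cyclone centre."""
    w = half_width
    boxes = [
        field[t, i - w:i + w + 1, j - w:j + w + 1]
        for t, i, j in centres
    ]
    return np.mean(boxes, axis=0)

# Fabricated stand-ins: a (time, lat, lon) field and three track points.
field = np.random.default_rng(1).normal(size=(10, 120, 240))
centres = [(2, 60, 100), (5, 58, 103), (8, 61, 99)]
print(composite(field, centres, half_width=10).shape)  # -> (21, 21)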
Abstract:
The ground-based Atmospheric Radiation Measurement Program (ARM) and NASA Aerosol Robotic Network (AERONET) routinely monitor clouds using zenith radiances at visible and near-infrared wavelengths. Using the transmittance calculated from such measurements, we have developed a new retrieval method for cloud effective droplet size and conducted extensive tests for non-precipitating liquid water clouds. The underlying principle is to combine a liquid-water-absorbing wavelength (i.e., 1640 nm) with a non-water-absorbing wavelength for acquiring information on cloud droplet size and optical depth. For simulated stratocumulus clouds with liquid water path less than 300 g m−2 and horizontal resolution of 201 m, the retrieval method underestimates the mean effective radius by 0.8 μm, with a root-mean-squared error of 1.7 μm and a relative deviation of 13%. For actual observations with a liquid water path less than 450 g m−2 at the ARM Oklahoma site during 2007–2008, our 1.5-min-averaged retrievals are generally larger by around 1 μm than those from combined ground-based cloud radar and microwave radiometer at a 5-min temporal resolution. We also compared our retrievals to those from combined shortwave flux and microwave observations for relatively homogeneous clouds, showing that the bias between these two retrieval sets is negligible, but the error of 2.6 μm and the relative deviation of 22% are larger than those found in our simulation case. Finally, the transmittance-based cloud effective droplet radii agree to better than 11% with satellite observations and have a negative bias of 1 μm. Overall, the retrieval method provides reasonable cloud effective radius estimates, which can enhance the cloud products of both ARM and AERONET.
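The two-wavelength principle can be illustrated with a toy lookup-table inversion (Python; the "forward model" below is fabricated for shape only, and the grids, names and numbers are assumptions rather than the paper's radiative-transfer tables): transmittance at the non-absorbing channel constrains optical depth, while the water-absorbing 1640 nm channel adds sensitivity to droplet size.

# A minimal sketch of a two-channel lookup-table retrieval by
# nearest-neighbour search in transmittance space.

import numpy as np

r_e = np.linspace(4, 20, 33)   # effective radius grid (um)
tau = np.linspace(2, 60, 59)   # optical depth grid
R, T = np.meshgrid(r_e, tau, indexing="ij")

# Fabricated forward model: the non-absorbing channel responds mainly to
# optical depth; the 1640 nm channel also dims with droplet size.
trans_vis = np.exp(-0.03 * T)
trans_1640 = np.exp(-0.03 * T) * np.exp(-0.01 * R)

def retrieve(obs_vis, obs_1640):
    """Nearest grid point in (vis, 1640 nm) transmittance space."""
    cost = (trans_vis - obs_vis) ** 2 + (trans_1640 - obs_1640) ** 2
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return r_e[i], tau[j]

print(retrieve(0.40, 0.36))  # -> (effective radius, optical depth)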
Abstract:
A continuous tropospheric and stratospheric vertically resolved ozone time series, from 1850 to 2099, has been generated to be used as forcing in global climate models that do not include interactive chemistry. A multiple linear regression analysis of SAGE I+II satellite observations and polar ozonesonde measurements is used for the stratospheric zonal mean dataset during the well-observed period from 1979 to 2009. In addition to terms describing the mean annual cycle, the regression includes terms representing equivalent effective stratospheric chlorine (EESC) and the 11-yr solar cycle variability. The EESC regression fit coefficients, together with pre-1979 EESC values, are used to extrapolate the stratospheric ozone time series backward to 1850. While a similar procedure could be used to extrapolate into the future, coupled chemistry climate model (CCM) simulations indicate that future stratospheric ozone abundances are likely to be significantly affected by climate change, and capturing such effects through a regression model approach is not feasible. Therefore, the stratospheric ozone dataset is extended into the future (merged in 2009) with multimodel mean projections from 13 CCMs that performed a simulation until 2099 under the SRES (Special Report on Emission Scenarios) A1B greenhouse gas scenario and the A1 adjusted halogen scenario in the second round of the Chemistry-Climate Model Validation (CCMVal-2) Activity. The stratospheric zonal mean ozone time series is merged with a three-dimensional tropospheric dataset extracted from simulations of the past by two CCMs (CAM3.5 and GISS-PUCCINI) and of the future by one CCM (CAM3.5). The future tropospheric ozone time series continues the historical CAM3.5 simulation until 2099 following the four different Representative Concentration Pathways (RCPs). Generally good agreement is found between the historical segment of the ozone database and satellite observations, although it should be noted that total column ozone is overestimated in the southern polar latitudes during spring and tropospheric column ozone is slightly underestimated. Vertical profiles of tropospheric ozone are broadly consistent with ozonesondes and in-situ measurements, with some deviations in regions of biomass burning. The tropospheric ozone radiative forcing (RF) from the 1850s to the 2000s is 0.23 W m−2, lower than previous results. The lower value is mainly due to (i) a smaller increase in biomass burning emissions; (ii) a larger influence of stratospheric ozone depletion on upper tropospheric ozone at high southern latitudes; and possibly (iii) a larger influence of clouds (which act to reduce the net forcing) compared to previous radiative forcing calculations. Over the same period, decreases in stratospheric ozone, mainly at high latitudes, produce a RF of −0.08 W m−2, which is more negative than the central Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) value of −0.05 W m−2, but within the stated range of −0.15 to +0.05 W m−2. The more negative value is explained by the fact that the regression model simulates significant ozone depletion prior to 1979, in line with the increase in EESC and as confirmed by CCMs, while AR4 assumed no change in stratospheric RF prior to 1979. A negative RF of similar magnitude persists into the future, although its location shifts from high latitudes to the tropics.
This shift is due to increases in polar stratospheric ozone but decreases in tropical lower stratospheric ozone, related to a strengthening of the Brewer-Dobson circulation, particularly through the latter half of the 21st century. Differences in trends in tropospheric ozone among the four RCPs are mainly driven by different methane concentrations, resulting in a range of tropospheric ozone RFs between 0.4 and 0.1 W m−2 by 2100. The ozone dataset described here has been released for the Coupled Model Intercomparison Project (CMIP5) model simulations in the netCDF Climate and Forecast (CF) Metadata Convention at the PCMDI website (http://cmip-pcmdi.llnl.gov/).
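The regression-and-extrapolation step described above can be sketched numerically: fit zonal-mean ozone against an annual cycle plus EESC and solar-cycle terms over the observed period, then reuse the EESC coefficient with pre-1979 EESC values to extend the series backward. Everything in the Python below is a fabricated placeholder (no SAGE, ozonesonde or EESC data), so it only illustrates the structure of the method.

# A minimal sketch of the multiple linear regression and backward
# extrapolation, on synthetic monthly data for 1979-2009.

import numpy as np

months = np.arange(372)                      # 1979-2009, monthly
t = months / 12.0
eesc = 1.5 + 0.04 * t                        # assumed rising EESC proxy
solar = 0.1 * np.sin(2 * np.pi * t / 11.0)   # 11-yr solar cycle proxy

rng = np.random.default_rng(2)
ozone = (300 - 8 * eesc + 5 * solar + 10 * np.cos(2 * np.pi * t)
         + rng.normal(0, 2, months.size))    # synthetic "observations" (DU)

# Design matrix: mean, annual cycle (cos/sin), EESC and solar terms.
X = np.column_stack([
    np.ones_like(t),
    np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),
    eesc, solar,
])
coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)

# Backward extrapolation: evaluate the fit with pre-1979 EESC values.
t_past = np.arange(-120, 0) / 12.0           # ten years before 1979
eesc_past = 1.5 + 0.04 * t_past              # assumed historical EESC
X_past = np.column_stack([
    np.ones_like(t_past),
    np.cos(2 * np.pi * t_past), np.sin(2 * np.pi * t_past),
    eesc_past, np.zeros_like(t_past),        # solar term omitted here
])
print((X_past @ coef)[:5])                   # reconstructed past ozone (DU)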
Abstract:
The authors identified several specific problems with the measurement of achievement goals in the current literature and illustrated these problems, focusing primarily on A. J. Elliot and H. A. McGregor's (2001) Achievement Goal Questionnaire (AGQ). They addressed these problems by creating the AGQ-Revised and conducting a study that examined the measure's structural validity and predictive utility with 229 (76 male, 150 female, 3 unspecified) undergraduates. The hypothesized factor and dimensional structures of the measure were confirmed and shown to be superior to a host of alternatives. The predictions were nearly uniformly supported with regard to both the antecedents (need for achievement and fear of failure) and consequences (intrinsic motivation and exam performance) of the 4 achievement goals. In discussing their work, the authors highlight the importance and value of additional precision in the area of achievement goal measurement.