948 results for Data reliability
Abstract:
In the framework of a global investigation of the Spanish natural analogues of CO2 storage and leakage, four selected sites from the Mazarrón–Gañuelas Tertiary Basin (Murcia, Spain) were studied to compute the diffuse soil CO2 flux using the accumulation chamber method. The Basin is characterized by the presence of a deep, saline, thermal (~47 °C) CO2-rich aquifer intersected by two deep geothermal exploration wells named 'El Saladillo' (535 m) and 'El Reventón' (710 m). The CO2 flux data were processed by means of a graphical–statistical method, kriging estimation and sequential Gaussian simulation algorithms. The results allow us to conclude that the Tertiary marly cap-rock of this CO2-rich aquifer acts as a very effective seal, preventing any CO2 leakage from this natural CO2 storage site, which therefore provides an excellent scenario to support, by analogy, the safety of a CO2 storage site.
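The abstract names the accumulation chamber method but, as a summary, does not spell out the calculation; the sketch below only illustrates the usual flux computation from a chamber concentration record. The function name, the chamber geometry and the sample readings are hypothetical, and the kriging and sequential Gaussian simulation of the resulting fluxes are not shown.

```python
import numpy as np

def chamber_co2_flux(time_s, co2_ppm, volume_m3, area_m2,
                     pressure_pa=101_325.0, temp_k=298.15):
    """Diffuse soil CO2 flux from an accumulation-chamber concentration record.

    The initial slope dC/dt (ppm/s) is estimated by a linear fit, converted to
    mol m^-3 s^-1 with the ideal gas law, and scaled by the chamber
    volume-to-area ratio. Returns the flux in mol m^-2 s^-1.
    """
    slope_ppm_per_s = np.polyfit(np.asarray(time_s, float),
                                 np.asarray(co2_ppm, float), 1)[0]
    r = 8.314  # gas constant, J mol^-1 K^-1
    mol_m3_per_ppm = pressure_pa / (r * temp_k) * 1e-6
    return slope_ppm_per_s * mol_m3_per_ppm * volume_m3 / area_m2

# Hypothetical 60-second record rising from ~400 to ~460 ppm in a small chamber
flux = chamber_co2_flux(time_s=range(0, 70, 10),
                        co2_ppm=[400, 410, 420, 430, 440, 450, 460],
                        volume_m3=0.003, area_m2=0.03)
```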
Abstract:
Purpose: To provide the basis for collecting strength training data using a rigorously validated injury report form. Methods: A group of specialists designed a questionnaire of 45 items grouped into 4 dimensions. Six stages were used to assess the face, content, and criterion validity of the weight training injury report form. A 13-member panel assessed the form for face validity, and an expert panel assessed it for content and criterion validity. Panel members were consulted until consensus was reached. A yardstick developed by an expert panel using the intraclass correlation technique was used to assess the reliability of the form. Test-retest reliability was assessed with the intraclass correlation coefficient (ICC). The strength training injury report form was developed, and its face, content, and criterion validity were successfully assessed. A six-step protocol to create a yardstick was also developed to assist in the validation process. Both inter-rater and intra-rater reliability results indicated 98% agreement, with inter-rater agreement of 98% for three injuries. Results: The Cronbach's alpha of the questionnaire was 0.944 (p < 0.01) and the ICC of the entire questionnaire was 0.894 (p < 0.01). Conclusion: The questionnaire gathers enough psychometric properties to be considered a valid and reliable tool for recording injury data in strength training, providing researchers with a basis for future studies in this area. Key Words: data collection; validation; injury prevention; strength training
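Neither the yardstick procedure nor the exact ICC model is detailed in this abstract; as a hedged illustration of the two reliability statistics it reports (Cronbach's alpha and the ICC), the following Python sketch computes both from synthetic data. The function names, the ICC(3,1) consistency model and the simulated scores are assumptions, not the authors' analysis.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def icc_consistency(ratings: np.ndarray) -> float:
    """ICC(3,1): two-way mixed effects, consistency, single measurement.

    `ratings` is an (n_subjects, n_raters_or_occasions) matrix,
    e.g. test vs. retest scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Synthetic example: 50 respondents, 4 correlated items, plus a retest occasion
rng = np.random.default_rng(0)
true_score = rng.normal(size=(50, 1))
items = true_score + rng.normal(scale=0.5, size=(50, 4))
retest = np.column_stack([items.sum(axis=1),
                          items.sum(axis=1) + rng.normal(scale=0.5, size=50)])
print(cronbach_alpha(items), icc_consistency(retest))
```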
Abstract:
Due to the increase of huge data volumes, a new parallel computing paradigm has arisen to process big data efficiently. Many of these systems, called data-intensive computing systems, follow the Google MapReduce programming model. The main advantage of these systems is the idea of sending the computation to where the data reside, aiming to provide scalability and efficiency. In failure-free scenarios these frameworks usually achieve good results; however, such scenarios are not realistic, since most deployments are characterized by the presence of failures. Consequently, these frameworks ship fault-tolerance and dependability techniques as built-in features. On the other hand, dependability improvements are known to imply additional resource costs. This is reasonable, and the providers offering these infrastructures are aware of it. Nevertheless, not all approaches provide the same trade-off between fault-tolerance capabilities (or, more generally, reliability capabilities) and cost. This thesis addresses the coexistence of reliability and resource efficiency in MapReduce-based systems, looking for methodologies that introduce minimal cost while guaranteeing an appropriate level of reliability.
To achieve this, the thesis proposes: (i) a formalization of a failure detector abstraction; (ii) an alternative solution to the single points of failure of these frameworks; and, finally, (iii) a novel feedback-based resource allocation system at the container level. These generic contributions have been instantiated for the Hadoop YARN architecture, which is nowadays the reference framework in the data-intensive computing community. The thesis demonstrates how all of these approaches outperform Hadoop YARN in terms of both reliability and resource efficiency.
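The failure detector abstraction is only named here, not specified; the sketch below shows one common realization (a heartbeat detector with a timeout), purely as an assumed illustration of the kind of component a MapReduce master might consult. The class and method names are hypothetical and are not the thesis's or Hadoop YARN's API.

```python
import time

class HeartbeatFailureDetector:
    """Minimal heartbeat/timeout failure detector (illustrative only).

    A worker is suspected once no heartbeat has arrived within `timeout`
    seconds; the suspicion is revoked if a later heartbeat arrives, so the
    detector is only eventually accurate under partial-synchrony assumptions.
    """

    def __init__(self, timeout: float = 10.0):
        self.timeout = timeout
        self.last_heartbeat: dict[str, float] = {}

    def heartbeat(self, node_id: str) -> None:
        """Record a heartbeat from `node_id` (called by the messaging layer)."""
        self.last_heartbeat[node_id] = time.monotonic()

    def suspected(self) -> set[str]:
        """Return the set of nodes currently suspected to have failed."""
        now = time.monotonic()
        return {node for node, last in self.last_heartbeat.items()
                if now - last > self.timeout}
```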
Abstract:
We present the results of the analysis of satellite imagery to study light pollution in Spain. Both calibrated and non-calibrated DMSP-OLS images were used. We describe the method to scale the non-calibrated DMSP-OLS images, which allows us to use differential photometry techniques to study the evolution of light pollution. Population data and calibrated DMSP-OLS satellite images for the year 2006 were compared to test the reliability of official statistics on public lighting consumption. We found a relationship between population and energy consumption that is valid for several regions. Finally, the true evolution of electricity consumption for street lighting in Spain from 1992 to 2010 was derived; it has doubled in the last 18 years in most of the provinces.
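The abstract reports a population–consumption relationship without giving its functional form; purely as an illustration, and assuming a power-law form fitted to hypothetical province-level data, such a relationship could be estimated as follows.

```python
import numpy as np

def fit_power_law(population, consumption_kwh):
    """Fit consumption ~ a * population**b by least squares in log-log space.

    Returns (a, b). The power-law form and any data fed to it are assumptions
    made for illustration, not the relationship reported in the study.
    """
    log_p = np.log10(np.asarray(population, float))
    log_c = np.log10(np.asarray(consumption_kwh, float))
    b, log_a = np.polyfit(log_p, log_c, 1)
    return 10.0 ** log_a, b
```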
Abstract:
Background: Only a minority of infants are exclusively breastfed for the recommended 6 months postpartum. Breastfeeding self-efficacy is a mother's confidence in her ability to breastfeed and is predictive of breastfeeding behaviors. The Prenatal Breast-feeding Self-efficacy Scale (PBSES) was developed among English-speaking mothers to measure breastfeeding self-efficacy before delivery. Objectives: To translate the PBSES into Spanish and assess its psychometric properties. Design: Reliability and validity assessment. Setting: A public hospital in Yecla, Spain. Participants: A convenience sample of 234 pregnant women in their third trimester of pregnancy. Methods: The PBSES was translated into Spanish using forward and back translation. A battery of self-administered questionnaires was completed by participants, including a questionnaire on sociodemographic variables, breastfeeding experience and intention, as well as the Spanish version of the PBSES. In addition, data on exclusive breastfeeding at discharge were collected from the hospital database. The dimensional structure, internal consistency and construct validity of the Spanish version of the PBSES were assessed. Results: Confirmatory factor analysis suggested the presence of one construct, self-efficacy, with four dimensions or latent variables. Cronbach's alpha coefficient for internal consistency was 0.91. Response patterns based on the decision to breastfeed during pregnancy provided evidence of construct validity. In addition, the scores of the Spanish version of the PBSES significantly predicted exclusive breastfeeding at discharge. Conclusions: The Spanish version of the PBSES shows evidence of reliability, as well as contrasting-group and predictive validity. Confirmatory factor analysis indicated marginal fit, and further studies are needed to provide new evidence on the structure of the scale. The Spanish version of the PBSES can be considered a reliable measure with evidence of validity.
Abstract:
The use of microprocessor-based systems is gaining importance in application domains where safety is a must. For this reason, there is growing concern about the mitigation of SEU and SET effects. This paper presents a new hybrid technique aimed at protecting both the data and the control flow of embedded applications running on microprocessors. On the one hand, the approach is based on software redundancy techniques for correcting errors produced in the data. On the other hand, control-flow errors can be detected by reusing the on-chip debug interface present in most modern microprocessors. Experimental results show an important increase in system reliability, exceeding two orders of magnitude in terms of mitigation of both SEUs and SETs. Furthermore, the overheads incurred by our technique are perfectly affordable in low-cost systems.
Abstract:
This paper reports the results of two studies. The purpose of the first study was to determine whether lifestyle variables and past involvement in physical activity were related to current activity levels in master athletes and sedentary older adults. Retrospective interviews were conducted with 12 master athletes and 12 sedentary older adults. Results demonstrated that education level, spouse participation, smoking, and recent physical activity levels were indicators of current involvement. The second study investigated the reliability of the data collected in the retrospective interviews. Consistent with results from younger samples, we confirmed that lifestyle variables and physical activity involvement could be accurately recalled over a period of 25 years, making this tool a useful addition for the study of physical activity in older adults.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
DeKalb County School System, Decatur, Ga.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Most of the modern developments with classification trees are aimed at improving their predictive capacity. This article considers a curiously neglected aspect of classification trees, namely the reliability of predictions that come from a given classification tree. Since a node of a tree represents, in the limit, a point in the predictor space, the aim of this article is the development of localized assessments of the reliability of prediction rules. A classification tree may be used either to provide a probability forecast, where for each node the membership probabilities for each class constitute the prediction, or a true classification, where each new observation is predictively assigned to a unique class. Correspondingly, two types of reliability measure are derived, namely prediction reliability and classification reliability. We use bootstrapping methods as the main tool to construct these measures. We also provide a suite of graphical displays by which they may be easily appreciated. In addition to providing some estimate of the reliability of specific forecasts of each type, these measures can also be used to guide future data collection to improve the effectiveness of the tree model. The motivating example we give has a binary response, namely the presence or absence of a species of Eucalypt, Eucalyptus cloeziana, at a given sampling location in response to a suite of environmental covariates (although the methods are not restricted to binary response data).
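The article's own bootstrap-based measures are not reproduced here; as an illustrative approximation only, the sketch below uses scikit-learn's DecisionTreeClassifier (an assumed stand-in, not the authors' implementation) to estimate the bootstrap spread of class-membership probabilities at new locations, in the spirit of the prediction reliability measure.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def prediction_reliability(X, y, X_new, n_boot=200, seed=0):
    """Bootstrap spread of class-1 membership probabilities at new points.

    Grows one tree per bootstrap resample of (X, y) and returns, for each row
    of X_new, the mean and standard deviation of the predicted probability of
    class 1; a small spread suggests a more reliable probability forecast.
    Assumes binary labels coded 0/1.
    """
    rng = np.random.default_rng(seed)
    probs = np.empty((n_boot, len(X_new)))
    for b in range(n_boot):
        Xb, yb = resample(X, y, stratify=y,
                          random_state=int(rng.integers(1 << 31)))
        tree = DecisionTreeClassifier(max_depth=4).fit(Xb, yb)
        probs[b] = tree.predict_proba(X_new)[:, 1]
    return probs.mean(axis=0), probs.std(axis=0)
```

A classification-reliability analogue could be sketched the same way, as the proportion of bootstrap trees that agree with the modal predicted class at each point.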
Abstract:
The reliability of measurement refers to unsystematic error in observed responses. Investigations of the prevalence of random error in stated estimates of willingness to pay (WTP) are important to an understanding of why tests of validity in contingent valuation (CV) can fail. However, published reliability studies have tended to adopt empirical methods that have practical and conceptual limitations when applied to WTP responses. This contention is supported in a review of contingent valuation reliability studies that demonstrates important limitations of existing approaches to WTP reliability. It is argued that empirical assessments of the reliability of contingent values may be better dealt with by using multiple indicators to measure the latent WTP distribution. This latent variable approach is demonstrated with data obtained from a WTP study for stormwater pollution abatement. Attitude variables were employed as a way of assessing the reliability of open-ended WTP (with benchmarked payment cards) for stormwater pollution abatement. The results indicated that participants' decisions to pay were reliably measured, but not the magnitude of the WTP bids. This finding highlights the need to better discern what is actually being measured in WTP studies.
Abstract:
Research in conditioning (all the processes of preparation for competition) has used group research designs, where multiple athletes are observed at one or more points in time. However, empirical reports of large inter-individual differences in response to conditioning regimens suggest that applied conditioning research would greatly benefit from single-subject research designs. Single-subject research designs allow us to find out the extent to which a specific conditioning regimen works for a specific athlete, as opposed to the average athlete, who is the focal point of group research designs. The aim of the following review is to outline the strategies and procedures of single-subject research as they pertain to the assessment of conditioning for individual athletes. The four main experimental designs in single-subject research are: the AB design, reversal (withdrawal) designs and their extensions, multiple baseline designs, and alternating treatment designs. Visual and statistical analyses commonly used to analyse single-subject data, together with their advantages and limitations, are discussed. Modelling of multivariate single-subject data using techniques such as dynamic factor analysis and structural equation modelling may identify individualised models of conditioning, leading to better prediction of performance. Despite problems associated with data analyses in single-subject research (e.g. serial dependency), sports scientists should use single-subject research designs in applied conditioning research to understand how well an intervention (e.g. a training method) works and to predict performance for a particular athlete.
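The review discusses visual and statistical analysis of AB-type single-subject data without prescribing a particular statistic; purely as an illustration, the sketch below computes the percentage of non-overlapping data (PND), a simple overlap index often applied to AB designs. The phase labels and session values are hypothetical.

```python
import numpy as np

def percentage_nonoverlapping_data(baseline, treatment, increase_expected=True):
    """PND: share of treatment-phase points beyond the most extreme baseline point."""
    baseline = np.asarray(baseline, float)
    treatment = np.asarray(treatment, float)
    if increase_expected:
        return 100.0 * (treatment > baseline.max()).mean()
    return 100.0 * (treatment < baseline.min()).mean()

# Hypothetical AB data: 6 baseline sessions, then 8 sessions under a new regimen
print(percentage_nonoverlapping_data([52, 54, 53, 55, 54, 53],
                                     [56, 57, 55, 58, 59, 57, 60, 58]))
```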
Abstract:
The H I Parkes All Sky Survey (HIPASS) is a blind extragalactic H I 21-cm emission-line survey covering the whole southern sky from declination -90° to +25°. The HIPASS catalogue (HICAT), containing 4315 H I-selected galaxies from the region south of declination +2°, is presented in Meyer et al. (Paper I). This paper describes in detail the completeness and reliability of HICAT, which are calculated from the recovery rate of synthetic sources and follow-up observations, respectively. HICAT is found to be 99 per cent complete at a peak flux of 84 mJy and an integrated flux of 9.4 Jy km s(-1). The overall reliability is 95 per cent, but rises to 99 per cent for sources with peak fluxes >58 mJy or integrated flux >8.2 Jy km s(-1). Expressions are derived for the uncertainties on the most important HICAT parameters: peak flux, integrated flux, velocity width and recessional velocity. The errors on HICAT parameters are dominated by the noise in the HIPASS data, rather than by the parametrization procedure.
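The HIPASS completeness figures come from the recovery rate of injected synthetic sources; the actual pipeline is not given in the abstract, so the sketch below only illustrates the generic calculation of a recovered fraction per flux bin. The function and argument names are assumptions.

```python
import numpy as np

def completeness_by_flux(injected_flux, recovered, bin_edges):
    """Recovered fraction of injected synthetic sources, per flux bin.

    `injected_flux` holds the fluxes of the injected sources, `recovered`
    flags the ones re-detected by the source finder, and `bin_edges` are the
    flux-bin boundaries. NaN marks empty bins.
    """
    injected_flux = np.asarray(injected_flux, float)
    recovered = np.asarray(recovered, bool)
    idx = np.digitize(injected_flux, bin_edges)
    out = np.full(len(bin_edges) - 1, np.nan)
    for b in range(1, len(bin_edges)):
        in_bin = idx == b
        if in_bin.any():
            out[b - 1] = recovered[in_bin].mean()
    return out
```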
Abstract:
We have used an animal model to test the reliability of a new portable continuous-wave Doppler ultrasonic cardiac output monitor, the USCOM. In six anesthetized dogs, cardiac output was measured with a high-precision transit time ultrasonic flowprobe placed on the ascending aorta. The dogs' cardiac output was increased with a dopamine infusion (0-15 µg kg(-1) min(-1)). Simultaneous flowprobe and USCOM cardiac output measurements were made. Up to 64 pairs of readings were collected from each dog. Data were compared by using the Bland and Altman plot method and Lin's concordance correlation coefficient. A total of 319 sets of paired readings were collected. The mean (±SD) cardiac output was 2.62 ± 1.04 L/min, and readings ranged from 0.79 to 5.73 L/min. The mean bias between the 2 sets of readings was -0.01 L/min, with limits of agreement (95% confidence intervals) of -0.34 to 0.31 L/min. This represents a 13% error. In five of six dogs, there was a high degree of concordance, or agreement, between the 2 methods, with coefficients >0.9. The USCOM provided reliable measurements of cardiac output over a wide range of values. Clinical trials are needed to validate the device in humans.
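The comparison in this study rests on Bland–Altman limits of agreement and Lin's concordance correlation coefficient; as a hedged illustration (not the authors' analysis code), both statistics can be computed from paired readings as follows.

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two paired methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

def lin_ccc(a, b):
    """Lin's concordance correlation coefficient between two paired methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cov = np.cov(a, b, ddof=1)[0, 1]
    return 2 * cov / (a.var(ddof=1) + b.var(ddof=1) + (a.mean() - b.mean()) ** 2)
```

Applied to paired flowprobe and USCOM readings, these functions would yield the kind of bias, limits-of-agreement and concordance figures quoted above.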