951 results for "measurement data"
Abstract:
One in a series of six data briefings based on regional-level analysis of data from the National Child Measurement Programme (NCMP) undertaken by the National Obesity Observatory (NOO). The briefings are intended to complement the headline results for the region published in January 2010, at Quick Link 20510.
Abstract:
The paper describes how to integrate audience measurement and site visibility, the two main research approaches in outdoor advertising research, into a single concept. Details are given on how GPS is used on a large scale in Switzerland for mobility analysis and audience measurement. Furthermore, the development of a software solution is introduced that allows the integration of all mobility data and poster location information. Finally, a model and its results are presented for the calculation of the coverage of individual poster campaigns and of the number of contacts generated by each billboard.
Abstract:
In this paper we examine the problem of compositional data from a different starting point. Chemical compositional data, as used in provenance studies on archaeological materials, will be approached from measurement theory. The results will show, in a very intuitive way, that chemical data can only be treated by using the approach developed for compositional data. It will be shown that compositional data analysis is a particular case in projective geometry, when the projective coordinates are in the positive orthant and have the properties of logarithmic interval metrics. Moreover, it will be shown that this approach can be extended to a very large number of applications, including shape analysis. This will be exemplified with a case study in the architecture of Early Christian churches dated to the 5th-7th centuries AD.
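The log-ratio treatment the abstract refers to can be illustrated with a minimal sketch (an illustrative helper, not code from the paper): the centered log-ratio (clr) transform maps a composition from the simplex into real space, where the logarithmic interval metric becomes the ordinary Euclidean one.

```python
import numpy as np

def clr(x):
    """Centered log-ratio (clr) transform of a composition.

    Maps a vector of positive parts from the simplex into real space,
    where ordinary Euclidean statistics apply. Illustrative helper,
    not the paper's code.
    """
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x)))   # geometric mean of the parts
    return np.log(x / g)

# Scale invariance: closing a chemical composition to 100%, ppm, or
# proportions leaves the clr coordinates unchanged.
comp = np.array([60.0, 30.0, 10.0])
assert np.allclose(clr(comp), clr(comp / comp.sum()))
```

Because clr coordinates depend only on ratios between parts, the choice of closure constant cannot affect any subsequent analysis, which is exactly the property provenance studies rely on.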
Abstract:
The integration of geophysical data into the subsurface characterization problem has been shown in many cases to significantly improve hydrological knowledge by providing information at spatial scales and locations that are unattainable using conventional hydrological measurement techniques. The investigation of exactly how much benefit can be brought by geophysical data in terms of its effect on hydrological predictions, however, has received considerably less attention in the literature. Here, we examine the potential hydrological benefits brought by a recently introduced simulated annealing (SA) conditional stochastic simulation method designed for the assimilation of diverse hydrogeophysical data sets. We consider the specific case of integrating crosshole ground-penetrating radar (GPR) and borehole porosity log data to characterize the porosity distribution in saturated heterogeneous aquifers. In many cases, porosity is linked to hydraulic conductivity and thus to flow and transport behavior. To perform our evaluation, we first generate a number of synthetic porosity fields exhibiting varying degrees of spatial continuity and structural complexity. Next, we simulate the collection of crosshole GPR data between several boreholes in these fields, and the collection of porosity log data at the borehole locations. The inverted GPR data, together with the porosity logs, are then used to reconstruct the porosity field using the SA-based method, along with a number of other more elementary approaches. Assuming that the grid-cell-scale relationship between porosity and hydraulic conductivity is unique and known, the porosity realizations are then used in groundwater flow and contaminant transport simulations to assess the benefits and limitations of the different approaches.
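The swap-based flavour of SA conditional simulation described here can be conveyed with a minimal 1-D sketch (purely synthetic data, not the authors' implementation): exchanging two non-conditioned cells preserves the marginal histogram exactly, while the Metropolis rule anneals the realization's variogram toward a target, with "borehole" cells held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_variogram(field, max_lag=10):
    """Experimental semivariogram of a 1-D field at lags 1..max_lag."""
    return np.array([0.5 * np.mean((field[h:] - field[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

# Hypothetical setup: a synthetic "true" porosity profile supplies the
# target statistics (standing in for structure inferred from GPR
# tomograms); a few cells are conditioned to borehole-log values.
n = 200
truth = 0.30 + np.cumsum(rng.normal(0.0, 0.01, n))
target = exp_variogram(truth)
cond_idx = np.array([0, 50, 100, 150, 199])      # "borehole" locations
free = np.setdiff1d(np.arange(n), cond_idx)

# Initial realization: conditioning cells fixed, remaining values
# randomly shuffled (so the marginal distribution is honored exactly).
field = truth.copy()
field[free] = rng.permutation(truth[free])

def objective(f):
    return float(np.sum((exp_variogram(f) - target) ** 2))

# Swap-based simulated annealing with geometric cooling.
T = 1e-6
cost = start_cost = objective(field)
for _ in range(20000):
    i, j = rng.choice(free, size=2, replace=False)
    field[i], field[j] = field[j], field[i]
    new_cost = objective(field)
    if new_cost < cost or rng.random() < np.exp((cost - new_cost) / T):
        cost = new_cost
    else:
        field[i], field[j] = field[j], field[i]  # reject: undo the swap
    T *= 0.999                                   # geometric cooling
```

The swap move is what makes the method "conditional": borehole values never change, and the histogram of the realization is preserved by construction, so only the spatial structure is optimized.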
Abstract:
The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases has been the use of so-called ipsative items. Ipsative items are items that require the respondent to make between-scale comparisons within each item. The selected option determines to which scale the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items, every respondent is allotted an equal amount, i.e. the same total score, which each can distribute differently over the scales. This type of response format therefore yields data that can be considered compositional from their inception. Methodologically oriented psychologists have heavily criticized this type of item format, since the resulting data are also marked by the associated unfavourable statistical properties. Nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data collection methods. The ultimate objective is to formulate a guideline on when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, which were administered to 502 first-year students in psychology according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios. The addition of the second comparison not only offers the advantage of a better-balanced research strategy; in principle it also allows for parametric testing in the evaluation.
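The compositional nature of ipsative scores is easy to demonstrate with a tiny synthetic example (three hypothetical scales, made-up respondents): because each item forces exactly one point onto one scale, every respondent's scale scores sum to the same total, losing one degree of freedom.

```python
import numpy as np

# Hypothetical ipsative questionnaire: each of 30 items allots one
# point to exactly one of three scales.
rng = np.random.default_rng(1)
n_items, n_scales = 30, 3
choices = rng.integers(0, n_scales, size=(5, n_items))   # 5 respondents
scores = np.stack([(choices == k).sum(axis=1)
                   for k in range(n_scales)], axis=1)

# The constant row sum is what makes the data compositional: raw scale
# scores are negatively interdependent by construction, which is the
# statistical objection the abstract mentions.
assert (scores.sum(axis=1) == n_items).all()
```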
Abstract:
In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to eliminate this bias completely, but large samples are needed. Partial Least Squares is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on the data of Bisbe and Otley (in press), who examine the interaction effect of innovation and style of use of budgets on performance. Sizeable differences emerge between OLS and disattenuated regression.
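Disattenuated regression rests on the classical correction for attenuation; a one-line sketch (the reliability and correlation values below are made up for illustration):

```python
def disattenuated_r(r_xy, rel_x, rel_y):
    """Classical correction for attenuation: divide an observed
    correlation by the square root of the product of the two scales'
    reliabilities (e.g. Cronbach's alpha). A small-sample alternative
    to SEM, as the abstract suggests."""
    return r_xy / (rel_x * rel_y) ** 0.5

# With reliabilities of 0.80, an observed correlation of 0.32
# corresponds to a disattenuated correlation of 0.40.
```

The correction grows quickly as reliabilities fall, which is why OLS on error-laden summated scales and disattenuated regression can diverge sizeably.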
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approach, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of the association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjustment for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
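The two-part structure can be sketched as follows (synthetic data and crude within-tertile empirical fits standing in for the EPIC calibration model; every variable name here is hypothetical): part 1 models the probability of a nonzero recall, part 2 models the amount among consumers, and the calibrated intake is their product.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical single-replicate reference data: a 24-hour recall of an
# episodically consumed food, with many true zeroes.
n = 1000
ffq = rng.gamma(2.0, 50.0, n)                    # error-prone FFQ intake
p_consume = 1.0 / (1.0 + np.exp(-(ffq - 100.0) / 50.0))
consumed = rng.random(n) < p_consume
amount = np.where(consumed,
                  np.exp(3.0 + 0.004 * ffq + rng.normal(0.0, 0.3, n)),
                  0.0)

# Two-part calibration, crudely fit within FFQ tertiles:
#   part 1: P(amount > 0)      part 2: E[amount | amount > 0]
# calibrated intake = p * mu, i.e. E[amount].
tertile = np.digitize(ffq, np.quantile(ffq, [1 / 3, 2 / 3]))
calibrated = np.empty(n)
for t in range(3):
    m = tertile == t
    p_hat = consumed[m].mean()                   # part 1
    mu_hat = amount[m][consumed[m]].mean()       # part 2
    calibrated[m] = p_hat * mu_hat               # combined prediction
```

Splitting the model this way is what absorbs the excess zeroes: a single linear calibration on the raw recalls would have to explain the zero spike and the positive amounts with one error distribution.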
Abstract:
A workshop was convened to discuss best practices for the assessment of drug-induced liver injury (DILI) in clinical trials. In a breakout session, workshop attendees discussed necessary data elements and standards for the accurate measurement of DILI risk associated with new therapeutic agents in clinical trials. There was agreement that in order to achieve this goal the systematic acquisition of protocol-specified clinical measures and lab specimens from all study subjects is crucial. In addition, standard DILI terms that address the diverse clinical and pathologic signatures of DILI were considered essential. There was a strong consensus that clinical and lab analyses necessary for the evaluation of cases of acute liver injury should be consistent with the US Food and Drug Administration (FDA) guidance on pre-marketing risk assessment of DILI in clinical trials issued in 2009. A recommendation that liver injury case review and management be guided by clinicians with hepatologic expertise was made. Of note, there was agreement that emerging DILI signals should prompt the systematic collection of candidate pharmacogenomic, proteomic and/or metabonomic biomarkers from all study subjects. The use of emerging standardized clinical terminology, CRFs and graphic tools for data review to enable harmonization across clinical trials was strongly encouraged. Many of the recommendations made in the breakout session are in alignment with those made in the other parallel sessions on methodology to assess clinical liver safety data, causality assessment for suspected DILI, and liver safety assessment in special populations (hepatitis B, C, and oncology trials). Nonetheless, a few outstanding issues remain for future consideration.
Abstract:
Our essay aims at studying suitable statistical methods for the clustering of compositional data in situations where observations are constituted by trajectories of compositional data, that is, by sequences of composition measurements along a domain. Observed trajectories are known as "functional data" and several methods have been proposed for their analysis. In particular, methods for clustering functional data, known as Functional Cluster Analysis (FCA), have been applied by practitioners and scientists in many fields. To our knowledge, FCA techniques have not been extended to cope with the problem of clustering compositional data trajectories. In order to extend FCA to the analysis of compositional data, its clustering techniques have to be adapted by using a suitable compositional algebra. The present work centres on the following question: given a sample of compositional data trajectories, how can we formulate a segmentation procedure giving homogeneous classes? To address this problem we follow the steps described below. First of all, we adapt the well-known spline smoothing techniques to cope with the smoothing of compositional data trajectories. In fact, an observed curve can be thought of as the sum of a smooth part plus some noise due to measurement errors. Spline smoothing techniques are used to isolate the smooth part of the trajectory; clustering algorithms are then applied to these smooth curves. The second step consists in building suitable metrics for measuring the dissimilarity between trajectories: we propose a metric that accounts for differences in both shape and level, and a metric accounting for differences in shape only. A simulation study is performed in order to evaluate the proposed methodologies, using both hierarchical and partitional clustering algorithms. The quality of the obtained results is assessed by means of several indices.
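Two of the ingredients described, moving trajectories into a real coordinate space via a compositional algebra, and defining shape-and-level versus shape-only metrics, can be sketched as follows (an illustrative sketch with an isometric log-ratio transform and a moving-average stand-in for the spline smoothing step, not the authors' code):

```python
import numpy as np

def ilr(comp):
    """Isometric log-ratio coordinates for 3-part compositions (one row
    per point along the domain). Trajectories move into real space,
    where smoothing and Euclidean metrics are legitimate."""
    c = np.asarray(comp, dtype=float)
    z1 = np.sqrt(0.5) * np.log(c[:, 0] / c[:, 1])
    z2 = np.sqrt(2.0 / 3.0) * np.log(np.sqrt(c[:, 0] * c[:, 1]) / c[:, 2])
    return np.column_stack([z1, z2])

def smooth(z, w=5):
    """Moving-average stand-in for the spline smoothing step."""
    k = np.ones(w) / w
    return np.column_stack([np.convolve(z[:, j], k, mode="same")
                            for j in range(z.shape[1])])

def d_shape_and_level(a, b):
    """Metric sensitive to differences in both shape and level."""
    return float(np.sqrt(((a - b) ** 2).sum()))

def d_shape_only(a, b):
    """Centering each trajectory removes its level, leaving shape only."""
    return d_shape_and_level(a - a.mean(axis=0), b - b.mean(axis=0))
```

Two trajectories differing only by a constant shift in ilr space then have zero shape-only distance but a positive shape-and-level distance, mirroring the two proposed metrics.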
Abstract:
Objective: To determine methadone plasma trough and peak concentrations in patients presenting opiate withdrawal symptoms after introduction of nevirapine or efavirenz. To describe the disappearance of these symptoms after methadone titration based on plasma concentrations rather than on the symptoms. Methods: Nine patients undergoing highly active antiretroviral therapy (HAART) and either nevirapine or efavirenz treatment were monitored daily for opiate withdrawal in a specialized drug addiction center. Methadone dose was titrated daily, and plasma concentrations were measured. The data are retrospective (case series). Results: Several patients complained of symptoms such as nausea, vomiting, accelerated intestinal transit, or insomnia. Even after methadone titration based on clinical symptoms, patients and health-care providers trained in infectious disease did not classify these as withdrawal symptoms and considered them as the side effects of HAART or anxiety. Methadone plasma trough concentration showed low levels of (R)- and (R,S)-methadone. Further methadone dose adjustment according to plasma level resulted in the disappearance of these withdrawal symptoms. The daily methadone dose was split when the peak/trough (R)-methadone ratio was more than 2. Conclusions: When introducing efavirenz or nevirapine to patients undergoing methadone treatment, withdrawal symptoms should be monitored, especially those such as insomnia, vomiting, or nausea. Methadone plasma trough and peak measurements can be of value in preventing unnecessary side effects of HAART.
Abstract:
This paper explores biases in the elicitation of utilities under risk and the contribution that generalizations of expected utility can make to the resolution of these biases. We used five methods to measure utilities under risk and found clear violations of expected utility. Of the theories studied, prospect theory was most consistent with our data. The main improvement of prospect theory over expected utility was in comparisons between a riskless and a risky prospect (riskless-risk methods). We observed no improvement over expected utility in comparisons between two risky prospects (risk-risk methods). One explanation for why we found no improvement of prospect theory over expected utility in risk-risk methods may be that there was less overweighting of small probabilities in our study than has commonly been observed.
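The overweighting of small probabilities mentioned at the end is usually modeled with a probability weighting function; a short sketch using the Tversky-Kahneman (1992) form (γ = 0.61 is their published estimate for gains, not a value fitted to this paper's data):

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function.
    Overweights small probabilities and underweights moderate-to-large
    ones, the mechanism the abstract invokes."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Inverse-S shape: a 1% chance is weighted well above 0.01, while a
# 90% chance is weighted below 0.90.
```

Under expected utility the weighting function is the identity, so how far tk_weight departs from the diagonal is one way of quantifying "less overweighting of small probabilities than has commonly been observed".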
Abstract:
A method to evaluate cyclical models that requires neither knowledge of the DGP nor the exact specification of the aggregate decision rules is proposed. We derive robust restrictions in a class of models; we use some to identify structural shocks in the data and others to evaluate the class or to contrast sub-models. The approach has good properties, even in small samples and when the class of models is misspecified. The method is used to sort out the relevance of a certain friction (the presence of rule-of-thumb consumers) in a standard class of models.
Abstract:
Accurate characterization of the spatial distribution of hydrological properties in heterogeneous aquifers at a range of scales is a key prerequisite for reliable modeling of subsurface contaminant transport, and is essential for designing effective and cost-efficient groundwater management and remediation strategies. To this end, high-resolution geophysical methods have shown significant potential to bridge a critical gap in subsurface resolution and coverage between traditional hydrological measurement techniques such as borehole log/core analyses and tracer or pumping tests. An important and still largely unresolved issue, however, is how to best quantitatively integrate geophysical data into a characterization study in order to estimate the spatial distribution of one or more pertinent hydrological parameters, thus improving hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was to first develop a strategy for the assimilation of several types of hydrogeophysical data having varying degrees of resolution, subsurface coverage, and sensitivity to the hydrologic parameter of interest. In this regard, a novel simulated annealing (SA)-based conditional simulation approach was developed and then tested in its ability to generate realizations of porosity given crosshole ground-penetrating radar (GPR) and neutron porosity log data. This was done successfully for both synthetic and field data sets. A subsequent issue that needed to be addressed involved assessing the potential benefits and implications of the resulting porosity realizations in terms of groundwater flow and contaminant transport. This was investigated synthetically, assuming first that the relationship between porosity and hydraulic conductivity was well-defined. Then, the relationship was itself investigated in the context of a calibration procedure using hypothetical tracer test data.
Essentially, the relationship best predicting the observed tracer test measurements was determined given the geophysically derived porosity structure. Both of these investigations showed that the SA-based approach, in general, allows much more reliable hydrological predictions than the other more elementary techniques considered. Further, the developed calibration procedure was seen to be very effective, even at the scale of tomographic resolution, for predictions of transport. This also held true at locations within the aquifer where only geophysical data were available. This is significant because the acquisition of hydrological tracer test measurements is clearly more complicated and expensive than the acquisition of geophysical measurements. Although the above methodologies were tested using porosity logs and GPR data, the findings are expected to remain valid for a large number of pertinent combinations of geophysical and borehole log data of comparable resolution and sensitivity to the hydrological target parameter. Moreover, the obtained results allow us to have confidence in future developments in integration methodologies for geophysical and hydrological data to improve the 3-D estimation of hydrological properties.