939 results for Measurement error models
Abstract:
We propose a new method, based on inertial sensors, to automatically measure at high frequency the durations of the main phases of ski jumping (i.e. take-off release, take-off, and early flight). The kinematics of the ski jumping movement were recorded by four inertial sensors, attached to the thigh and shank of junior athletes, for 40 jumps performed in indoor conditions and 36 jumps in field conditions. An algorithm was designed to detect temporal events from the recorded signals and to estimate the duration of each phase. These durations were evaluated against a reference camera-based motion capture system and against video observations by trainers. The precision for the take-off release and take-off durations (indoor < 39 ms, outdoor = 27 ms) can be considered technically valid for performance assessment. The errors for early flight duration (indoor = 22 ms, outdoor = 119 ms) were comparable to the trainers' variability and should be interpreted with caution. No significant changes in the error were noted between indoor and outdoor conditions, and individual jumping technique did not influence the error of take-off release and take-off. Therefore, the proposed system can provide valuable information for performance evaluation of ski jumpers during training sessions.
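The abstract names the event-detection algorithm without giving its rules. A minimal sketch of how phase durations could be derived from a single angular-velocity trace; the sampling rate, threshold, and crossing logic are illustrative assumptions, not the authors' algorithm.

import numpy as np

def phase_durations(gyro, fs, threshold):
    """Estimate phase durations from a shank angular-velocity signal.

    Sketch only: events are taken as threshold crossings of the
    sagittal-plane angular velocity; the paper's actual detection
    rules are not stated in the abstract.
    """
    above = gyro > threshold
    # rising and falling threshold crossings (sample indices)
    onsets = np.flatnonzero(~above[:-1] & above[1:]) + 1
    offsets = np.flatnonzero(above[:-1] & ~above[1:]) + 1
    # pair each onset with the next offset; convert samples -> seconds
    return [(off - on) / fs for on, off in zip(onsets, offsets) if off > on]

# usage: 500 Hz synthetic signal with one burst around t = 1 s
fs = 500.0
t = np.arange(0, 2, 1 / fs)
gyro = 300.0 * np.exp(-((t - 1.0) ** 2) / 0.01)  # deg/s
print(phase_durations(gyro, fs, threshold=100.0))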
Abstract:
Employing an endogenous growth model with human capital, this paper explores how productivity shocks in the goods and human capital producing sectors contribute to explaining aggregate fluctuations in output, consumption, investment and hours. Given the importance of accounting for both the dynamics and the trends in the data not captured by the theoretical growth model, we introduce a vector error correction model (VECM) of the measurement errors and estimate the model’s posterior density function using Bayesian methods. To contextualize our findings with those in the literature, we also assess whether the endogenous growth model or the standard real business cycle model better explains the observed variation in these aggregates. In addressing these issues we contribute to both the methods of analysis and the ongoing debate regarding the effects of innovations to productivity on macroeconomic activity.
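The abstract does not spell out the measurement-error VECM; a standard reduced-rank error-correction form, which such a model typically instantiates (the authors' exact specification may differ), is

$$\Delta y_t = \alpha\beta' y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i\,\Delta y_{t-i} + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0,\Sigma),$$

where $y_t$ stacks the measurement errors for output, consumption, investment and hours, $\beta$ collects the cointegrating relations capturing trends the growth model misses, and $\alpha$ the loadings; the lag order $p$ is left to the Bayesian estimation.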
Abstract:
Indirect calorimetry based on respiratory exchange measurement has been successfully used since the beginning of the century to obtain an estimate of heat production (energy expenditure) in human subjects and animals. The errors inherent to this classical technique can stem from various sources: 1) the model of calculation and its assumptions, 2) the calorimetric factors used, 3) technical factors and 4) human factors. The physiological and biochemical factors influencing the interpretation of calorimetric data include a change in the size of the bicarbonate and urea pools and the accumulation or loss (via breath, urine or sweat) of intermediary metabolites (gluconeogenesis, ketogenesis). More recently, respiratory gas exchange data have been used to estimate substrate utilization rates in various physiological and metabolic situations (fasting, post-prandial state, etc.). It should be recalled that indirect calorimetry provides an index of overall substrate disappearance rates, which is incorrectly assumed to be equivalent to substrate "oxidation" rates. Unfortunately, there is no adequate gold standard to validate whole-body substrate "oxidation" rates; this contrasts with the "validation" of heat production by indirect calorimetry through direct calorimetry under strict thermal equilibrium conditions. Tracer techniques using stable (or radioactive) isotopes represent an independent way of assessing substrate utilization rates. When carbohydrate metabolism is measured with both techniques, indirect calorimetry generally provides glucose "oxidation" rates consistent with isotopic tracers, but only when certain metabolic processes (such as gluconeogenesis and lipogenesis) are minimal and/or when the respiratory quotients are not at the extremes of the physiological range. However, it is believed that the tracer techniques underestimate true glucose "oxidation" rates owing to their failure to account for glycogenolysis in tissues storing glucose, since this glucose escapes the systemic circulation. A major advantage of isotopic techniques is that they can estimate (given certain assumptions) various metabolic processes (such as gluconeogenesis) in a noninvasive way. Furthermore, when a fourth substrate is administered in addition to the 3 macronutrients (such as ethanol), isotopic quantification of substrate "oxidation" allows one to eliminate the inherent assumptions made by indirect calorimetry. In conclusion, isotopic tracer techniques and indirect calorimetry should be considered complementary, in particular since the tracer techniques require the measurement of carbon dioxide production obtained by indirect calorimetry. However, it should be kept in mind that the assessment of substrate oxidation by indirect calorimetry may involve large errors, particularly over short periods of time. By indirect calorimetry, energy expenditure (heat production) is calculated with substantially less error than substrate oxidation rates.
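The classical calculation referred to here is usually the Weir equation; one standard form (a textbook value, not quoted from this paper) is

$$\dot{EE}\ [\text{kcal/min}] = 3.941\,\dot{V}_{\mathrm{O}_2} + 1.106\,\dot{V}_{\mathrm{CO}_2} - 2.17\,\dot{n}_N,$$

with $\dot{V}_{\mathrm{O}_2}$ and $\dot{V}_{\mathrm{CO}_2}$ in L/min and $\dot{n}_N$ the urinary nitrogen excretion in g/min. The protein term is the one most often dropped in practice, which illustrates the "model of calculation and assumptions" error source listed above.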
Abstract:
Digital holographic microscopy (DHM) allows optical-path-difference (OPD) measurements with nanometric accuracy. OPD induced by transparent cells depends on both the refractive index (RI) of cells and their morphology. This Letter presents a dual-wavelength DHM that allows us to separately measure both the RI and the cellular thickness by exploiting an enhanced dispersion of the perfusion medium achieved by the utilization of an extracellular dye. The two wavelengths are chosen in the vicinity of the absorption peak of the dye, where the absorption is accompanied by a significant variation of the RI as a function of the wavelength.
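A schematic of the decoupling principle, under the simplifying assumption (made for this sketch, not stated in the abstract) that the cell's own dispersion between the two nearby wavelengths is negligible:

$$\mathrm{OPD}(\lambda_i) = \bigl(n_c - n_m(\lambda_i)\bigr)\,d,\quad i=1,2 \;\Rightarrow\; d = \frac{\mathrm{OPD}(\lambda_1) - \mathrm{OPD}(\lambda_2)}{n_m(\lambda_2) - n_m(\lambda_1)}, \qquad n_c = n_m(\lambda_1) + \frac{\mathrm{OPD}(\lambda_1)}{d},$$

where $d$ is the cellular thickness, $n_c$ the intracellular RI, and $n_m(\lambda)$ the dye-enhanced dispersion of the perfusion medium; the enhanced difference $n_m(\lambda_2) - n_m(\lambda_1)$ near the dye's absorption peak is what makes the inversion well conditioned.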
Abstract:
Interviewing preschool children who are victims of sexual abuse and/or family maltreatment: effectiveness of forensic interview models. Interviewing preschool children who have lived through a traumatic situation is a complex task which, within forensic psychological assessment, requires a clearly delimited, well-defined and well-timed protocol. To this end, three interview protocols were selected: the Protocol for Minors (PM) of Bull and Birch; the model of the National Institute of Child Health and Human Development (NICHD) of Michael Lamb, from which the EASI (Evaluación del Abuso Sexual Infantojuvenil) was developed; and the Cognitive Interview (CI) of Fisher and Geiselman. The starting hypothesis was that these models yield different volumes of information from preschool children. Accordingly, the objectives were to determine which interview model yields the largest volume of information with the most correct details and the fewest errors, to design an interview model of our own, and to reach consensus on that model. Practical outlines are included to facilitate the opening, development and closing of the forensic interview. The methodology reproduced the child/traumatic-event pairing through the viewing and explanation of an emotionally significant event that children can easily identify with: a bicycle accident in which a child falls, gets hurt, bleeds, and is treated by his father. We then interviewed 135 children from P3, P4 and P5 (the 3-, 4- and 5-year-old preschool classes) using the three interview models, confronting them with a specific demand: to remember and narrate this event. It was concluded that, when an appropriate interview model is used with preschool children, the level of correct recall ranges between 70% and 90%, which supports confidence in children's memories. The percentage of incorrect statements by preschool children was found to be minimal, around 5-6%. The study stresses the need to establish the rules of the interview thoroughly and, finally, highlights the ineffectiveness of the memory techniques of the cognitive interview with P3 and P4 children. With P5 children, benefits begin to appear thanks to the context reinstatement (CR) technique, the other techniques remaining beyond the comprehension and use of children at these ages.
Abstract:
This study aimed to design and validate the measurement of ankle kinetics (force, moment, and power) during consecutive gait cycles and in the field using an ambulatory system. The system consists of a plantar pressure insole and inertial sensors (3D gyroscopes and 3D accelerometers) on the foot and shank. To test it, 12 patients and 10 healthy elderly subjects wore shoes embedding the system and walked repeatedly across a gait lab equipped with a force plate surrounded by seven cameras, considered the reference system. The participants then walked two 50-meter trials in which only the ambulatory system was used. The ankle force components and sagittal ankle moment measured by the ambulatory system showed a correlation coefficient (R) of more than 0.94 and a normalized RMS error (NRMSE) of less than 13% in comparison with the reference system, for both patients and healthy subjects. The transverse ankle moment and ankle power showed R > 0.85 and NRMSE < 23%. These parameters also showed high repeatability (CMC > 0.7). In contrast, the coronal ankle moment demonstrated high error and lower repeatability. Except for the coronal moment, the kinetic features obtained by the ambulatory system could distinguish the patients with ankle osteoarthritis from healthy subjects when measured in the 50-meter trials. The proposed ambulatory system is easily accessible for most clinics and can assess the main ankle kinetic quantities with acceptable error and repeatability for clinical evaluations. It is therefore suggested for field measurement in clinical applications.
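A sketch of the two agreement metrics named in the abstract; the exact normalization used in the paper (range, mean, or peak) is an assumption here, and the range is used.

import numpy as np

def agreement_metrics(reference, ambulatory):
    """Correlation coefficient R and range-normalized RMS error (NRMSE)."""
    r = np.corrcoef(reference, ambulatory)[0, 1]
    rmse = np.sqrt(np.mean((reference - ambulatory) ** 2))
    nrmse = rmse / (reference.max() - reference.min())
    return r, nrmse

# usage with synthetic "sagittal ankle moment" curves over one gait cycle
t = np.linspace(0, 1, 101)
ref = 1.4 * np.sin(np.pi * t) ** 2          # reference (force plate + cameras)
amb = ref + np.random.normal(0, 0.05, 101)  # ambulatory estimate with noise
print(agreement_metrics(ref, amb))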
Abstract:
Solving multi-stage oligopoly models by backward induction can easily become a complex task when firms are multi-product and demands are derived from a nested logit framework. This paper shows that under the assumption that within-segment firm shares are equal across segments, the analytical expression for equilibrium profits can be substantially simplified. The size of the error arising when this condition does not hold perfectly is also computed. Through numerical examples, it is shown that the error is rather small in general. Therefore, using this assumption allows one to gain analytical tractability in a class of models that has been used to approach relevant policy questions, such as firm entry into an industry or the relation between competition and location. The simplifying approach proposed in this paper is aimed at helping to improve these types of models so that they yield more accurate recommendations.
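For reference, the demand structure the paper works with has the standard two-level nested logit form (textbook notation, not the paper's):

$$P_j = P_{j|g}\,P_g,\qquad P_{j|g} = \frac{e^{\delta_j/\lambda_g}}{\sum_{k\in g} e^{\delta_k/\lambda_g}},\qquad P_g = \frac{e^{\lambda_g I_g}}{\sum_h e^{\lambda_h I_h}},\qquad I_g = \ln\!\sum_{k\in g} e^{\delta_k/\lambda_g},$$

where $g$ indexes segments (nests), $\delta_j$ is product $j$'s mean utility and $\lambda_g$ the within-nest parameter. The paper's simplifying condition amounts to assuming that each multi-product firm commands the same share of the within-segment probability mass $P_{j|g}$ in every segment $g$.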
Abstract:
Predictive species distribution modelling (SDM) has become an essential tool in biodiversity conservation and management. The choice of grain size (resolution) of environmental layers used in modelling is one important factor that may affect predictions. We applied 10 distinct modelling techniques to presence-only data for 50 species in five different regions, to test whether: (1) a 10-fold coarsening of resolution affects the predictive performance of SDMs, and (2) any observed effects are dependent on the type of region, modelling technique, or species considered. Results show that a 10-fold change in grain size does not severely affect predictions from species distribution models. The overall trend is towards degradation of model performance, but improvement can also be observed. Changing grain size does not equally affect models across regions, techniques, and species types. The strongest effect is on regions and species types, with tree species in the data sets (regions) with highest locational accuracy being most affected. Changing grain size had little influence on the ranking of techniques: boosted regression trees remain best at both resolutions. The number of occurrences used for model training had an important effect, with larger sample sizes resulting in better models, which tended to be more sensitive to grain. The effect of grain change was only noticeable for models reaching sufficient performance and/or with initial data that have an intrinsic error smaller than the coarser grain size.
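A minimal sketch of the 10-fold grain coarsening tested in the study; the authors' actual aggregation rule (mean, majority, etc.) is not given in the abstract, so block means are an assumption.

import numpy as np

def coarsen(layer, factor=10):
    """Coarsen an environmental raster by block-averaging."""
    h, w = layer.shape
    h, w = h - h % factor, w - w % factor          # trim to a multiple of factor
    blocks = layer[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

fine = np.random.rand(1000, 1000)   # e.g. a 100 m resolution layer
coarse = coarsen(fine)              # -> 1 km resolution, shape (100, 100)
print(fine.shape, coarse.shape)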
Abstract:
A survey was undertaken among Swiss occupational health and safety specialists in 2004 to identify uses, difficulties, and possible developments of exposure models. Occupational hygienists (121), occupational physicians (169), and safety specialists (95) were surveyed with an in-depth questionnaire. The results indicate that models are not used much in practice in Switzerland and remain reserved to research groups focusing on specific topics. However, various determinants of exposure are often considered important by professionals (emission rate, work activity) and are in some cases recorded and used (room parameters, operator activity). These parameters cannot be directly included in present models. Nevertheless, more than half of the occupational hygienists think that it is important to develop quantitative exposure models. Among research institutions, however, there is great interest in the use of models to solve problems which are difficult to address with direct measurements, i.e. retrospective exposure assessment for specific clinical cases and prospective evaluation for new situations, or estimation of the effect of selected parameters. In a recent study of cases of acute pulmonary toxicity following waterproofing-spray exposure, exposure models were used to reconstruct the exposure of a group of patients. Finally, in the context of exposure prediction, it is also important to report on a measurement database that has existed in Switzerland since 1991.
Abstract:
Interaction effects are usually modeled by means of moderated regression analysis. Structural equation models with non-linear constraints make it possible to estimate interaction effects while correcting for measurement error. From the various specifications, Jöreskog and Yang's (1996, 1998), likely the most parsimonious, has been chosen and further simplified. Up to now, only direct effects have been specified, thus wasting much of the capability of the structural equation approach. This paper presents and discusses an extension of Jöreskog and Yang's specification that can handle direct, indirect and interaction effects simultaneously. The model is illustrated by a study of the effects of an interactive style of use of budgets on both company innovation and performance.
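The kind of specification being simplified descends from the Kenny-Judd model; its core, in generic notation (a sketch, not the authors' exact constraint set), is a structural equation with a latent product term plus linear measurement equations:

$$\eta = \alpha + \gamma_1\xi_1 + \gamma_2\xi_2 + \gamma_3\,\xi_1\xi_2 + \zeta, \qquad x_i = \tau_i + \lambda_i\,\xi_{j(i)} + \delta_i,$$

where the $\xi$'s are error-free latent predictors and the $x_i$ their fallible indicators. The non-linear constraints tie the moments of the product indicators to the loadings $\lambda_i$ and error variances, so that the interaction coefficient $\gamma_3$ is estimated free of measurement-error attenuation, unlike in moderated regression.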
Abstract:
The aim of our study was to provide an innovative headspace-gas chromatography-mass spectrometry (HS-GC-MS) method applicable for the routine determination of blood CO concentration in forensic toxicology laboratories. The main drawback of the GC/MS methods discussed in the literature for CO measurement is the absence of a specific CO internal standard necessary for performing quantification. Even if a stable isotope of CO is commercially available in the gaseous state, it is essential to develop a safer method that limits the manipulation of gaseous CO and precisely controls the injected amount of CO for spiking and calibration. To avoid the manipulation of a stable isotope-labeled gas, we have chosen to generate in situ, in a vial, a labeled internal standard gas ((13)CO) formed by the reaction of labeled formic acid (H(13)COOH) with sulfuric acid. As sulfuric acid can also be employed to liberate CO from whole blood, the procedure allows for the liberation of CO simultaneously with the generation of (13)CO. This method allows for precise measurement of blood CO concentrations from a small amount of blood (10 μL). Finally, this method was applied to measure the CO concentration of intoxicated human blood samples from autopsies.
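The in-vial generation relies on the classical dehydration of formic acid by concentrated sulfuric acid:

$$\mathrm{H^{13}COOH} \;\xrightarrow{\;\mathrm{H_2SO_4}\;}\; \mathrm{{}^{13}CO} + \mathrm{H_2O},$$

so that the analyte ($m/z$ 28) and the internal standard ($m/z$ 29) end up in the same headspace and can be quantified from their ion ratio in a single injection; the specific ions monitored are an inference from the masses of CO and (13)CO, not a detail quoted from the abstract.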
Abstract:
BACKGROUND Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason's taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. METHODS Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20%, and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician's initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physician's perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of the heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. DISCUSSION This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies which allow that process to be recorded immediately.
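The planned regression step can be written out generically; the covariate names below are hypothetical placeholders suggested by the abstract (heuristics identified in the audit, plus the situational factors), not the study's actual variable coding:

$$\operatorname{logit}\Pr(\text{diagnostic error}) = \beta_0 + \beta_1\,\text{Repr} + \beta_2\,\text{Avail} + \beta_3\,\text{Anchor} + \beta_4\,\text{Overwork} + \beta_5\,\text{Fatigue} + \beta_6\,\text{Experience}.$$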
Abstract:
The basis set superposition error-free second-order Møller-Plesset perturbation theory of intermolecular interactions was studied. The difficulties of the counterpoise (CP) correction in open-shell systems were also discussed. The calculations were performed by a program which was used for testing the new variants of the theory. It was shown that the CP correction for the diabatic surfaces should be preferred to the adiabatic ones.
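For context, the CP scheme under discussion is, in the standard Boys-Bernardi form,

$$\Delta E^{\mathrm{CP}}_{\mathrm{int}} = E^{AB}_{AB}(AB) - E^{AB}_{A}(AB) - E^{AB}_{B}(AB),$$

where the superscript denotes the basis set (the full dimer basis $AB$ throughout, so each monomer is computed with the partner's ghost functions) and the subscript the fragment evaluated, each at the dimer geometry $(AB)$; this removes the artificial stabilization a fragment gains from borrowing the partner's basis functions.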
Abstract:
In vivo localized proton magnetic resonance spectroscopy (1H MRS) has become a powerful and unique technique for non-invasively investigating the brain metabolism of rodents and humans. The main goal of 1H MRS is the reliable quantification of the concentrations of metabolites (the neurochemical profile) in a well-defined region of the brain. The availability of very high magnetic field strengths combined with the possibility of acquiring spectra at very short echo times has dramatically increased the number of constituents of the neurochemical profile. The quantification of spectra measured at short echo times is complicated by the presence of macromolecule signals, of particular importance at high magnetic fields. An error in the macromolecule estimation can lead to substantial errors in the obtained neurochemical profile. The purpose of the present review is to provide an overview of high-field 1H MRS methods with a focus on metabolite quantification, in particular the handling of macromolecule signals. Three main approaches to handling macromolecule signals are described, namely mathematical estimation of the macromolecules, measurement of the macromolecules in vivo, and direct acquisition of the in vivo spectrum without the macromolecule contribution.
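Quantification at short echo time typically proceeds by fitting a linear combination of metabolite basis spectra plus a macromolecule term; schematically (a generic model, not any specific package's),

$$S(\nu) \approx \sum_{m} c_m\,\phi_m(\nu) + M(\nu) + b(\nu),$$

where $\phi_m$ are metabolite basis functions, $c_m$ the concentrations sought (the neurochemical profile), $M(\nu)$ the macromolecule contribution (mathematically parameterized, measured in vivo, or suppressed at acquisition, per the three approaches reviewed) and $b(\nu)$ a slowly varying baseline. Bias in $M(\nu)$ propagates directly into the fitted $c_m$, which is the error source the review emphasizes.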
Abstract:
Background: The COSMIN checklist (COnsensus-based Standards for the selection of health status Measurement INstruments) was developed in an international Delphi study to evaluate the methodological quality of studies on measurement properties of health-related patient reported outcomes (HR-PROs). In this paper, we explain our choices for the design requirements and preferred statistical methods for which no evidence is available in the literature or on which the Delphi panel members had substantial discussion. Methods: The issues described in this paper are a reflection of the Delphi process in which 43 panel members participated. Results: The topics discussed are internal consistency (relevance for reflective and formative models, and distinction from unidimensionality), content validity (judging relevance and comprehensiveness), hypothesis testing as an aspect of construct validity (specificity of hypotheses), criterion validity (relevance for PROs), and responsiveness (concept and relation to validity, and (in)appropriate measures). Conclusions: We expect that this paper will contribute to a better understanding of the rationale behind the items, thereby enhancing the acceptance and use of the COSMIN checklist.
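Internal consistency, the first topic above, is conventionally quantified for reflective models by Cronbach's alpha (given here for context; the checklist itself addresses when such a statistic is appropriate):

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{Y_i}}{\sigma^2_X}\right),$$

where $k$ is the number of items, $\sigma^2_{Y_i}$ the variance of item $i$ and $\sigma^2_X$ that of the total score. Alpha presupposes a reflective (effect-indicator) model and is uninformative for formative ones, which is precisely the reflective/formative distinction the panel debated.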