100 results for Calibration estimators
Abstract:
The aim of our study was to provide an innovative headspace-gas chromatography-mass spectrometry (HS-GC-MS) method applicable for the routine determination of blood CO concentration in forensic toxicology laboratories. The main drawback of the GC/MS methods discussed in the literature for CO measurement is the absence of a specific CO internal standard necessary for performing quantification. Even though a stable isotope of CO is commercially available in the gaseous state, it is essential to develop a safer method that limits the manipulation of gaseous CO and precisely controls the injected amount of CO for spiking and calibration. To avoid the manipulation of a stable isotope-labeled gas, we chose to generate a labeled internal standard gas ((13)CO) in situ in a vial, formed by the reaction of labeled formic acid (H(13)COOH) with sulfuric acid. As sulfuric acid can also be employed to liberate CO from whole blood, the procedure allows for the liberation of CO simultaneously with the generation of (13)CO. This method allows for precise measurement of blood CO concentrations from a small amount of blood (10 μL). Finally, this method was applied to measure the CO concentration of intoxicated human blood samples from autopsies.
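The quantification step described above can be sketched as a simple linear calibration against the area ratio of the analyte (CO) to the co-generated internal standard ((13)CO): spiked standards define the line, and an unknown is read back from its measured ratio. All numbers below are illustrative, not the study's data.

```python
import numpy as np

def fit_calibration(conc, area_ratio):
    """Least-squares line: area_ratio = slope * conc + intercept."""
    slope, intercept = np.polyfit(conc, area_ratio, 1)
    return slope, intercept

def quantify(area_ratio, slope, intercept):
    """Invert the calibration line to recover a concentration."""
    return (area_ratio - intercept) / slope

# Hypothetical calibration points: concentration vs. CO/13CO peak-area ratio
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
ratio = np.array([0.26, 0.51, 1.02, 1.99, 4.05])

slope, intercept = fit_calibration(conc, ratio)
unknown = quantify(1.50, slope, intercept)  # sample with area ratio 1.50
```

Normalising by an internal standard generated in the same vial cancels vial-to-vial variability in headspace extraction, which is why the ratio, not the raw CO peak area, is calibrated.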
Abstract:
Thanks to decades of research, gait analysis has become an efficient tool. However, mainly due to the price of motion capture systems, standard gait laboratories can measure only a few consecutive steps of ground walking. Recently, wearable systems have been proposed to measure human motion without volume limitation. Although accurate, these systems are incompatible with most existing calibration procedures, and several years of research will be necessary for their validation. A new approach, consisting of using a stationary system with a small capture volume for the calibration procedure and then measuring gait with a wearable system, could be very advantageous. It could benefit from the knowledge related to stationary systems, allow long-distance monitoring and provide new descriptive parameters. The aim of this study was to demonstrate the potential of this approach. Thus, a combined system was proposed to measure 3D lower body joint angles and segmental angular velocities. It was then assessed in terms of reliability with respect to the calibration procedure, repeatability and concurrent validity. The dispersion of the joint angles across calibrations was comparable to that of stationary systems, and good reliability was obtained for the angular velocities. The repeatability results confirmed that mean cycle kinematics of long-distance walks could be used for comparison between subjects, and pointed out the interest of the variability between cycles. Finally, kinematic differences were observed between participants with different ankle conditions. In conclusion, this study demonstrated the potential of a mixed approach for human movement analysis.
Abstract:
The aims of this study were to assess whether high-mobility group box-1 protein (HMGB1) can be determined in biological fluids collected during autopsy and to evaluate its diagnostic potential in identifying sepsis-related deaths. HMGB1 was measured in serum collected during hospitalization as well as in undiluted and diluted postmortem serum and pericardial fluid collected during autopsy in a group of sepsis-related deaths and control cases with noninfectious causes of death. Inclusion criteria consisted of full biological sample availability and a postmortem interval not exceeding 6 h. The preliminary results indicate that HMGB1 levels markedly increase after death. Concentrations beyond the upper limit of the calibration curve were obtained in undiluted postmortem serum in both septic and traumatic control cases. In pericardial fluid, concentrations beyond the upper limit of the calibration curve were found in all cases. These findings suggest that the diagnostic potential of HMGB1 in the postmortem setting is extremely limited due to release of the molecule into the bloodstream after death, rendering antemortem levels difficult or impossible to estimate even after sample dilution.
Abstract:
The aim of the present study was to develop a short form of the Zuckerman-Kuhlman Personality Questionnaire (ZKPQ) with acceptable psychometric properties in four languages: English (United States), French (Switzerland), German (Germany), and Spanish (Spain). The total sample (N = 4,621) was randomly divided into calibration and validation samples. An exploratory factor analysis was conducted in the calibration sample. Eighty items, with loadings equal to or higher than 0.30 on their own factor and lower on the remaining factors, were retained. A confirmatory factor analysis was performed on the surviving items in the validation sample in order to select the best 10 items for each scale. This short version (named ZKPQ-50-CC) presents psychometric properties strongly similar to those of the original version in the four countries. Moreover, the factor structure is nearly equivalent across the four countries, since the congruence indices were all higher than 0.90. It is concluded that the ZKPQ-50-CC presents high cross-language replicability and could be a useful questionnaire for personality research.
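The item-retention rule described above (keep an item only if its loading on its own factor is at least 0.30 and strictly higher than its loadings on every other factor) can be sketched directly. The loading matrix below is invented for illustration, not taken from the ZKPQ analysis.

```python
import numpy as np

def retain_items(loadings, own_factor, threshold=0.30):
    """loadings: (n_items, n_factors) array; own_factor: factor index per item.
    Returns the indices of items passing the retention rule."""
    keep = []
    for i, f in enumerate(own_factor):
        own = abs(loadings[i, f])
        others = np.delete(np.abs(loadings[i]), f)
        if own >= threshold and np.all(own > others):
            keep.append(i)
    return keep

loadings = np.array([
    [0.55, 0.10, 0.05],   # item 0: clean loading on factor 0 -> keep
    [0.25, 0.60, 0.12],   # item 1: clean loading on factor 1 -> keep
    [0.28, 0.05, 0.10],   # item 2: own loading below 0.30 -> drop
    [0.40, 0.45, 0.02],   # item 3: cross-loading is higher -> drop
])
own_factor = [0, 1, 0, 0]
kept = retain_items(loadings, own_factor)  # -> [0, 1]
```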
Abstract:
Raman spectroscopy has become an attractive tool for the analysis of pharmaceutical solid dosage forms. In the present study it is used to ensure the identity of tablets. The two main applications of this method are release of final products in quality control and detection of counterfeits. Twenty-five product families of tablets have been included in the spectral library, and a non-linear classification method, Support Vector Machines (SVMs), has been employed. Two calibrations have been developed in cascade: the first one identifies the product family, while the second one specifies the formulation. A product family comprises different formulations that have the same active pharmaceutical ingredient (API) but in different amounts. Once the tablets have been classified by the SVM model, API peak detection and correlation are applied in order to obtain a specific identification method and, in the future, to allow counterfeits to be discriminated from genuine products. This calibration strategy enables the identification of the 25 product families without error and in the absence of prior information about the sample. Raman spectroscopy coupled with chemometrics is therefore a fast and accurate tool for the identification of pharmaceutical tablets.
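The two-stage cascade described above can be sketched as one SVM that predicts the product family, followed by a per-family SVM that predicts the formulation. This is a minimal sketch assuming scikit-learn; the toy "spectra", family names, and cluster centers are invented, not the study's library of 25 families.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_spectra(center, n=30):
    """Toy 20-point 'spectra' clustered around a constant intensity level."""
    return center + 0.05 * rng.standard_normal((n, 20))

# Two hypothetical families, each with two formulations (same API, amounts differ)
centers = {("A", "A-low"): 0.2, ("A", "A-high"): 0.4,
           ("B", "B-low"): 0.8, ("B", "B-high"): 1.0}
X, fam, form = [], [], []
for (f, m), c in centers.items():
    s = make_spectra(c)
    X.append(s); fam += [f] * len(s); form += [m] * len(s)
X, fam, form = np.vstack(X), np.array(fam), np.array(form)

# Stage 1: one classifier for the product family
family_clf = SVC(kernel="rbf").fit(X, fam)

# Stage 2: one formulation classifier per family, trained on that family only
formulation_clf = {f: SVC(kernel="rbf").fit(X[fam == f], form[fam == f])
                   for f in np.unique(fam)}

def identify(spectrum):
    """Cascade: family first, then the formulation within that family."""
    f = family_clf.predict(spectrum[None, :])[0]
    return f, formulation_clf[f].predict(spectrum[None, :])[0]
```

Splitting the problem this way keeps each second-stage classifier focused on the subtler within-family differences (API amount) instead of the much larger between-family differences.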
Abstract:
In the first part of this research, three stages were defined for a program to increase the information extracted from ink evidence and maximise its usefulness to the criminal and civil justice system. These stages are (a) developing a standard methodology for analysing ink samples by high-performance thin-layer chromatography (HPTLC) in a reproducible way, when ink samples are analysed at different times, in different locations and by different examiners; (b) comparing ink samples automatically and objectively; and (c) defining and evaluating a theoretical framework for the use of ink evidence in a forensic context. This report focuses on the second of the three stages. Using the calibration and acquisition process described in the previous report, mathematical algorithms are proposed to automatically and objectively compare ink samples. The performance of these algorithms is systematically studied under various chemical and forensic conditions using standard performance tests commonly used in biometric studies. The results show that different algorithms are best suited for different tasks. Finally, this report demonstrates how modern analytical and computer technology can be used in the field of ink examination, and how tools developed and successfully applied in other fields of forensic science can help maximise its impact within the field of questioned documents.
Abstract:
Analysis by electron microprobe allows the evaluation of the quantity of Fe3+ in spinels only with considerable errors. The use of a correction equation, based on the calibration of electron microprobe analyses against those carried out with Mössbauer spectroscopy, gives more precise evaluations of Fe3+. In turn, this allows a calculation of the oxygen fugacity in peridotitic xenoliths. The results obtained show that the peridotites of the French Massif Central crystallised under oxygen fugacities higher than those of the Moroccan Anti-Atlas.
Abstract:
Liquid scintillation counting (LSC) is one of the most widely used methods for determining the activity of 241Pu. One of the main challenges of this counting method is the efficiency calibration of the system for the low beta energies of 241Pu (Emax = 20.8 keV). In this paper we compare the two most frequently used methods, the CIEMAT/NIST efficiency tracing (CNET) method and the experimental quench correction curve method. Both methods proved to be reliable, and agree within their uncertainties, for the expected quenching conditions of the sources.
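The experimental quench-correction-curve method mentioned above can be sketched as follows: counting efficiency is measured on standards of known activity but varying quench, a curve is fitted against a quench indicator, and an unknown's count rate is divided by the interpolated efficiency. All numbers below are illustrative, not real 241Pu data.

```python
import numpy as np

# Quench indicator (e.g. an external-standard parameter) vs. measured
# counting efficiency for a set of quenched standards (invented values)
quench = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
efficiency = np.array([0.42, 0.38, 0.33, 0.27, 0.20])

# Fit a second-order polynomial quench-correction curve
coeffs = np.polyfit(quench, efficiency, 2)

def activity_bq(count_rate_cps, sample_quench):
    """Net count rate divided by interpolated efficiency -> activity in Bq."""
    eff = np.polyval(coeffs, sample_quench)
    return count_rate_cps / eff

a = activity_bq(10.0, 0.75)  # hypothetical sample counted at 10 cps
```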
Abstract:
The use of observer-rated scales requires that raters be trained until they have become reliable in using the scales. However, few studies properly report how training in using a given rating scale is conducted or indeed how it should be conducted. This study examined progress in interrater reliability over 6 months of training with two observer-rated scales, the Cognitive Errors Rating Scale and the Coping Action Patterns Rating Scale. The evolution of the intraclass correlation coefficients was modeled using hierarchical linear modeling. Results showed an overall training effect as well as effects of the basic training phase and of the rater calibration phase, the latter being smaller than the former. The results are discussed in terms of implications for rater training in psychotherapy research.
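The reliability quantity tracked above, an intraclass correlation, can be sketched from a subjects-by-raters matrix. The sketch below computes the one-way ANOVA variant, ICC(1,1); the study's exact ICC variant and its hierarchical linear modeling are more involved, and the ratings are invented.

```python
import numpy as np

def icc_oneway(ratings):
    """ICC(1,1) from a (n_subjects, n_raters) ratings matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    # Between-subject and within-subject mean squares (one-way ANOVA)
    ms_between = k * np.sum((ratings.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_within = (np.sum((ratings - ratings.mean(axis=1, keepdims=True)) ** 2)
                 / (n * (k - 1)))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical ratings: 5 subjects scored by 3 trained raters
ratings = np.array([[4, 4, 5],
                    [2, 2, 2],
                    [5, 4, 5],
                    [1, 2, 1],
                    [3, 3, 4]], dtype=float)
icc = icc_oneway(ratings)
```

Tracking this coefficient across training sessions, as the study does over 6 months, shows whether raters are converging on the same use of the scale.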
Abstract:
Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that if the data follow the model, then asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as the influence function of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performances of the WMLs based on each of them are studied. It turns out that in a great variety of situations the WML substantially improves on the initial estimator, both in terms of finite-sample mean square error and in terms of bias under contamination. Besides, the performance of the WML is rather stable under a change of the MDE, even if the MDEs have very different behaviors. Two examples of application of the WML to real data are considered. In both of them, the necessity for a robust estimator is clear: the maximum likelihood estimator is badly corrupted by the presence of a few outliers. This procedure is particularly natural in the discrete distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
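The two-phase idea can be illustrated, in a much-simplified form, for a Poisson model: a robust initial estimate flags outliers via standardised residuals, and a weighted maximum-likelihood estimate is then computed with those points downweighted to zero. This is only a sketch: the paper's initial estimator is a minimum disparity estimator and its weights are adaptive, whereas here the initial estimate is the median and the weights are a fixed hard-rejection rule.

```python
import numpy as np

def weighted_ml_poisson(x, cutoff=3.0):
    x = np.asarray(x, dtype=float)
    # Phase 1: very robust but possibly inefficient initial estimate
    lam0 = np.median(x)
    # Flag outliers via standardised (Pearson-type) residuals
    resid = (x - lam0) / np.sqrt(max(lam0, 1e-9))
    w = (np.abs(resid) <= cutoff).astype(float)   # 0/1 weights
    # Phase 2: weighted ML estimate (for the Poisson mean: a weighted average)
    return np.sum(w * x) / np.sum(w)

clean = [3, 4, 5, 3, 4, 6, 2, 4, 5, 3]
contaminated = clean + [40, 45]          # a few gross outliers
mle = np.mean(contaminated)              # plain MLE, badly corrupted
wml = weighted_ml_poisson(contaminated)  # close to the clean-sample mean
```

The contrast between `mle` and `wml` mirrors the abstract's point: a few outliers are enough to corrupt the maximum likelihood estimator, while the weighted estimate is essentially unaffected.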
Abstract:
A first assessment of debris flow susceptibility at a large scale was performed along the National Road N7, Argentina. Numerous catchments are prone to debris flows and likely to endanger road users. A 1:50,000 susceptibility map was created. The use of a DEM (30 m grid) combined with three complementary criteria (slope, contributing area, curvature) allowed the identification of potential source areas. The debris flow spreading was estimated using a process- and GIS-based model (Flow-R) built on basic probabilistic and energy calculations. The best-fit values for the coefficient of friction and the mass-to-drag ratio of the PCM model were found to be μ = 0.02 and M/D = 180, and the resulting propagation on one of the calibration sites was validated using the Coulomb friction model. The results are realistic and will be useful to determine which areas need to be prioritized for detailed studies.
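The runout component can be sketched with a PCM-style (Perla-Cheng-McClung) velocity update, using the best-fit parameters reported above (μ = 0.02, M/D = 180). For a path segment of length L and slope θ, the downstream velocity follows V_b² = α·(M/D)·(1 − e^(−2L/(M/D))) + V_a²·e^(−2L/(M/D)) with α = g(sin θ − μ cos θ), and the flow stops where V_b² would become negative. The segment geometry below is invented; this is not the Flow-R implementation itself.

```python
import math

G, MU, MD = 9.81, 0.02, 180.0   # gravity, friction coefficient, mass-to-drag

def pcm_step(v_a, slope_deg, length_m):
    """Velocity at the end of one path segment (0.0 means the flow stops)."""
    theta = math.radians(slope_deg)
    alpha = G * (math.sin(theta) - MU * math.cos(theta))
    decay = math.exp(-2.0 * length_m / MD)
    v_b_sq = alpha * MD * (1.0 - decay) + v_a ** 2 * decay
    return math.sqrt(v_b_sq) if v_b_sq > 0.0 else 0.0

# Propagate over a hypothetical profile: steep source zone, then a flatter fan
v = 0.0
for slope, length in [(30, 100), (20, 100), (8, 100), (2, 300)]:
    v = pcm_step(v, slope, length)
```

With μ as low as 0.02, deceleration on gentle slopes comes mostly from the drag term, which is why the mass-to-drag ratio controls the runout distance so strongly.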
Abstract:
An adaptation technique based on the synoptic atmospheric circulation to forecast local precipitation, namely the analogue method, has been implemented for the western Swiss Alps. During the calibration procedure, relevance maps were established for the geopotential height data. These maps highlight the locations where the synoptic circulation was found to be of interest for precipitation forecasting at two rain gauge stations (Binn and Les Marécottes), both located in the alpine Rhône catchment, at a distance of about 100 km from each other. These two stations are sensitive to different atmospheric circulations. We have observed that the most relevant data for the analogue method can be found where specific atmospheric circulation patterns appear concomitantly with heavy precipitation events. Those skilled regions are coherent with the atmospheric flows illustrated, for example, by means of the back trajectories of air masses. Indeed, the circulation recurrently diverges from the climatology during days with strong precipitation on the southern part of the alpine Rhône catchment. We have found that of the 152 days with a precipitation amount above 50 mm at the Binn station, only 3 did not show a trajectory with a southerly flow, meaning that such a circulation was present for 98% of the events. The time evolution of the relevance maps confirms that the atmospheric circulation variables have significantly better forecasting skill close to the precipitation period, and that it seems pointless for the analogue method to consider circulation information days before a precipitation event as a primary predictor. Even though the occurrence of some critical circulation patterns leading to heavy precipitation events can be detected by precursors at remote locations and one week ahead (Grazzini, 2007; Martius et al., 2008), time extrapolation by the analogue method seems to be rather poor. This would suggest, in accordance with previous studies (Obled et al., 2002; Bontron and Obled, 2005), that time extrapolation should be left to the Global Circulation Model, which can provide the atmospheric variables that are then used by the adaptation method.
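The core step of the analogue method described above can be sketched as follows: candidate past days are ranked by the similarity of their geopotential height fields to the target day (here a plain Euclidean distance over grid points), and the observed precipitation of the best analogues forms the forecast distribution. The fields, archive size, and precipitation values below are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)
archive_fields = rng.standard_normal((500, 50))   # 500 past days, 50 grid pts
archive_precip = rng.gamma(shape=1.5, scale=5.0, size=500)  # daily totals, mm

def best_analogues(target_field, n=25):
    """Indices of the n past days with the most similar circulation field."""
    dist = np.linalg.norm(archive_fields - target_field, axis=1)
    return np.argsort(dist)[:n]

# Target day: a slightly perturbed copy of archive day 42, so day 42
# should rank as the closest analogue
target = archive_fields[42] + 0.01 * rng.standard_normal(50)
idx = best_analogues(target)
forecast = archive_precip[idx]   # empirical precipitation distribution
```

The relevance maps discussed in the abstract correspond to choosing, and weighting, which grid points enter the distance computation; here all grid points are weighted equally for simplicity.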
Abstract:
Modern sonic logging tools designed for shallow environmental and engineering applications allow P-wave phase velocity measurements over a wide frequency band. Methodological considerations indicate that, for saturated unconsolidated sediments in the silt-to-sand range and source frequencies ranging from approximately 1 to 30 kHz, the observable poro-elastic P-wave velocity dispersion is sufficiently pronounced to allow reliable first-order estimation of the underlying permeability structure. These predictions have been tested on and verified for a surficial alluvial aquifer. Our results indicate that, even without any further calibration, the permeability estimates thus obtained, as well as their variability within the pertinent lithological units, are remarkably close to those expected from the corresponding granulometric characteristics.
Abstract:
Bone quantitative ultrasound measures (QUSs) can assess fracture risk in the elderly. We compared three QUS devices and their association with nonvertebral fracture history in 7562 Swiss women 70-80 years of age. The association with nonvertebral fracture history was higher for heel than for phalangeal QUS. INTRODUCTION: Because of the high morbidity and mortality associated with osteoporotic fractures, it is essential to detect subjects at risk for such fractures with screening methods. Because quantitative bone ultrasound (QUS) discriminated subjects with osteoporotic fractures from controls in several cross-sectional studies and predicted fractures in prospective studies, QUS could be more practical than DXA for screening. MATERIALS AND METHODS: This cross-sectional and retrospective multicenter (10 centers) study was performed to compare three QUS devices (two heel ultrasounds: Achilles+ [GE-Lunar] and Sahara [Hologic]; one for the phalanges: DBM Sonic 1200 [IGEA]) by determining, through logistic regression, nonvertebral fracture odds ratios (ORs) in a sample of 7562 Swiss women, 75.3 +/- 3.1 years of age. The two heel QUS devices measured broadband ultrasound attenuation (BUA) and speed of sound (SOS). In addition, the Achilles+ calculated the stiffness index (SI) and the Sahara the quantitative ultrasound index (QUI) from BUA and SOS. The DBM Sonic 1200 measured the amplitude-dependent SOS (AD-SOS). RESULTS: Eighty-six women had a history of a traumatic hip fracture after the age of 50, 1594 had a history of forearm fracture, and 2016 had other nonvertebral fractures. No fracture history was reported by 3866 women. Discrimination for hip fracture was higher than for the other nonvertebral fractures. The two heel QUS devices had significantly higher discrimination power than the phalangeal QUS, with standardized ORs, adjusted for age and body mass index, ranging from 2.1 to 2.7 (95% CI = 1.6, 3.5) compared with 1.4 (95% CI = 1.1, 1.7) for the AD-SOS of the DBM Sonic 1200.
CONCLUSION: This study showed a high association between heel QUS and hip fracture history in elderly Swiss women. This could justify integration of QUS among screening strategies for identifying elderly women at risk for osteoporotic fractures.
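A standardized, covariate-adjusted odds ratio of the kind reported above (OR per SD of the QUS variable, adjusted for age and BMI) can be sketched by fitting a logistic regression with the predictor in SD units and exponentiating its coefficient. The data below are simulated with a known effect, not the study's; the Newton-Raphson fit is a dependency-free stand-in for a standard statistics package.

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Unpenalised logistic regression via Newton-Raphson; returns
    coefficients [intercept, b1, b2, ...]."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X1 @ beta))
        W = p * (1 - p)
        H = X1.T @ (X1 * W[:, None])          # observed information
        beta += np.linalg.solve(H, X1.T @ (y - p))
    return beta

rng = np.random.default_rng(2)
n = 5000
age = rng.normal(75, 3, n)
bmi = rng.normal(26, 4, n)
qus = rng.normal(0, 1, n)                     # predictor already in SD units

# Simulate fracture history: true log-OR of -0.8 per SD of QUS
logit = -2.0 - 0.8 * qus + 0.02 * (age - 75)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

beta = logistic_fit(np.column_stack([qus, age, bmi]), y)
or_per_sd = np.exp(-beta[1])                  # OR for 1 SD *lower* QUS
```

Because the predictor is standardized, the exponentiated coefficient reads directly as "odds multiplier per SD", which is what makes ORs comparable across devices with different measurement units.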
Abstract:
In the context of systems biology, computer simulations of gene regulatory networks provide a powerful tool to validate hypotheses and to explore possible system behaviors. Nevertheless, modeling a system poses some challenges of its own: in particular, the step of model calibration is often difficult due to insufficient data. When considering developmental systems, for example, mostly qualitative data describing the developmental trajectory are available, while common calibration techniques rely on high-resolution quantitative data. Focusing on the calibration of differential equation models for developmental systems, this study investigates different approaches to utilizing the available data to overcome these difficulties. More specifically, the fact that developmental processes are hierarchically organized is exploited to increase the convergence rate of the calibration process as well as to save computation time. Using a gene regulatory network model for stem cell homeostasis in Arabidopsis thaliana, the performance of the different investigated approaches is evaluated, documenting considerable gains provided by the proposed hierarchical approach.
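The hierarchical-calibration idea described above can be sketched on a toy two-gene cascade: because the process is hierarchically organized, the upstream parameter is fitted first against the upstream trajectory, then frozen while the downstream parameter is fitted, so each stage searches a smaller space. The model, parameter values, and grid search below are invented illustrations, not the Arabidopsis model or the study's optimizer.

```python
import numpy as np

def simulate(a, b, t_end=5.0, dt=0.01):
    """Euler integration of a toy cascade: dx/dt = a - x, dy/dt = b*x - y."""
    steps = int(t_end / dt)
    x = y = 0.0
    xs, ys = [], []
    for _ in range(steps):
        x += dt * (a - x)
        y += dt * (b * x - y)
        xs.append(x); ys.append(y)
    return np.array(xs), np.array(ys)

# "Observed" trajectories generated with known parameters a=2.0, b=1.5
x_obs, y_obs = simulate(2.0, 1.5)

grid = np.linspace(0.5, 4.0, 71)

# Stage 1: calibrate the upstream parameter a against x alone
# (x is independent of b, so b's value here is irrelevant)
a_hat = grid[np.argmin([np.sum((simulate(a, 1.0)[0] - x_obs) ** 2)
                        for a in grid])]

# Stage 2: freeze a_hat, calibrate the downstream parameter b against y
b_hat = grid[np.argmin([np.sum((simulate(a_hat, b)[1] - y_obs) ** 2)
                        for b in grid])]
```

The staged fit evaluates 2 one-dimensional searches instead of one two-dimensional search, which is the source of the convergence and runtime gains the abstract refers to, generalized to many parameters and developmental stages.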