Abstract:
Bioassays with bioreporter bacteria are usually calibrated with analyte solutions of known concentration that are analysed alongside the samples of interest. This is necessary because bioreporter output (the intensity of light, fluorescence or colour) depends not only on the target concentration but also on the incubation time and the physiological activity of the cells in the assay. Comparing bioreporter output against standardized colour tables in the field is rather difficult and error-prone. A new approach to controlling assay variation and improving ease of application could be an internal calibration based on multiple bioreporter cell lines with drastically different reporter protein outputs at a given analyte concentration. To test this concept, different Escherichia coli-based bioreporter strains were constructed that express either cytochrome c peroxidase (CCP, or CCP mutants) or β-galactosidase upon induction with arsenite. The reporter strains differed either in the catalytic activity of the reporter protein (for CCP) or in the rate of reporter protein synthesis (for β-galactosidase), which indeed resulted in output signals of different intensity at the same arsenite concentration. Hence, it was possible to use combinations of these cell lines to define arsenite concentration ranges within which none, one or more cell lines gave a qualitative (yes/no) visible signal that was relatively independent of incubation time or bioreporter activity. The discriminated concentration ranges fit very well with current permissible levels of arsenite in drinking water (e.g. the World Health Organization guideline of 10 µg l−1).
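The multi-strain readout described above can be pictured as a thermometer code: each cell line responds once the analyte exceeds its own detection threshold, so the subset of responding lines brackets the concentration. A minimal sketch, with entirely invented thresholds (the abstract gives no numeric thresholds):

```python
def responding_lines(concentration, thresholds):
    """Indices of cell lines whose (hypothetical) detection threshold
    the analyte concentration reaches; the responding subset brackets
    the unknown concentration."""
    return [i for i, t in enumerate(thresholds) if concentration >= t]

# Invented thresholds (µg/l arsenite) for three imagined cell lines.
# A sample at 15 µg/l triggers the first two lines but not the third,
# placing it between 10 and 50 µg/l.
pattern = responding_lines(15.0, [5.0, 10.0, 50.0])
```

Because the readout is qualitative (which lines respond, not how brightly), it is insensitive to overall signal intensity, which is the point of the internal calibration.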
Abstract:
The Highway Safety Manual (HSM) is the national safety manual that provides quantitative methods for analyzing highway safety. The HSM presents crash modification factors related to work zone characteristics such as work zone duration and length. These crash modification factors were based on high-impact work zones in California, so work zone and safety data from the Midwest were needed to calibrate them for Midwestern use. Almost 11,000 Missouri freeway work zones were analyzed to derive a representative, stratified sample of 162 work zones, more than four times the number of work zones used in the HSM. This dataset was used for modeling and testing crash modification factors applicable to the Midwest; it contained work zones ranging from 0.76 mile to 9.24 miles in length and from 16 days to 590 days in duration. A combined fatal/injury/non-injury model produced an R² fit of 0.9079 and a prediction slope of 0.963. The resulting crash modification factors of 1.01 for duration and 0.58 for length were smaller than the values in the HSM. Two practical application examples illustrate the use of the crash modification factors for comparing alternative work zone setups.
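Crash modification factors are conventionally applied as multipliers on a baseline predicted crash frequency. The sketch below shows only that mechanic; the baseline of 10 crashes/year is invented, and the HSM's actual work-zone procedure may scale the duration and length factors with percentage changes rather than apply them directly:

```python
import math

def apply_cmfs(base_frequency, cmfs):
    """Baseline predicted crashes/year multiplied by the product of all
    applicable crash modification factors (CMFs)."""
    return base_frequency * math.prod(cmfs)

# Invented baseline of 10 crashes/year, adjusted by the two factors
# reported in the abstract (1.01 for duration, 0.58 for length).
adjusted = apply_cmfs(10.0, [1.01, 0.58])
```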
Abstract:
Based on the results of an evaluation performed during the winter of 1985-86, six Troxler 3241-B Asphalt Content Gauges were purchased for District use in monitoring project asphalt contents. Use of these gauges will help reduce the need for chemical extractions. Effective use of the gauges depends on the accurate preparation and transfer of project mix calibrations from the Central Lab to the Districts. The objective of this project was to evaluate the precision and accuracy of a gauge in determining asphalt content and to develop a mix calibration transfer procedure for implementation during 1987 construction. The first part of the study was accomplished by preparing mix calibrations in the Central Lab gauge and taking multiple measurements of a sample with known asphalt content. The second part was accomplished by preparing transfer pans, obtaining count data on the pans using each gauge, and transferring calibrations from one gauge to another through calibration transfer equations. The transferred calibrations were tested by measuring samples of known asphalt content. The study established that the Troxler 3241-B Asphalt Content Gauge yields results of acceptable accuracy and precision, as evidenced by a standard deviation of 0.04% asphalt content on multiple measurements of the same sample. The calibration transfer procedure proved feasible and resulted in the calibration transfer portion of Materials I.M. 335 - Method of Test for Determining the Asphalt Content of Bituminous Mixtures by the Nuclear Method.
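A gauge-to-gauge calibration transfer of the kind described can be pictured as a linear mapping between the two gauges' counts on the shared transfer pans. The exact transfer equations of Materials I.M. 335 are not given in the abstract; the following is a generic least-squares illustration with invented count data:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Invented counts from the same transfer pans read on gauge A and gauge B.
a, b = fit_line([1000, 1500, 2000], [1010, 1520, 2030])

# A count observed on gauge A, mapped into gauge B's count scale so that
# gauge B can reuse gauge A's mix calibration.
transferred = a + b * 1750
```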
Abstract:
In a previous study, moisture loss indices were developed based on field measurements from one CIR-foam and one CIR-emulsion construction site. To calibrate these moisture loss indices, additional CIR construction sites were monitored using embedded moisture and temperature sensors. In addition, to determine the optimum timing of an HMA overlay on the CIR layer, the potential of using the stiffness of the CIR layer measured by geo-gauge, instead of the moisture measurement by a nuclear gauge, was explored. Based on monitoring moisture and stiffness at seven CIR project sites, the following conclusions were drawn: 1. In some cases the in-situ stiffness remained constant, and in other cases, despite some rainfall, the stiffness of the CIR layers steadily increased during the curing time. 2. The stiffness measured by geo-gauge was affected by significant amounts of rainfall. 3. The moisture indices developed for CIR sites can be used to predict the moisture level in a typical CIR project; the initial moisture content and temperature were the most significant factors in predicting the future moisture content of the CIR layer. 4. The stiffness of a CIR layer is an extremely useful tool for contractors in timing their HMA overlay. To determine the optimal timing of an HMA overlay, it is recommended that the moisture loss index be used in conjunction with the stiffness of the CIR layer.
Abstract:
Microsatellite loci mutate at an extremely high rate and are generally thought to evolve through a stepwise mutation model. Several differentiation statistics that take into account the particular mutation scheme of microsatellites have been proposed. The most commonly used is R_ST, which is independent of the mutation rate under a generalized stepwise mutation model. F_ST and R_ST are commonly reported in the literature but often differ widely. Here we compare their statistical performance using individual-based simulations of a finite island model. The simulations were run under different levels of gene flow, mutation rates, and population numbers and sizes. In addition to the per-locus statistical properties, we compare two ways of combining R_ST over loci. Our simulations show that, even under a strict stepwise mutation model, no statistic is best overall: all estimators suffer, to different extents, from large bias and variance. While R_ST better reflects population differentiation in populations with very low gene exchange, F_ST gives better estimates at high levels of gene flow. The number of loci sampled (12, 24, or 96) has only a minor effect on the relative performance of the estimators under study. For all estimators there is a striking effect of the number of samples, with the differentiation estimates showing very odd distributions for two samples.
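To make the contrast concrete, here are the textbook variance-partitioning forms of the two statistics: F_ST contrasts heterozygosity within and across demes, while R_ST replaces heterozygosity with the variance in microsatellite allele size (which is what makes it mutation-rate independent under stepwise mutation). These are simplified illustrations, not the exact bias-corrected estimators the study compares:

```python
from statistics import pvariance

def fst(h_total, h_within):
    """Simplified F_ST: (H_T - H_S) / H_T from total and mean
    within-deme expected heterozygosity."""
    return (h_total - h_within) / h_total

def rst(sizes_by_deme):
    """Simplified R_ST: (S_T - S_W) / S_T from allele sizes (repeat
    counts), where S is a variance in allele size."""
    all_sizes = [s for deme in sizes_by_deme for s in deme]
    s_total = pvariance(all_sizes)
    s_within = sum(pvariance(d) for d in sizes_by_deme) / len(sizes_by_deme)
    return (s_total - s_within) / s_total

# Two toy demes with strongly diverged allele sizes give R_ST near 1.
r = rst([[10, 10, 12], [20, 20, 22]])
```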
Abstract:
This paper is concerned with the derivation of new estimators and performance bounds for the problem of timing estimation of (linearly) digitally modulated signals. The conditional maximum likelihood (CML) method is adopted, in contrast to the classical low-SNR unconditional ML (UML) formulation that is systematically applied in the literature for the derivation of non-data-aided (NDA) timing-error detectors (TEDs). A new CML TED is derived and proved to be self-noise free, in contrast to the conventional low-SNR-UML TED. In addition, the paper provides a derivation of the conditional Cramér–Rao bound (CRB), which is higher (less optimistic) than the modified CRB (MCRB), which is only reached by decision-directed (DD) methods. It is shown that the conditional CRB is a lower bound on the asymptotic statistical accuracy of the set of consistent estimators that are quadratic with respect to the received signal. Although the obtained bound is not general, it applies to most NDA synchronizers proposed in the literature. A closed-form expression of the conditional CRB is obtained, and numerical results confirm that the CML TED attains the new bound at moderate to high Es/N0.
Abstract:
This paper addresses the estimation of the code phase (pseudorange) and the carrier phase of the direct signal received from a direct-sequence spread-spectrum satellite transmitter. The signal is received by an antenna array in a scenario with interference and multipath propagation, the two effects that are generally the limiting error sources in most high-precision positioning applications. A new estimator of the code and carrier phases is derived using a simplified signal model and the maximum likelihood (ML) principle. The simplified model consists essentially of gathering all signals, except for the direct one, into a component with unknown spatial correlation. The estimator exploits knowledge of the direction of arrival of the direct signal and is much simpler than other estimators derived under more detailed signal models. Moreover, we present an iterative algorithm that is adequate for a practical implementation and explores an interesting link between the ML estimator and a hybrid beamformer. The mean squared error and bias of the new estimator are computed for a number of scenarios and compared with those of other methods. The presented estimator and the hybrid beamforming outperform the existing techniques of comparable complexity and attain, in many situations, the Cramér–Rao lower bound of the problem at hand.
Abstract:
Once deposited, a sediment is affected during burial by a set of processes, grouped under the term diagenesis, that transform it sometimes only slightly and sometimes enough to make it unrecognizable. These modifications have consequences for the petrophysical properties, which may be improved or degraded. An alternative way of representing these processes numerically, free of explicit physico-chemical reactions, was adopted and developed here by mimicking the movement of the diagenetic fluid or fluids. The method rests on the principle of a cellular automaton; it simplifies the phenomena without sacrificing the result and represents diagenetic features at a fine scale. Its parameters are essentially numerical or mathematical and need to be better understood and constrained with real data from outcrop studies and from the analytical work performed. The representation of shallow dolomitisation followed by a dedolomitisation phase was carried out first, on a portion of the Urgonian carbonate series (Barremian-Aptian) in the Vercors massif, France. This work was done at the scale of the cross-section in order to reproduce the complex geometries associated with the diagenetic phenomena and to honour the measured dolomite proportions, and dolomitisation was simulated under three flow models; since dedolomitisation is omnipresent, several hypotheses on the dolomitisation mechanism were stated and tested. Several phases of per ascensum dolomitisation were also simulated on Liassic series belonging to the formations of the Calcaire Gris group in north-eastern Italy. These diagenetic fluids use the fracture network as a conduit and preferentially affect the most micritised lithologies. The study thus demonstrates the propagation of the phenomena at outcrop scale. - Once deposited, sediment is affected by diagenetic processes during its burial history. These processes alter the petrophysical properties of sedimentary rocks and can thereby improve their reservoir capacity. The modelling of diagenetic processes in carbonate reservoirs is still a challenge insofar as neither stochastic nor physicochemical simulations can correctly reproduce the complexity of features and the reservoir heterogeneity generated by these processes. An alternative way to reach this objective involves process-like methods, which simplify the algorithms while preserving all geological concepts in the modelling process. The aim of the methodology is to conceive a consistent and realistic 3D model of diagenetic overprints on initial facies, resulting in petrophysical properties at reservoir scale. The principle of the method used here is related to a lattice gas automaton used to mimic diagenetic fluid flows and to reproduce the diagenetic effects through the evolution of mineralogical composition and petrophysical properties. The method, developed within a research group, is well adapted to dolomite reservoirs through the propagation of dolomitising fluids, and has been applied to two case studies. The first concerns a mid-Cretaceous rudist and granular carbonate platform succession (Urgonian Fm., Les Gorges du Nan, Vercors, SE France), in which several main diagenetic stages have been identified; the 2D modelling focuses on dolomitisation followed by a dedolomitisation stage. The second uses data collected from outcrops on the Venetian platform (Lias, Mont Compomolon, NE Italy), in which several diagenetic stages have been identified, the main one related to per ascensum dolomitisation along fractures. In both examples, the evolution of the effects of the mimetic diagenetic fluid on mineralogical composition can be followed through space and numerical time, and helps in understanding the heterogeneity of reservoir properties. Keywords: carbonates, dolomitisation, dedolomitisation, process-like modelling, lattice gas automata, random walk, memory effect.
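The random-walk idea behind this kind of process-like simulation can be sketched in a toy form: "fluid" particles enter at a fracture and walk over a 2D grid with an upward bias (mimicking per ascensum flow), converting the cells they visit. Grid size, particle counts, and movement rules below are all invented for illustration and are not the authors' model:

```python
import random

random.seed(42)

def dolomitise(width=20, height=20, walkers=200, steps=100):
    """Toy lattice random walk: 0 = unaltered rock, 1 = altered cell."""
    grid = [[0] * width for _ in range(height)]
    for _ in range(walkers):
        x, y = width // 2, 0              # fluid enters at the fracture base
        for _ in range(steps):
            grid[y][x] = 1                # the fluid alters this cell
            dx, dy = random.choice([(-1, 0), (1, 0), (0, 1)])  # upward bias
            x = min(max(x + dx, 0), width - 1)
            y = min(max(y + dy, 0), height - 1)
    return grid

grid = dolomitise()
altered = sum(sum(row) for row in grid)   # cells reached by the fluid
```

Tracking which cells flip, and when, is the toy analogue of following the mineralogical overprint "through space and numerical time".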
Abstract:
We propose robust estimators of the generalized log-gamma distribution and, more generally, of location-shape-scale families of distributions. A (weighted) Qτ estimator minimizes a τ-scale of the differences between empirical and theoretical quantiles. It is n^(1/2)-consistent; unfortunately, it is not asymptotically normal and is therefore inconvenient for inference. However, it is a convenient starting point for a one-step weighted likelihood estimator, in which the weights are based on a disparity measure between the model density and a kernel density estimate. The one-step weighted likelihood estimator is asymptotically normal and fully efficient under the model. It is also highly robust under outlier contamination. Supplementary materials are available online.
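The quantile-matching idea can be sketched in a few lines. This is a heavily simplified stand-in, not the paper's estimator: the normal family replaces the generalized log-gamma, the median absolute residual replaces the τ-scale, and a crude grid search replaces proper optimisation. It only illustrates "minimise a robust scale of empirical-minus-theoretical quantile differences":

```python
import math
from statistics import median

def normal_quantile(p):
    """Standard normal inverse CDF by bisection on erf (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def q_estimate(sample):
    """Grid-search (mu, sigma) minimising a robust scale (here: median
    absolute residual) of sorted-sample minus fitted-quantile gaps."""
    xs = sorted(sample)
    n = len(xs)
    qs = [normal_quantile((i + 0.5) / n) for i in range(n)]
    best = None
    for mu in [m / 10 for m in range(-50, 51)]:
        for sigma in [s / 10 for s in range(1, 51)]:
            resid = [abs(x - (mu + sigma * q)) for x, q in zip(xs, qs)]
            scale = median(resid)
            if best is None or scale < best[0]:
                best = (scale, mu, sigma)
    return best[1], best[2]
```

On data generated exactly from a normal location-scale model, the grid search recovers the location and scale; the robust scale is what keeps a few outlying quantile gaps from dominating the fit.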
Abstract:
In this work we study whether the intrinsic value of Tubacex over 1994-2013 coincides with its long-run stock market trend, drawing on part of the theory advanced by Shiller. We also examine the possible undervaluation of the Tubacex share as of 31/12/13. In the first part we explain the main company valuation methods, and in the second part we analyse the sector in which Tubacex operates (stainless steel) and compute the value of the Tubacex share using three valuation methods (Free Cash Flow, Cash Flow and Book Value). We apply these three valuation methods to check whether at least one of them coincides with the long-run stock market trend.
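As an illustration of the first of the three methods mentioned, here is a generic discounted free-cash-flow sketch with a Gordon-growth terminal value. All cash flows and rates are invented; this is not the study's actual model for Tubacex:

```python
def dcf_value(cash_flows, rate, terminal_growth):
    """Present value of explicit free cash flows plus a Gordon-growth
    terminal value, both discounted at `rate`."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, 1))
    last = cash_flows[-1] * (1 + terminal_growth)
    terminal = last / (rate - terminal_growth) / (1 + rate) ** len(cash_flows)
    return pv + terminal

# Invented figures: three years of forecast free cash flow, a 10%
# discount rate, and 2% perpetual growth thereafter.
value = dcf_value([100.0, 110.0, 120.0], 0.10, 0.02)
```

Dividing such an equity value by the share count would give the per-share intrinsic value that the study compares against the market trend.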
Abstract:
Intravascular brachytherapy with beta sources has become a useful technique to prevent restenosis after cardiovascular intervention. In particular, the Beta-Cath high-dose-rate system, manufactured by Novoste Corporation, is a commercially available 90Sr/90Y source for intravascular brachytherapy that is achieving widespread use. Its dosimetric characterization has attracted considerable attention in recent years. Unfortunately, the short range of the emitted beta particles and the associated large dose gradients make experimental measurements particularly difficult. This circumstance has motivated a number of papers addressing the characterization of this source by means of Monte Carlo simulation techniques.
Abstract:
Chemical analysis is a well-established procedure for the provenancing of archaeological ceramics. Various analytical techniques are routinely used, and large amounts of data have so far been accumulated in data banks. However, in order to exchange results obtained by different laboratories, the respective analytical procedures need to be tested for inter-comparability. In this study, the schemes of analysis used in four laboratories routinely involved in archaeological pottery studies were compared. The techniques investigated were neutron activation analysis (NAA), X-ray fluorescence analysis (XRF), inductively coupled plasma optical emission spectrometry (ICP-OES) and inductively coupled plasma mass spectrometry (ICP-MS). For this comparison, series of measurements on different geological standard reference materials (SRMs) were carried out and the results were statistically evaluated. An attempt was also made to establish calibration factors between pairs of analytical setups in order to smooth out the systematic differences among the results.
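A calibration factor between two setups can be pictured as a single multiplicative correction fitted over the shared SRM measurements. The concentrations below are invented, and the study's actual scheme may differ; this is just the one-parameter least-squares version of the idea:

```python
def calibration_factor(lab_a, lab_b):
    """Factor f minimising the sum of (a - f*b)^2 over paired SRM
    measurements, i.e. a regression through the origin mapping lab B's
    scale onto lab A's."""
    num = sum(a * b for a, b in zip(lab_a, lab_b))
    den = sum(b * b for b in lab_b)
    return num / den

# Invented concentrations (ppm) of one element in three SRMs, as
# reported by two hypothetical laboratories.
f = calibration_factor([10.2, 49.8, 101.0], [10.0, 50.0, 100.0])
```

Multiplying lab B's future results for that element by `f` would smooth out the systematic offset between the two setups, which is the stated aim of the inter-comparison.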
Abstract:
This chapter presents possible uses and examples of Monte Carlo methods for the evaluation of uncertainties in the field of radionuclide metrology. The method is already well documented in GUM Supplement 1, but here we present a more restrictive approach in which the quantities of interest calculated by the Monte Carlo method are estimators of the expectation and standard deviation of the measurand, and the Monte Carlo method is used to propagate the uncertainties of the input parameters through the measurement model. This approach is illustrated by an example of the activity calibration of a 103Pd source by liquid scintillation counting, and by the calculation of a linear regression on experimental data points. An electronic supplement presents algorithms that may be used to generate random numbers with various statistical distributions for the implementation of this Monte Carlo calculation method.
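The propagation recipe described (draw the input quantities from their assigned distributions, push each draw through the measurement model, and take the sample mean and standard deviation as estimators of the measurand and its uncertainty) can be sketched as follows. The measurement model and all numbers are invented for illustration, not taken from the 103Pd example:

```python
import random
import statistics

random.seed(1)

def measurement_model(count_rate, efficiency):
    """Toy measurement model: activity = count rate / detection efficiency."""
    return count_rate / efficiency

# Monte Carlo propagation: sample the inputs, evaluate the model.
draws = []
for _ in range(50_000):
    r = random.gauss(1200.0, 5.0)      # counts/s, normal uncertainty
    eff = random.uniform(0.94, 0.96)   # efficiency, rectangular distribution
    draws.append(measurement_model(r, eff))

# Estimators of the measurand and its standard uncertainty.
activity = statistics.mean(draws)
u_activity = statistics.stdev(draws)
```

Unlike the law-of-propagation formula, this approach needs no linearisation of the model, which is why it handles non-linear models and non-normal inputs directly.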