Abstract:
Electronic distance measuring instruments (EDMI) are used by surveyors in routine length measurements. The constant and scale factors of an instrument tend to change with usage, transportation, and aging of crystals. Calibration baselines are established so that surveyors can check their instruments and determine any changes in the values of the constant and scale factors. The National Geodetic Survey (NGS) has developed guidelines for establishing these baselines. In 1981 an EDMI baseline was established at ISU according to NGS guidelines, and in October 1982 the NGS measured the distances between its monuments. Computer programs for reducing observed distances were developed, along with a mathematical model and computer programs for determining the constant and scale factors. A method was also developed to detect any movement of the monuments. Periodic measurements of the baseline were made, and no significant movement of the monuments was detected.
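The constant and scale factors described above can be estimated from baseline observations with a simple linear least-squares fit. The sketch below is a minimal illustration with made-up distances, not the NGS reduction programs themselves: it models the observed distance as the true distance plus an additive constant plus a scale error.

```python
import numpy as np

# Known baseline distances between monuments (hypothetical, in metres)
true_d = np.array([150.0, 430.0, 1100.0, 1400.0])
# Distances observed with the EDMI under test (hypothetical, in metres)
obs_d = np.array([150.012, 430.016, 1100.025, 1400.029])

# Model: observed = true + constant + scale * true
# Solve for [constant, scale] by linear least squares on the residuals.
A = np.column_stack([np.ones_like(true_d), true_d])
(constant, scale), *_ = np.linalg.lstsq(A, obs_d - true_d, rcond=None)

print(f"constant = {constant * 1000:.2f} mm, scale = {scale * 1e6:.1f} ppm")
```

With periodic remeasurements, a drift in the fitted constant or scale flags an instrument problem, while a change confined to one monument's distances suggests monument movement.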
Abstract:
Bioassays with bioreporter bacteria are usually calibrated with analyte solutions of known concentrations that are analysed along with the samples of interest. This is necessary because bioreporter output (the intensity of light, fluorescence or colour) depends not only on the target concentration, but also on the incubation time and the physiological activity of the cells in the assay. Comparing bioreporter output with standardized colour tables in the field seems rather difficult and error-prone. A new approach to control assay variation and improve ease of application could be an internal calibration based on the use of multiple bioreporter cell lines with drastically different reporter protein outputs at a given analyte concentration. To test this concept, different Escherichia coli-based bioreporter strains expressing either cytochrome c peroxidase (CCP, or CCP mutants) or β-galactosidase upon induction with arsenite were constructed. The reporter strains differed either in the catalytic activity of the reporter protein (for CCP) or in the rate of reporter protein synthesis (for β-galactosidase), which indeed resulted in output signals of different intensities at the same arsenite concentration. Hence, it was possible to use combinations of these cell lines to define arsenite concentration ranges at which none, one or more cell lines gave qualitative (yes/no) visible signals that were relatively independent of incubation time or bioreporter activity. The discriminated concentration ranges fit well with current permissive levels (e.g. the World Health Organization limit) of arsenite in drinking water (10 µg l⁻¹).
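The yes/no readout logic described above can be sketched as a small decision rule. The strain names and detection thresholds below are hypothetical, chosen only to show how binary signals from several cell lines bracket a concentration range:

```python
# Hypothetical detection thresholds (µg/l) for three bioreporter cell lines
# with deliberately different output strengths.
thresholds = {"strain_A": 5.0, "strain_B": 10.0, "strain_C": 50.0}

def classify(signals):
    """signals: dict strain -> bool (visible signal?). Returns a range label."""
    positives = [s for s, on in signals.items() if on]
    if not positives:
        return "< 5 µg/l"
    highest = max(thresholds[s] for s in positives)
    if highest < 10.0:
        return "5-10 µg/l"
    elif highest < 50.0:
        return "10-50 µg/l"
    return "> 50 µg/l"
```

Because each strain only contributes a yes/no answer, the combined pattern is comparatively robust to incubation time and cell activity, which is the point of the internal calibration.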
Abstract:
The Highway Safety Manual (HSM) is the national safety manual that provides quantitative methods for analyzing highway safety. The HSM presents crash modification factors related to work zone characteristics such as work zone duration and length. These crash modification factors were based on high-impact work zones in California, so there was a need to use work zone and safety data from the Midwest to calibrate them for use in the Midwest. Almost 11,000 Missouri freeway work zones were analyzed to derive a representative and stratified sample of 162 work zones, more than four times the number of work zones used in the HSM. This dataset was used for modeling and testing crash modification factors applicable to the Midwest; it contained work zones ranging from 0.76 mile to 9.24 miles in length and from 16 days to 590 days in duration. A combined fatal/injury/non-injury model produced an R² fit of 0.9079 and a prediction slope of 0.963. The resulting crash modification factors of 1.01 for duration and 0.58 for length were smaller than the values in the HSM. Two practical application examples illustrate the use of the crash modification factors for comparing alternative work zone setups.
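In HSM practice, crash modification factors are applied multiplicatively to a baseline crash prediction. A minimal sketch, using the duration and length factors reported above with a hypothetical baseline frequency:

```python
# Standard HSM practice: expected crashes = baseline prediction x product of CMFs.
def expected_crashes(n_base, cmfs):
    result = n_base
    for cmf in cmfs:
        result *= cmf
    return result

# Hypothetical comparison: baseline of 10 predicted crashes over the analysis
# period, with the duration (1.01) and length (0.58) factors from the study.
with_zone = expected_crashes(10.0, [1.01, 0.58])
without_zone = expected_crashes(10.0, [])
```

How the 1.01 and 0.58 factors scale with a specific duration or length change is defined by the study's models; the sketch only shows the multiplicative combination step.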
Abstract:
Based on the results of an evaluation performed during the winter of 1985-86, six Troxler 3241-B Asphalt Content Gauges were purchased for District use in monitoring project asphalt contents. Use of these gauges will help reduce the need for chemical-based extractions. Effective use of the gauges depends on the accurate preparation and transfer of project mix calibrations from the Central Lab to the Districts. The objective of this project was to evaluate the precision and accuracy of a gauge in determining asphalt contents and to develop a mix calibration transfer procedure for implementation during 1987 construction. The first part of the study was accomplished by preparing mix calibrations in the Central Lab gauge and taking multiple measurements of a sample with known asphalt content. The second part was accomplished by preparing transfer pans, obtaining count data on the pans using each gauge, and transferring calibrations from one gauge to another through calibration transfer equations. The transferred calibrations were tested by measuring samples with a known asphalt content. The study established that the Troxler 3241-B Asphalt Content Gauge yields results of acceptable accuracy and precision, as evidenced by a standard deviation of 0.04% asphalt content on multiple measurements of the same sample. The calibration transfer procedure proved feasible and resulted in the calibration transfer portion of Materials I.M. 335 - Method of Test for Determining the Asphalt Content of Bituminous Mixtures by the Nuclear Method.
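A calibration transfer equation of the kind described above can be sketched as a linear fit between the counts two gauges record on the same transfer pans. The count values below are hypothetical, not the study's data:

```python
import numpy as np

# Normalized count ratios measured on the same transfer pans (hypothetical)
counts_a = np.array([0.95, 0.90, 0.85, 0.80])   # gauge A (Central Lab)
counts_b = np.array([0.96, 0.905, 0.86, 0.815])  # gauge B (District)

# Fit a linear transfer equation: count_b ~ m * count_a + b
m, b = np.polyfit(counts_a, counts_b, 1)

def transfer_count(count_a):
    """Convert a gauge-A count to the equivalent gauge-B count."""
    return m * count_a + b
```

Once counts are transferred, gauge B can reuse gauge A's mix calibration curve; the transfer is then verified against a sample of known asphalt content, as the study did.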
Abstract:
In a previous study, moisture loss indices were developed based on field measurements from one CIR-foam and one CIR-emulsion construction site. To calibrate these moisture loss indices, additional CIR construction sites were monitored using embedded moisture and temperature sensors. In addition, to determine the optimum timing of an HMA overlay on the CIR layer, the potential of using the stiffness of the CIR layer measured by geo-gauge, instead of moisture measurement by a nuclear gauge, was explored. Based on monitoring of moisture and stiffness at seven CIR project sites, the following conclusions are derived: 1. In some cases the in-situ stiffness remained constant and, in other cases, despite some rainfall, the stiffness of the CIR layers steadily increased during the curing time. 2. The stiffness measured by geo-gauge was affected by a significant amount of rainfall. 3. The moisture indices developed for CIR sites can be used for predicting the moisture level in a typical CIR project; the initial moisture content and temperature were the most significant factors in predicting the future moisture content in the CIR layer. 4. The stiffness of a CIR layer is an extremely useful tool for contractors in timing their HMA overlay. To determine the optimal timing of an HMA overlay, it is recommended that the moisture loss index be used in conjunction with the stiffness of the CIR layer.
Abstract:
Once deposited, a sediment is affected during burial by a set of processes, grouped under the term diagenesis, that may alter it only slightly or enough to make it unrecognisable. These modifications can either improve or deteriorate the petrophysical properties of the rock, and hence its reservoir capacity. Modelling diagenetic processes in carbonate reservoirs remains a challenge, as neither stochastic nor physicochemical simulations can correctly reproduce the complexity of features and the reservoir heterogeneity these processes generate. An alternative route, free of explicit physicochemical reactions, is a process-like approach that mimics the movement of the diagenetic fluid(s). The method rests on the principle of a cellular (lattice gas) automaton, which simplifies the phenomena without sacrificing the result and represents diagenetic features at a fine scale. Its parameters are essentially numerical or mathematical and need to be better understood and constrained from real data derived from outcrop studies and analytical work. The method, developed within a research group, is well adapted to dolomite reservoirs through the propagation of dolomitising fluids and has been applied to two case studies. The first concerns a mid-Cretaceous rudist and granular carbonate platform succession (Urgonian Fm., Barremian-Aptian, Gorges du Nan, Vercors, SE France), in which several main diagenetic stages have been identified. The 2D modelling, carried out at the scale of the section to reproduce the complex geometries associated with the diagenetic phenomena and to honour the measured dolomite proportions, focuses on shallow dolomitisation followed by a dedolomitisation stage; because dedolomitisation is ubiquitous, several hypotheses on the dolomitisation mechanism were formulated, and dolomitisation was simulated under three flow models. The second study uses outcrop data from the Venetian platform (Lias, Calcaire Gris group, Mont Compomolon, NE Italy), in which several diagenetic stages have been identified, the main one being per ascensum dolomitisation along fractures: the diagenetic fluids use the fracture network as a vector and preferentially affect the most micritised lithologies. In both examples, the evolution of the effects of the mimetic diagenetic fluid on mineralogical composition can be followed through space and numerical time, helping to understand the propagation of the phenomena at outcrop scale and the heterogeneity of reservoir properties.
Keywords: carbonates, dolomitisation, dedolomitisation, process-like modelling, lattice gas automata, random walk, memory effect.
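The lattice-gas/random-walk idea can be illustrated with a toy 2D automaton: walkers standing in for the dolomitising fluid are injected at the base of a grid and overprint the cells they visit, with an upward bias mimicking per ascensum flow. This is a deliberately minimal sketch, not the research group's actual simulator:

```python
import random

# Toy automaton: random walkers mimic a dolomitising fluid entering a 2D grid
# from its base; cells visited by the fluid are marked as dolomitised.
random.seed(0)                            # reproducible sketch
W, H = 20, 20
dolomite = [[False] * W for _ in range(H)]

for _ in range(200):                      # number of fluid "particles"
    x, y = random.randrange(W), 0         # inject along the base of the section
    for _ in range(100):                  # random-walk steps per particle
        dolomite[y][x] = True             # overprint the visited cell
        # (0, 1) listed twice: upward bias mimicking per ascensum flow
        dx, dy = random.choice([(-1, 0), (1, 0), (0, 1), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), W - 1)
        y = min(max(y + dy, 0), H - 1)

fraction = sum(sum(row) for row in dolomite) / (W * H)  # dolomitised proportion
```

In the real method the walk would be conditioned by lithology (e.g. preferring micritised cells) and by fractures acting as conduits, and the number of particles tuned to honour measured dolomite proportions.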
Abstract:
Intravascular brachytherapy with beta sources has become a useful technique to prevent restenosis after cardiovascular intervention. In particular, the Beta-Cath high-dose-rate system, manufactured by Novoste Corporation, is a commercially available 90Sr/90Y source for intravascular brachytherapy that is achieving widespread use. Its dosimetric characterization has attracted considerable attention in recent years. Unfortunately, the short ranges of the emitted beta particles and the associated large dose gradients make experimental measurements particularly difficult. This circumstance has motivated a number of papers addressing the characterization of this source by means of Monte Carlo simulation techniques.
Abstract:
Chemical analysis is a well-established procedure for the provenancing of archaeological ceramics. Various analytical techniques are routinely used, and large amounts of data have so far been accumulated in data banks. However, in order to exchange results obtained by different laboratories, the respective analytical procedures need to be tested for inter-comparability. In this study, the schemes of analysis used in four laboratories involved in archaeological pottery studies on a routine basis were compared. The techniques investigated were neutron activation analysis (NAA), X-ray fluorescence analysis (XRF), inductively coupled plasma optical emission spectrometry (ICP-OES) and inductively coupled plasma mass spectrometry (ICP-MS). For this comparison, series of measurements on different geological standard reference materials (SRM) were carried out and the results were statistically evaluated. An attempt was also made to establish calibration factors between pairs of analytical setups in order to smooth out the systematic differences among the results.
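A calibration factor between a pair of analytical setups, as attempted above, can be sketched as the ratio of the two laboratories' mean results on the same SRM, element by element. The concentrations below are hypothetical:

```python
# Per-element calibration factor between two analytical setups, taken as the
# ratio of their mean results on the same standard reference material.
# Concentrations below (in ppm) are hypothetical.
lab_naa = {"Fe": 35100.0, "Cr": 102.0, "La": 29.5}
lab_xrf = {"Fe": 36200.0, "Cr": 95.0, "La": 31.0}

factors = {element: lab_naa[element] / lab_xrf[element] for element in lab_naa}

def harmonise(xrf_value, element):
    """Map an XRF result onto the NAA scale using the calibration factor."""
    return xrf_value * factors[element]
```

Such factors only smooth out systematic offsets; random scatter between laboratories still has to be assessed statistically, as the study does.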
Abstract:
A thorough literature review of the current state of implementation of eye lens monitoring has been performed in order to provide recommendations regarding dosemeter types, calibration procedures and practical aspects of eye lens monitoring for interventional radiology personnel. The most relevant data and recommendations from about 100 papers have been analysed and classified under the following topics: current challenges in eye lens monitoring; conversion coefficients, phantoms and calibration procedures for eye lens dose evaluation; correction factors and dosemeters for eye lens dose measurements; dosemeter position and influence of protective devices. The major findings of the review can be summarised as follows: the recommended operational quantity for eye lens monitoring is Hp(3). At present, several dosemeters are available for eye lens monitoring and calibration procedures are being developed. However, in practice, alternative methods are very often used to assess the dose to the eye lens. A summary of correction factors found in the literature for the assessment of the eye lens dose is provided. These factors can give an estimate of the eye lens dose when alternative methods, such as the use of a whole body dosemeter, are employed. A wide range of values is found, indicating the large uncertainty associated with these simplified methods. Reduction factors for the most common protective devices, obtained experimentally and by Monte Carlo calculations, are presented. The paper concludes that a dosemeter placed at collar level outside the lead apron can provide a useful first estimate of the eye lens exposure. However, for workplaces where the estimated annual equivalent dose to the eye lens is close to the dose limit, specific eye lens monitoring should be performed. Finally, training of the involved medical staff on the risks of ionising radiation for the eye lens and on the correct use of protective systems is strongly recommended.
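The first-estimate approach described in the conclusion can be sketched in a few lines: a whole-body dosemeter reading taken at collar level is scaled by a literature correction factor. The factor value below is illustrative only, not a recommended number:

```python
# First estimate of the eye-lens dose from a collar-level whole-body dosemeter
# worn outside the lead apron. The correction factor is illustrative only;
# the literature reports a wide (and therefore uncertain) range of values.
CORRECTION_FACTOR = 0.75

def estimate_eye_lens_dose(collar_hp10_msv):
    """Scale an Hp(10) collar reading (mSv) into a rough Hp(3) estimate."""
    return collar_hp10_msv * CORRECTION_FACTOR
```

As the review stresses, such an estimate is only a screening value; near the dose limit, a dedicated Hp(3) dosemeter must be used instead.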
Abstract:
Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, highlight links between documents produced by the same modus operandi or the same source, and thus support forensic intelligence efforts. Inspired by previous research on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from images. Acquisition conditions have been fine-tuned in order to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that the use of Hue and Edge filters, or their combination, to extract profiles from images, followed by comparison of the profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can be easily operated from remote locations and shared amongst different organisations, which makes it very convenient for future operational applications. The method could serve as a fast first triage step that helps target more resource-intensive profiling methods (based on a visual, physical or chemical examination of documents, for instance).
Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
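The profile-comparison step described above rests on a Canberra distance between image profiles. A minimal sketch with hypothetical hue histograms (the real method extracts profiles from regions of interest of scanned documents):

```python
import numpy as np

def canberra(p, q):
    """Canberra distance between two image profiles (smaller = more similar)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    num = np.abs(p - q)
    den = np.abs(p) + np.abs(q)
    mask = den > 0                 # skip bins that are zero in both profiles
    return float(np.sum(num[mask] / den[mask]))

# Hypothetical hue histograms extracted from three scanned documents;
# doc_a and doc_b stand for documents suspected to share a source.
doc_a = [0.40, 0.30, 0.20, 0.10]
doc_b = [0.38, 0.31, 0.21, 0.10]
doc_c = [0.10, 0.10, 0.30, 0.50]
```

Documents from a common source should yield a markedly smaller distance (here `canberra(doc_a, doc_b)` is far below `canberra(doc_a, doc_c)`), which is the basis for the classification and linkage step.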
Abstract:
Objective: To study the calibration of a clinical PKA meter (Diamentor E2) and a calibrator for clinical meters (PDC) at the Laboratory of Ionizing Radiation Metrology, Instituto de Energia e Ambiente - Universidade de São Paulo. Materials and Methods: Different qualities of both incident and transmitted beams were used under conditions similar to a clinical setting, analyzing the influence of the reference dosimeter, the distance between meters, the filtration and the average beam energy. Calibrations were performed directly against a standard 30 cm³ cylindrical chamber or a parallel-plate monitor chamber, and indirectly against the PDC meter. Results: The lowest energy dependence was observed for transmitted beams. The cross-calibration between the Diamentor E2 and the PDC meters presented the greatest propagation of uncertainties. Conclusion: The calibration coefficient of the PDC meter proved more stable with voltage, while the Diamentor E2 calibration coefficient was more variable. On the other hand, the PDC meter presented greater uncertainty in readings (5.0%) than the monitor chamber (3.5%) used as a reference.
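A calibration coefficient of the kind discussed above is simply the reference value divided by the instrument reading, with the component uncertainties combined in quadrature. The numbers below are hypothetical; only the 3.5% monitor-chamber figure echoes the abstract:

```python
import math

# Calibration coefficient of a PKA meter: reference PKA value divided by the
# instrument reading (hypothetical numbers, in Gy*cm^2).
reference_pka = 1.042
meter_reading = 1.000
n_cal = reference_pka / meter_reading

# Combined relative uncertainty: independent components added in quadrature.
u_reference = 0.035    # 3.5 % for the monitor-chamber reference (from abstract)
u_repeat = 0.010       # 1.0 % meter repeatability (hypothetical)
u_combined = math.hypot(u_reference, u_repeat)
```

Chaining a cross-calibration through an intermediate meter adds its uncertainty in quadrature as well, which is why the indirect route through the PDC showed the greatest propagation of uncertainties.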
Abstract:
This paper proposes a calibration method that can be utilized for the analysis of SEM images. The field of application of the developed method is the calculation of the surface potential distribution of a biased edgeless silicon detector. The suggested processing of the data collected by SEM consists of several stages and takes into account different aspects affecting the SEM image. The calibration method does not claim to be precise, but it captures the basic potential distribution when different biasing voltages are applied to the detector.
Abstract:
This thesis presents the calibration and comparison of two systems: a machine vision system that uses 3-channel RGB images and a line-scanning spectral system. Calibration is the process of checking and adjusting the accuracy of a measuring instrument by comparing it with standards. For the RGB system, self-calibrating methods for finding various parameters of the imaging device were developed. Color calibration was performed, and the colors produced by the system were compared to the known color values of the target. Software drivers for the Sony Robot were also developed, and a mechanical part to connect a camera to the robot was designed. For the line-scanning spectral system, methods for calibrating the alignment of the system and for measuring the dimensions of the line scanned by the system were developed. Color calibration of the spectral system is also presented.
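Color calibration against a target with known values, as described above, is often posed as fitting a correction matrix by least squares. A minimal sketch with hypothetical patch data (not the thesis's actual procedure):

```python
import numpy as np

# Fit a 3x3 colour-correction matrix M so that measured_RGB @ M approximates
# the known target values, by linear least squares. Patch data are hypothetical.
measured = np.array([[200.0,  40.0,  30.0],
                     [ 50.0, 180.0,  60.0],
                     [ 40.0,  55.0, 170.0],
                     [120.0, 120.0, 120.0]])
target = np.array([[210.0,  35.0,  25.0],
                   [ 45.0, 190.0,  55.0],
                   [ 35.0,  50.0, 180.0],
                   [122.0, 121.0, 119.0]])

M, *_ = np.linalg.lstsq(measured, target, rcond=None)

def correct(rgb):
    """Apply the fitted correction to one RGB triple."""
    return np.asarray(rgb, dtype=float) @ M
```

A purely linear matrix assumes a linearised sensor response; in practice a nonlinearity (gamma) correction is usually applied before fitting, which is one of the device parameters a self-calibrating method would estimate.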