968 results for Implicit calibration
Abstract:
Based on the results of an evaluation performed during the winter of 1985-86, six Troxler 3241-B Asphalt Content Gauges were purchased for District use in monitoring project asphalt contents. Use of these gauges will help reduce the need for chemical-based extractions. Effective use of the gauges depends on the accurate preparation and transfer of project mix calibrations from the Central Lab to the Districts. The objective of this project was to evaluate the precision and accuracy of a gauge in determining asphalt contents and to develop a mix calibration transfer procedure for implementation during the 1987 construction season. The first part of the study was accomplished by preparing mix calibrations on the Central Lab gauge and taking multiple measurements of a sample with known asphalt content. The second part was accomplished by preparing transfer pans, obtaining count data on the pans using each gauge, and transferring calibrations from one gauge to another through the use of calibration transfer equations. The transferred calibrations were tested by measuring samples with a known asphalt content. The study established that the Troxler 3241-B Asphalt Content Gauge yields results of acceptable accuracy and precision, as evidenced by a standard deviation of 0.04% asphalt content on multiple measurements of the same sample. The calibration transfer procedure proved feasible and resulted in the calibration transfer portion of Materials I.M. 335 - Method of Test for Determining the Asphalt Content of Bituminous Mixtures by the Nuclear Method.
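As a rough illustration of how such a calibration transfer can work, the sketch below assumes a linear count-to-asphalt-content calibration and a linear count-transfer relation between gauges; all numbers and names are illustrative, not taken from Materials I.M. 335.

```python
# Minimal sketch of a calibration transfer between two nuclear asphalt
# content gauges. Assumes a linear count-vs-asphalt-content calibration
# and a linear count-transfer relation; values are illustrative only.
import numpy as np

# Mix calibration prepared on the Central Lab (master) gauge:
# counts observed at known asphalt contents (%).
master_counts = np.array([8200.0, 7900.0, 7600.0, 7300.0])
asphalt_pct   = np.array([4.5, 5.0, 5.5, 6.0])
cal = np.polyfit(master_counts, asphalt_pct, 1)   # % AC = a*count + b

# Transfer pans counted on both gauges establish a count-transfer equation.
pan_master = np.array([8100.0, 7700.0, 7400.0])
pan_field  = np.array([8350.0, 7930.0, 7610.0])
xfer = np.polyfit(pan_field, pan_master, 1)       # field -> master counts

def asphalt_content(field_count):
    """Predict % asphalt from a field-gauge count via the transferred calibration."""
    equivalent_master = np.polyval(xfer, field_count)
    return np.polyval(cal, equivalent_master)

print(round(asphalt_content(7800.0), 2))
```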
Abstract:
In a previous study, moisture loss indices were developed from field measurements at one CIR-foam and one CIR-emulsion construction site. To calibrate these moisture loss indices, additional CIR construction sites were monitored using embedded moisture and temperature sensors. In addition, to determine the optimum timing of an HMA overlay on the CIR layer, the potential of using the stiffness of the CIR layer measured by geo-gauge, instead of the moisture measurement by a nuclear gauge, was explored. Based on the monitoring of moisture and stiffness at seven CIR project sites, the following conclusions are derived: 1. In some cases the in-situ stiffness remained constant and, in other cases, despite some rainfall, the stiffness of the CIR layers steadily increased during the curing time. 2. The stiffness measured by geo-gauge was affected by a significant amount of rainfall. 3. The moisture indices developed for CIR sites can be used for predicting the moisture level in a typical CIR project; the initial moisture content and temperature were the most significant factors in predicting the future moisture content in the CIR layer. 4. The stiffness of a CIR layer is an extremely useful tool for contractors in timing their HMA overlay. To determine the optimal timing of an HMA overlay, it is recommended that the moisture loss index be used in conjunction with the stiffness of the CIR layer.
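The abstract does not give the functional form of the moisture loss indices; the sketch below assumes a simple exponential drying model driven by the two factors the study found most significant (initial moisture content and temperature), purely for illustration.

```python
# Illustrative moisture-loss model for a CIR layer. The exponential form
# and the coefficients below are assumptions for illustration, not the
# calibrated indices from the report.
import math

def predicted_moisture(m0, temp_c, t_days, k0=0.05, k_temp=0.004):
    """Moisture (%) after t_days, decaying faster at higher temperature."""
    rate = k0 + k_temp * temp_c          # per-day drying rate (assumed)
    return m0 * math.exp(-rate * t_days)

# e.g. 4% initial moisture curing at 25 C for 10 days
print(round(predicted_moisture(4.0, 25.0, 10.0), 2))
```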
Abstract:
Background and aim of the study: The formation of implicit memory during general anaesthesia is still debated. Perceptual learning is the ability to learn to perceive. In this study, an auditory perceptual learning paradigm using frequency discrimination was employed to investigate implicit memory. It was hypothesized that auditory stimulation would successfully induce perceptual learning; thus, initial thresholds on the postoperative frequency discrimination task should be lower for the stimulated group (group S) than for the control group (group C). Material and method: Eighty-seven ASA I-III patients undergoing visceral and orthopaedic surgery during general anaesthesia lasting more than 60 minutes were recruited. The anaesthesia procedure was standardized (BIS monitoring included). Group S received auditory stimulation (2000 pure tones applied for 45 minutes) during surgery. Twenty-four hours after the operation, both groups performed ten blocks of the frequency discrimination task. The mean of the thresholds for the first three blocks (T1) was compared between groups. Results: Mean age and BIS value for groups S and C were, respectively, 40 ± 11 vs 42 ± 11 years (p = 0.49) and 42 ± 6 vs 41 ± 8 (p = 0.87). T1 was, respectively, 31 ± 33 vs 28 ± 34 (p = 0.72) in groups S and C. Conclusion: In our study, no implicit memory during general anaesthesia was demonstrated. This may be explained by a modulation of the auditory evoked potentials caused by the anaesthesia, or by an insufficient duration of repetitive stimulation to induce perceptual learning.
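A minimal sketch of the endpoint analysis as described (T1 = mean threshold over the first three blocks per patient, compared between groups); the data are simulated and the use of Welch's t-test is an assumption.

```python
# Sketch of the endpoint comparison described in the abstract.
# The data below are made up; only the analysis shape follows the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# thresholds (Hz) for 10 blocks x n patients; rows = patients
group_s = rng.normal(31, 33, size=(44, 10)).clip(min=1)
group_c = rng.normal(28, 34, size=(43, 10)).clip(min=1)

t1_s = group_s[:, :3].mean(axis=1)   # mean of blocks 1-3 per patient
t1_c = group_c[:, :3].mean(axis=1)

t, p = stats.ttest_ind(t1_s, t1_c, equal_var=False)  # Welch's t-test
print(f"T1 S vs C: {t1_s.mean():.1f} vs {t1_c.mean():.1f}, p = {p:.2f}")
```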
Abstract:
Advancements in high-throughput technologies to measure increasingly complex biological phenomena at the genomic level are rapidly changing the face of biological research, from the single-gene, single-protein experimental approach to studying the behavior of a gene in the context of the entire genome (and proteome). This shift in research methodologies has resulted in the new field of network biology, which deals with modeling cellular behavior in terms of network structures such as signaling pathways and gene regulatory networks. In these networks, different biological entities such as genes, proteins, and metabolites interact with each other, giving rise to a dynamical system. Even though there exists a mature field of dynamical systems theory to model such network structures, some technical challenges are unique to biology, such as the inability to measure precise kinetic information on gene-gene or gene-protein interactions and the need to model increasingly large networks comprising thousands of nodes. These challenges have renewed interest in developing new computational techniques for modeling complex biological systems. This chapter presents a modeling framework based on Boolean algebra and finite-state machines that is reminiscent of the approach used for digital circuit synthesis and simulation in the field of very-large-scale integration (VLSI). The proposed formalism provides a common mathematical framework for developing computational techniques to model different aspects of regulatory networks, such as steady-state behavior, stochasticity, and gene perturbation experiments.
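As a minimal illustration of the formalism, the sketch below treats a three-gene network as a finite-state machine over Boolean states and enumerates its fixed points (steady states); the update rules are invented for illustration, not taken from the chapter.

```python
# Minimal Boolean-network sketch: genes are binary variables, update
# rules are Boolean functions, and the network is a finite-state machine
# whose attractors are the steady-state behaviour. Rules are invented.
from itertools import product

def step(state):
    a, b, c = state
    return (
        b and not c,   # A activated by B, repressed by C
        a or c,        # B activated by A or C
        not a,         # C repressed by A
    )

# Exhaustively enumerate the 2^3 state space and report fixed points.
for s in product([False, True], repeat=3):
    if step(s) == s:
        print("steady state:", tuple(int(x) for x in s))
```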
Abstract:
Once deposited, a sediment is affected during its burial by a set of processes, grouped under the term diagenesis, which transform it sometimes only slightly and sometimes enough to make it unrecognizable. These modifications have consequences for the petrophysical properties, which can be positive or negative, that is, improving or deteriorating them. An alternative way of representing these processes numerically, free of explicit physico-chemical reactions, was adopted and developed by mimicking the movement of the diagenetic fluid or fluids. This method relies on the principle of a cellular automaton; it simplifies the phenomena without sacrificing the result and makes it possible to represent diagenetic phenomena at a fine scale. The parameters are essentially numerical or mathematical and need to be better understood and constrained from real data derived from outcrop studies and from the analytical work performed. The representation of shallow dolomitisation followed by a dedolomitisation phase was carried out first. The study area covers a portion of the Urgonian carbonate series (Barremian-Aptian) located in the Vercors massif, France. This work was performed at the section scale in order to reproduce the complex geometries associated with the diagenetic phenomena and to honour the measured dolomite proportions. In addition, dolomitisation was simulated under three flow models. Since dedolomitisation is omnipresent, several hypotheses on the dolomitisation mechanism were formulated and tested. Several phases of per ascensum dolomitisation were also simulated on Liassic series belonging to the formations of the Calcaire Gris group, located in northeastern Italy. These diagenetic fluids use the fracture network as a vector and preferentially affect the most micritised lithologies. This study highlighted the propagation of the phenomena at the outcrop scale.
Once deposited, sediment is affected by diagenetic processes during its burial history. These diagenetic processes can affect the petrophysical properties of the sedimentary rocks and thereby improve their reservoir capacity. The modelling of diagenetic processes in carbonate reservoirs is still a challenge insofar as neither stochastic nor physicochemical simulations can correctly reproduce the complexity of features and the reservoir heterogeneity generated by these processes. An alternative way to reach this objective involves process-like methods, which simplify the algorithms while preserving all geological concepts in the modelling process. The aim of the methodology is to conceive a consistent and realistic 3D model of diagenetic overprints on initial facies, resulting in petrophysical properties at a reservoir scale. The principle of the method used here is related to a lattice gas automaton used to mimic diagenetic fluid flows and to reproduce the diagenetic effects through the evolution of mineralogical composition and petrophysical properties. This method, developed within a research group, is well adapted to handling dolomite reservoirs through the propagation of dolomitising fluids and has been applied to two case studies. The first study concerns a mid-Cretaceous rudist and granular carbonate platform succession (Urgonian Fm., Les Gorges du Nan, Vercors, SE France), in which several main diagenetic stages have been identified; the 2D modelling focuses on dolomitisation followed by a dedolomitisation stage. The second study uses data collected from outcrops on the Venetian platform (Lias, Mont Compomolon, NE Italy), in which several diagenetic stages have been identified, the main one related to per ascensum dolomitisation along fractures. In both examples, the evolution of the effects of the mimetic diagenetic fluid on mineralogical composition can be followed through space and numerical time and helps to understand the heterogeneity of reservoir properties. Keywords: carbonates, dolomitisation, dedolomitisation, process-like modelling, lattice gas automata, random walk, memory effect.
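The process-like idea lends itself to a toy sketch: the following Python example (grid size, walker count and rules all invented here, not taken from the thesis) mimics a dolomitising fluid with upward-biased random walkers that overprint the cells they visit, in the spirit of a lattice-gas/random-walk automaton.

```python
# Toy version of the process-like approach: random walkers stand in for
# the dolomitising fluid and convert calcite cells they visit to
# dolomite. All parameters are arbitrary illustration values.
import random

random.seed(1)
W, H = 40, 20
grid = [["c"] * W for _ in range(H)]   # "c" = calcite, "d" = dolomite

# Fluid enters along the bottom row (e.g. a basal fracture network).
walkers = [(H - 1, x) for x in range(0, W, 4)]

for _ in range(200):                   # numerical "time" steps
    moved = []
    for (y, x) in walkers:
        grid[y][x] = "d"               # diagenetic overprint at visited cell
        dy, dx = random.choice([(-1, 0), (0, -1), (0, 1)])  # upward bias
        ny = min(max(y + dy, 0), H - 1)
        nx = min(max(x + dx, 0), W - 1)
        moved.append((ny, nx))
    walkers = moved

print("\n".join("".join(row) for row in grid))
```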
Abstract:
Many concepts have been developed to describe the convergence of media, languages, and formats in contemporary media systems. This article is a theoretical reflection on “transmedia storytelling” from a perspective that integrates semiotics and narratology in the context of media studies. After dealing with the conceptual chaos around transmedia storytelling, the article analyzes how these new multimodal narrative structures create different implicit consumers and construct a narrative world. The analysis includes a description of the multimedia textual structure created around the Fox television series 24. Finally, the article analyzes transmedia storytelling from the perspective of a semiotics of branding.
Abstract:
Intravascular brachytherapy with beta sources has become a useful technique to prevent restenosis after cardiovascular intervention. In particular, the Beta-Cath high-dose-rate system, manufactured by Novoste Corporation, is a commercially available 90Sr/90Y source for intravascular brachytherapy that is achieving widespread use. Its dosimetric characterization has attracted considerable attention in recent years. Unfortunately, the short ranges of the emitted beta particles and the associated large dose gradients make experimental measurements particularly difficult. This circumstance has motivated the appearance of a number of papers addressing the characterization of this source by means of Monte Carlo simulation techniques.
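For flavour only, the sketch below shows the radial binning and volume-normalisation pattern such Monte Carlo studies rely on, with a grossly simplified transport model (an isotropic point source whose betas travel an exponentially distributed straight-line range, assumed mean 2 mm in water) instead of real electron transport.

```python
# Grossly simplified Monte Carlo sketch of radial dose scoring around a
# beta source. Real codes track full electron transport; here each
# "beta" deposits all its energy at the end of a sampled straight-line
# range, just to show the binning/normalisation pattern.
import math, random

random.seed(0)
N_HIST   = 100_000
MEAN_RNG = 0.2               # cm, assumed effective range in water
BIN_W    = 0.05              # cm, radial bin width
edges    = [i * BIN_W for i in range(11)]
dose     = [0.0] * 10

for _ in range(N_HIST):
    r = random.expovariate(1.0 / MEAN_RNG)   # sampled path length (cm)
    i = int(r / BIN_W)
    if i < len(dose):
        dose[i] += 1.0                        # one unit of deposited energy

for i, d in enumerate(dose):
    # divide by the spherical shell volume so the steep gradient shows
    vol = 4.0 / 3.0 * math.pi * (edges[i + 1] ** 3 - edges[i] ** 3)
    print(f"{edges[i]:.2f}-{edges[i + 1]:.2f} cm: {d / vol / N_HIST:8.3f}")
```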
Abstract:
Chemical analysis is a well-established procedure for the provenancing of archaeological ceramics. Various analytical techniques are routinely used, and large amounts of data have been accumulated so far in data banks. However, in order to exchange results obtained by different laboratories, the respective analytical procedures need to be tested in terms of their inter-comparability. In this study, the schemes of analysis used in four laboratories that are involved in archaeological pottery studies on a routine basis were compared. The techniques investigated were neutron activation analysis (NAA), X-ray fluorescence analysis (XRF), inductively coupled plasma optical emission spectrometry (ICP-OES) and inductively coupled plasma mass spectrometry (ICP-MS). For this comparison, series of measurements on different geological standard reference materials (SRMs) were carried out and the results were statistically evaluated. An attempt was also made towards establishing calibration factors between pairs of analytical setups in order to smooth out the systematic differences among the results.
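A minimal sketch of the pairwise calibration-factor idea: with measurements of the same SRMs from two setups, a per-element factor can be taken as the ratio of means and used to map one laboratory's results onto the other's scale. The data and the ratio-of-means choice are illustrative assumptions, not the paper's actual procedure.

```python
# Sketch of deriving per-element calibration factors between two
# analytical setups from shared SRM measurements. Numbers are invented;
# the real study used NAA/XRF/ICP data.
import statistics as st

srm_results = {                       # ppm measured on the same SRMs
    "Cr": {"labA": [112, 109, 114], "labB": [101, 99, 103]},
    "Fe": {"labA": [50500, 50900],  "labB": [48100, 48600]},
}

factors = {
    el: st.mean(v["labA"]) / st.mean(v["labB"])
    for el, v in srm_results.items()
}

def harmonise(element, lab_b_value):
    """Map a lab B concentration onto lab A's scale."""
    return factors[element] * lab_b_value

print({el: round(f, 3) for el, f in factors.items()})
print(round(harmonise("Cr", 100.0), 1))
```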
Abstract:
A thorough literature review of the current state of implementation of eye lens monitoring has been performed in order to provide recommendations regarding dosemeter types, calibration procedures and practical aspects of eye lens monitoring for interventional radiology personnel. The most relevant data and recommendations from about 100 papers have been analysed and classified under the following topics: present challenges in eye lens monitoring; conversion coefficients, phantoms and calibration procedures for eye lens dose evaluation; correction factors and dosemeters for eye lens dose measurements; dosemeter position and the influence of protective devices. The major findings of the review can be summarised as follows: the recommended operational quantity for eye lens monitoring is Hp(3). At present, several dosemeters are available for eye lens monitoring and calibration procedures are being developed. However, in practice, alternative methods are very often used to assess the dose to the eye lens. A summary of correction factors found in the literature for the assessment of the eye lens dose is provided. These factors can give an estimate of the eye lens dose when alternative methods, such as the use of a whole body dosemeter, are used. A wide range of values is found, indicating the large uncertainty associated with these simplified methods. Reduction factors for the most common protective devices, obtained experimentally and using Monte Carlo calculations, are presented. The paper concludes that a dosemeter placed at collar level outside the lead apron can provide a useful first estimate of the eye lens exposure. However, for workplaces where the estimated annual equivalent dose to the eye lens is close to the dose limit, specific eye lens monitoring should be performed. Finally, training of the involved medical staff on the risks of ionising radiation for the eye lens and on the correct use of protective systems is strongly recommended.
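The first-estimate approach the review describes can be sketched as below; both numeric factors are placeholders, since the paper stresses that published correction and reduction factors span a wide range and should come from the local workplace evaluation.

```python
# First-estimate pattern described in the review: the eye lens dose is
# inferred from a collar dosemeter worn outside the lead apron. Both
# factors below are assumed placeholders, not recommended values.
COLLAR_TO_EYE = 0.75       # assumed Hp(10) at collar -> Hp(3) at eye
GLASSES_REDUCTION = 2.0    # assumed reduction factor of lead glasses

def eye_lens_dose_mSv(collar_reading_mSv, wears_glasses=True):
    dose = COLLAR_TO_EYE * collar_reading_mSv
    return dose / GLASSES_REDUCTION if wears_glasses else dose

print(f"{eye_lens_dose_mSv(24.0):.1f} mSv/y")  # for 24 mSv/y at the collar
```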
Abstract:
Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, to highlight links between documents produced by the same modus operandi or the same source, and thus to support forensic intelligence efforts. Inspired by previous research work on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from images. Acquisition conditions have been fine-tuned in order to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that the use of Hue and Edge filters, or their combination, to extract profiles from images, followed by the comparison of profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can easily be operated from remote locations and shared among different organisations, which makes it very convenient for future operational applications. The method could serve as a fast first triage step that helps target more resource-intensive profiling methods (based, for instance, on a visual, physical or chemical examination of documents). Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
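A minimal sketch of the profile-comparison stage, assuming hue histograms as profiles and the Canberra distance as the metric; the filters, ROI extraction and acquisition steps of the actual prototype are not reproduced.

```python
# Sketch of the comparison stage: documents are reduced to profiles
# (here, hue histograms of a region of interest) and scored with the
# Canberra distance; low distances suggest a common source.
import numpy as np

def hue_profile(hsv_roi, bins=64):
    """Normalised hue histogram of an HxWx3 HSV region of interest."""
    hist, _ = np.histogram(hsv_roi[..., 0], bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def canberra(p, q):
    denom = np.abs(p) + np.abs(q)
    mask = denom > 0                      # skip empty bins (0/0 terms)
    return float(np.sum(np.abs(p - q)[mask] / denom[mask]))

rng = np.random.default_rng(3)
doc_a = rng.integers(0, 256, (50, 80, 3))   # stand-ins for scanned ROIs
doc_b = (doc_a + rng.integers(0, 10, doc_a.shape)) % 256
print(round(canberra(hue_profile(doc_a), hue_profile(doc_b)), 3))
```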
Abstract:
Objective: The authors sought to study the calibration of a clinical PKA meter (Diamentor E2) and of a calibrator for clinical meters (PDC) at the Laboratory of Ionizing Radiation Metrology of the Instituto de Energia e Ambiente - Universidade de São Paulo. Materials and Methods: Different qualities of both incident and transmitted beams were utilized under conditions similar to a clinical setting, analyzing the influence of the reference dosimeter, of the distance between meters, of the filtration and of the average beam energy. Calibrations were performed directly against a standard 30 cm3 cylindrical chamber or a parallel-plate monitor chamber, and indirectly against the PDC meter. Results: The lowest energy dependence was observed for transmitted beams. The cross calibration between the Diamentor E2 and the PDC meters presented the greatest propagation of uncertainties. Conclusion: The calibration coefficient of the PDC meter proved more stable with voltage, while the Diamentor E2 calibration coefficient was more variable. On the other hand, calibration against the PDC meter presented greater uncertainty in the readings (5.0%) than calibration against the monitor chamber (3.5%).
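The two quantities discussed can be sketched as follows: a calibration coefficient as the reference dose-area product over the device reading, and a combined relative uncertainty as the quadrature sum of its components. The component budget shown is invented; the paper's reported figures are 5.0% (PDC route) and 3.5% (monitor chamber route).

```python
# Sketch of the two calibration routes described: coefficient from a
# reference reading, uncertainty combined in quadrature. The component
# budget below is an assumed illustration, not the paper's data.
import math

def cal_coefficient(reference_pka, reading):
    return reference_pka / reading        # e.g. Gy*cm2 per display unit

def combined_rel_unc(*components_pct):
    return math.sqrt(sum(c ** 2 for c in components_pct))

n_direct = cal_coefficient(10.20, 9.85)
u_direct = combined_rel_unc(2.5, 1.5, 1.8)    # assumed component budget
print(f"N = {n_direct:.3f}, u = {u_direct:.1f}%")
```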
Abstract:
This paper proposes a calibration method that can be utilized for the analysis of SEM images. The field of application of the developed method is the calculation of the surface potential distribution of a biased silicon edgeless detector. The suggested processing of the data collected by SEM consists of several stages and takes into account different aspects affecting the SEM image. The calibration method does not claim to be precise, but it does capture the basic shape of the potential distribution when different bias voltages are applied to the detector.
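One plausible reading of such a grey-level-to-potential calibration (a voltage-contrast interpretation, assumed here rather than taken from the paper) is an interpolation between grey levels recorded at known bias voltages.

```python
# Assumed voltage-contrast calibration: grey levels recorded at a few
# known bias voltages define a curve, which is inverted to map pixel
# intensity to local surface potential. Anchor points are invented; the
# paper's multi-stage corrections are not reproduced here.
import numpy as np

known_bias_v = np.array([0.0, 50.0, 100.0, 200.0])    # applied voltages
grey_at_bias = np.array([180.0, 150.0, 120.0, 70.0])  # measured grey levels

def potential_from_grey(grey):
    # np.interp needs ascending x, so flip the darkening-with-bias curve
    return np.interp(grey, grey_at_bias[::-1], known_bias_v[::-1])

print(float(potential_from_grey(135.0)))   # interpolated potential in V
```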
Abstract:
This thesis presents the calibration and comparison of two systems: a machine vision system that uses 3-channel RGB images and a line scanning spectral system. Calibration is the process of checking and adjusting the accuracy of a measuring instrument by comparing it with standards. For the RGB system, self-calibrating methods for finding various parameters of the imaging device were developed. Color calibration was performed, and the colors produced by the system were compared to the known color values of the target. Software drivers for the Sony Robot were developed, and a mechanical part to connect a camera to the robot was designed. For the line scanning spectral system, methods for calibrating the alignment of the system and measuring the dimensions of the line scanned by the system were developed. Color calibration of the spectral system is also presented.
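A common way to perform the kind of RGB colour calibration described is to solve a 3x3 correction matrix in the least-squares sense against a reference target; the sketch below uses invented patch values and is only one possible realisation, not the thesis's actual procedure.

```python
# Sketch of RGB colour calibration against a known target: solve a 3x3
# correction matrix so camera RGB best matches the target's reference
# values. The patch values below are invented.
import numpy as np

camera_rgb = np.array([[52., 40., 38.], [200., 198., 190.],
                       [90., 60., 45.], [30., 90., 140.]])
reference  = np.array([[56., 44., 41.], [210., 209., 203.],
                       [97., 65., 49.], [33., 96., 152.]])

# M minimises ||camera_rgb @ M - reference|| (per-channel linear model)
M, *_ = np.linalg.lstsq(camera_rgb, reference, rcond=None)

def correct(rgb):
    return np.clip(np.asarray(rgb) @ M, 0, 255)

print(correct([52., 40., 38.]).round(1))   # should land near [56, 44, 41]
```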
Abstract:
Traditionally, in the cigarette industry, the determination of the ammonium ion in mainstream smoke is performed by ion chromatography. This work studies this determination and compares the results obtained with external and internal standard calibration. A reference cigarette sample presented measurement uncertainties of 2.0 μg/cigarette and 1.5 μg/cigarette with the external and internal standard, respectively. The greatest source of uncertainty is observed to be the bias correction factor, which is even more significant when using the external standard, thus confirming the importance of internal standardization for this correction.
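A minimal sketch of internal-standard calibration as used in such work: the curve is fitted on analyte-to-internal-standard signal ratios versus concentration ratios, so run-to-run recovery variations cancel. All numbers are illustrative, not from the paper.

```python
# Sketch of internal-standard calibration for an IC determination of
# ammonium. Calibration points and the sample ratio are invented.
import numpy as np

conc_ratio   = np.array([0.5, 1.0, 2.0, 4.0])      # [NH4+]/[IS]
signal_ratio = np.array([0.26, 0.52, 1.05, 2.08])  # area(NH4+)/area(IS)
slope, intercept = np.polyfit(conc_ratio, signal_ratio, 1)

def nh4_per_cigarette(sample_ratio, is_amount_ug):
    """Back-calculate ug NH4+ from the measured signal ratio."""
    return (sample_ratio - intercept) / slope * is_amount_ug

print(round(nh4_per_cigarette(0.80, 20.0), 1))     # ug/cigarette
```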