1000 results for EFFICIENCY CALIBRATION


Relevance:

100.00%

Publisher:

Abstract:

This paper describes the in-beam efficiency calibration of the Neutron Detector Array of Peking University using N-17 and C-16 beams. The efficiencies of the neutron wall and the neutron ball are comparable to those of similar devices elsewhere, and neutrons are detected with high efficiency from low to high energies.

Relevance:

60.00%

Publisher:

Abstract:

The triple- and quadruple-escape peaks of 6.128 MeV photons from the (19)F(p,alpha gamma)(16)O nuclear reaction were observed in an HPGe detector. The experimental peak areas, measured in spectra projected with a restriction function that allows quantitative comparison of data from different multiplicities, are in reasonably good agreement with those predicted by Monte Carlo simulations done with the general-purpose radiation-transport code PENELOPE. The behaviour of the escape intensities was simulated for some gamma-ray energies and detector dimensions; the results obtained can be extended to other energies using an empirical function and statistical properties related to the phenomenon.
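
As a rough, generic illustration of the peak-area extraction step (not the restriction-function projection or the PENELOPE simulation described above), the sketch below fits a Gaussian on a linear background to an invented spectrum segment and integrates the peak; all channel ranges, counts and starting values are hypothetical.

```python
# Generic peak-area extraction: Gaussian on a linear background, fitted to an
# invented HPGe spectrum segment. Not the paper's projection method.
import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_line(x, amp, mu, sigma, a, b):
    """Gaussian peak superimposed on a linear background."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + a * x + b

# Hypothetical channel numbers and counts around a single escape peak.
channels = np.arange(2000, 2100, dtype=float)
rng = np.random.default_rng(0)
counts = gauss_plus_line(channels, 500.0, 2050.0, 4.0, -0.05, 120.0)
counts += rng.normal(0.0, 5.0, channels.size)

p0 = [counts.max() - counts.min(), channels[np.argmax(counts)], 3.0, 0.0, counts.min()]
popt, pcov = curve_fit(gauss_plus_line, channels, counts, p0=p0)

amp, mu, sigma = popt[0], popt[1], abs(popt[2])
area = amp * sigma * np.sqrt(2.0 * np.pi)          # net counts in the peak
rel_var = pcov[0, 0] / amp**2 + pcov[2, 2] / sigma**2 + 2.0 * pcov[0, 2] / (amp * sigma)
print(f"net peak area = {area:.0f} +/- {area * np.sqrt(rel_var):.0f} counts")
```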

Relevance:

60.00%

Publisher:

Abstract:

We measured the K-41 thermal neutron absorption and resonance integral cross sections after the irradiation of KNO3 samples near the core of the IEA-R1 IPEN pool-type research reactor. Bare and cadmium-covered targets were irradiated in pairs with Au-Al alloy flux-monitors. The residual activities were measured by gamma-ray spectroscopy with a HPGe detector, with special care to avoid the K-42 decay beta(-) emission effects on the spectra. The gamma-ray self-absorption was corrected with the help of MCNP simulations. We applied the Westcott formalism in the average neutron flux determination and calculated the depression coefficients for thermal and epithermal neutrons due to the sample thickness with analytical approximations. We obtained 1.57(4) and 1.02(4) b, for thermal and resonance integral cross sections, respectively, with correlation coefficient equal to 0.39.
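
For orientation, the sketch below shows the simple cadmium-difference bookkeeping that underlies this kind of relative measurement, with the thermal and epithermal fluxes taken from an Au-197 monitor. Westcott g-factors, cadmium transmission, self-shielding and flux-depression corrections, all of which the paper evaluates explicitly, are set to one here, and every activity and atom number is invented.

```python
# Simplified cadmium-difference picture of a relative activation measurement.
# Corrections treated carefully in the paper (Westcott g-factors, Cd cut-off,
# self-shielding, flux depression, gamma self-absorption) are omitted, and all
# numerical inputs are hypothetical.
SIG0_AU = 98.65e-24   # Au-197 thermal (2200 m/s) cross section, cm^2
I0_AU   = 1550e-24    # Au-197 resonance integral, cm^2

def fluxes_from_monitor(a_bare, a_cd, n_atoms):
    """Thermal and epithermal fluxes (cm^-2 s^-1) from monitor saturation activities."""
    phi_epi = a_cd / (n_atoms * I0_AU)
    phi_th = (a_bare - a_cd) / (n_atoms * SIG0_AU)
    return phi_th, phi_epi

def sample_cross_sections(a_bare, a_cd, n_atoms, phi_th, phi_epi):
    """Sample thermal cross section and resonance integral, in barns."""
    i0 = a_cd / (n_atoms * phi_epi) * 1e24
    sig0 = (a_bare - a_cd) / (n_atoms * phi_th) * 1e24
    return sig0, i0

# Hypothetical saturation activities (Bq) and numbers of target atoms.
phi_th, phi_epi = fluxes_from_monitor(a_bare=1.0e8, a_cd=7.8e4, n_atoms=1.0e17)
sig0, i0 = sample_cross_sections(a_bare=6.0e7, a_cd=2.0e6, n_atoms=3.0e21,
                                 phi_th=phi_th, phi_epi=phi_epi)
print(f"sigma_0 = {sig0:.2f} b, I_0 = {i0:.2f} b")
```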

Relevance:

60.00%

Publisher:

Abstract:

Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)

Relevance:

40.00%

Publisher:

Abstract:

A measurement of the top-quark pair-production cross section in ppbar collisions at sqrt{s}=1.96 TeV using data corresponding to an integrated luminosity of 1.12/fb collected with the Collider Detector at Fermilab is presented. Decays of top-quark pairs into the final states e nu + jets and mu nu + jets are selected, and the cross section and the b-jet identification efficiency are determined using a new measurement technique which requires that the measured cross sections with exactly one and multiple identified b-quarks from the top-quark decays agree. Assuming a top-quark mass of 175 GeV/c^2, a cross section of 8.5+/-0.6(stat.)+/-0.7(syst.) pb is measured.
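
The counting logic behind this technique can be shown in toy form: if every selected top-pair event contains two taggable b jets with per-jet tagging efficiency eps, the single-tag and double-tag yields scale as 2*eps*(1-eps) and eps^2, so requiring both samples to give the same cross section fixes eps and sigma at once. The yields, the combined luminosity-times-acceptance factor, and the neglect of backgrounds and additional b jets in the sketch below are all invented simplifications.

```python
# Toy consistency solution: same cross section from the 1-tag and 2-tag samples.
n_1tag = 400.0                 # events with exactly one identified b jet (invented)
n_2tag = 90.0                  # events with two identified b jets (invented)
lumi_times_acc = 1120.0 * 0.05 # integrated luminosity (pb^-1) x acceptance (invented)

r = n_2tag / n_1tag
eps_b = 2.0 * r / (1.0 + 2.0 * r)                    # per-jet b-tag efficiency
sigma = n_2tag / (lumi_times_acc * eps_b ** 2)       # pb, from the 2-tag sample

# By construction the 1-tag sample now gives the same value.
sigma_check = n_1tag / (lumi_times_acc * 2.0 * eps_b * (1.0 - eps_b))
print(f"eps_b = {eps_b:.2f}, sigma = {sigma:.1f} pb (1-tag check: {sigma_check:.1f} pb)")
```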

Relevance:

40.00%

Publisher:

Abstract:

Paired-tow calibration studies provide information on changes in survey catchability that may occur because of some necessary change in protocols (e.g., change in vessel or vessel gear) in a fish stock survey. This information is important to ensure the continuity of annual time-series of survey indices of stock size that provide the basis for fish stock assessments. There are several statistical models used to analyze the paired-catch data from calibration studies. Our main contributions are results from simulation experiments designed to measure the accuracy of statistical inferences derived from some of these models. Our results show that a model commonly used to analyze calibration data can provide unreliable statistical results when there is between-tow spatial variation in the stock densities at each paired-tow site. However, a generalized linear mixed-effects model gave very reliable results over a wide range of spatial variations in densities and we recommend it for the analysis of paired-tow survey calibration data. This conclusion also applies if there is between-tow variation in catchability.
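
A minimal sketch of this style of analysis on simulated paired-tow data is given below. For brevity it fits a linear mixed model to log-transformed catches with statsmodels rather than the generalized linear mixed model evaluated in the paper; the fixed vessel effect plays the role of the calibration factor and the random site effect absorbs between-site density differences. Data, effect sizes and column names are invented.

```python
# Simulated paired-tow calibration data analysed with a linear mixed model on
# log catches (a simplified stand-in for the GLMM recommended in the paper).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_sites = 80
true_log_ratio = np.log(1.4)        # vessel B assumed 40% more catchable (invented)

rows = []
for site in range(n_sites):
    site_log_density = rng.normal(3.0, 1.0)
    for vessel, offset in (("A", 0.0), ("B", true_log_ratio)):
        # between-tow spatial variation in density enters as within-site noise
        rows.append({"site": site, "vessel": vessel,
                     "log_catch": site_log_density + offset + rng.normal(0.0, 0.3)})
data = pd.DataFrame(rows)

model = smf.mixedlm("log_catch ~ vessel", data, groups=data["site"])
result = model.fit()
print(result.summary())
print("estimated calibration factor:", float(np.exp(result.params["vessel[T.B]"])))
```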

Relevance:

30.00%

Publisher:

Abstract:

Wireless Multimedia Sensor Networks (WMSNs) have become increasingly popular in recent years, driven in part by the increasing commoditization of small, low-cost CMOS sensors. As such, the challenge of automatically calibrating these camera nodes has become an important research problem, especially when a large number of such devices are deployed. This paper presents a method for automatically calibrating a wireless camera node with the ability to rotate around one axis. The method involves capturing images as the camera is rotated and computing the homographies between the images. The camera parameters, including focal length, principal point and the angle and axis of rotation, can then be recovered from two or more homographies. The homography computation algorithm is designed to deal with the limited resources of the wireless sensor and to minimize energy consumption. In this paper, a modified RANdom SAmple Consensus (RANSAC) algorithm is proposed to effectively increase the efficiency and reliability of the calibration procedure.
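
A sketch of the core homography step is shown below using OpenCV's stock ORB features and RANSAC-based homography estimation; the paper's contribution is a modified, resource-aware RANSAC, which is not reproduced here, and the image file names are placeholders.

```python
# Homography between two frames from a rotating camera node, using OpenCV's
# standard RANSAC (not the modified RANSAC proposed in the paper).
import cv2
import numpy as np

img1 = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robust homography between the two rotated views; the inlier mask marks the
# correspondences consistent with a rotation about the camera centre.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
print("homography:\n", H)
print("inlier ratio:", float(inliers.mean()))
```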

Relevance:

30.00%

Publisher:

Abstract:

Variable Speed Limits (VSL) are a control tool of Intelligent Transportation Systems (ITS) that can enhance traffic safety and have the potential to contribute to traffic efficiency. This study presents the results of a calibration and operational analysis of a candidate VSL algorithm for high-flow conditions on an urban motorway in Queensland, Australia. The analysis was done using a framework consisting of a microscopic simulation model combined with a runtime API and a proposed efficiency index. The operational analysis includes impacts on the speed-flow curve, travel time, speed deviation, fuel consumption and emissions.
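
The abstract does not spell out the proposed efficiency index, so the sketch below only computes generic operational metrics of the kind listed (travel time, speed deviation, and space-mean speed as a crude throughput proxy) from hypothetical per-vehicle simulation output; all column names and values are assumptions.

```python
# Generic operational metrics from hypothetical microsimulation output; the
# paper's specific efficiency index is not reproduced here.
import pandas as pd

runs = pd.DataFrame({
    "vehicle_id":     [1, 2, 3, 4, 5],
    "travel_time_s":  [310.0, 298.5, 340.2, 275.9, 322.1],
    "distance_m":     [5000.0] * 5,
    "mean_speed_kmh": [58.1, 60.3, 52.9, 65.2, 55.9],
})

mean_travel_time = runs["travel_time_s"].mean()
speed_deviation = runs["mean_speed_kmh"].std(ddof=1)
space_mean_speed = runs["distance_m"].sum() / runs["travel_time_s"].sum() * 3.6

print(f"mean travel time: {mean_travel_time:.1f} s")
print(f"speed deviation:  {speed_deviation:.1f} km/h")
print(f"space-mean speed: {space_mean_speed:.1f} km/h")
```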

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an approach for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum of two images from the camera, provided that the axis of rotation between the two images goes through the camera’s optical center and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. Previous methods for auto-calibration of cameras based on pure rotations fail to work in these two degenerate cases. In addition, our approach includes a modified RANdom SAmple Consensus (RANSAC) algorithm, as well as improved integration of the radial distortion coefficient in the computation of inter-image homographies. We show that these modifications are able to increase the overall efficiency, reliability and accuracy of the homography computation and calibration procedure using both synthetic and real image sequences.
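
For the degenerate pan-only case, the relation between the inter-image homography, the focal length and the rotation angle can be written down directly if one assumes square pixels, zero skew and the principal point at the image origin (K = diag(f, f, 1)); the paper additionally recovers the principal point and a radial distortion parameter. The sketch below builds a synthetic pan homography and reads those two quantities back off it.

```python
# Pure pan about the optical centre: H ~ K R_y(theta) K^{-1}, with K = diag(f, f, 1).
# Synthetic example only; principal point and lens distortion are ignored.
import numpy as np

def pan_homography(f, theta):
    K = np.diag([f, f, 1.0])
    c, s = np.cos(theta), np.sin(theta)
    R_y = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    return K @ R_y @ np.linalg.inv(K)

# Simulate an estimated homography, including an arbitrary projective scale.
H = 2.7 * pan_homography(f=800.0, theta=np.deg2rad(6.0))

H = H / H[1, 1]                       # fix the scale via the unit (1,1) entry
f_est = np.sqrt(-H[0, 2] / H[2, 0])   # H[0,2] = f*sin(theta), H[2,0] = -sin(theta)/f
theta_est = np.arctan2(H[0, 2] / f_est, H[0, 0])

print(f"recovered focal length: {f_est:.1f} px")
print(f"recovered pan angle:    {np.degrees(theta_est):.2f} deg")
```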

Relevance:

30.00%

Publisher:

Abstract:

The deployment of new emerging technologies, such as cooperative systems, allows the traffic community to foresee relevant improvements in terms of traffic safety and efficiency. Autonomous vehicles are able to share information about the local traffic state in real time, which could result in a better reaction to the mechanism of traffic jam formation. An upstream single-hop radio broadcast network can improve the perception of each cooperative driver within a specific radio range and hence the traffic stability. The impact of vehicle-to-vehicle cooperation on the onset of traffic congestion is investigated analytically and through simulation. A Next Generation Simulation (NGSIM) field dataset is used to calibrate the full velocity difference car-following model, and the MOBIL lane-changing model is implemented. The robustness of the calibration as well as the heterogeneity of the drivers is discussed. Assuming that congestion can be triggered either by the heterogeneity of drivers' behaviours or abnormal lane-changing behaviours, the calibrated car-following model is used to assess the impact of a microscopic cooperative law on egoistic lane-changing behaviours. The cooperative law can help reduce and delay traffic congestion and can have a positive effect on safety indicators.
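
For reference, the full velocity difference (FVD) car-following rule calibrated in the study has the generic form a = kappa*(V(gap) - v) + lambda*(v_leader - v) with an optimal-velocity function V; the sketch below runs that rule for one follower behind a briefly braking leader. The optimal-velocity shape and all parameter values are illustrative defaults, not the values calibrated from the field data.

```python
# Full velocity difference (FVD) car-following model, illustrative parameters.
import numpy as np

def optimal_velocity(gap, v_max=30.0, gap_c=20.0, width=10.0):
    """Smooth optimal speed: near zero at small gaps, saturating at v_max."""
    return 0.5 * v_max * (np.tanh((gap - gap_c) / width) + np.tanh(gap_c / width))

def fvd_accel(gap, v, dv, kappa=0.6, lam=0.5):
    """FVD acceleration; dv is leader speed minus follower speed."""
    return kappa * (optimal_velocity(gap) - v) + lam * dv

dt, t_end = 0.1, 60.0
x_lead, v_lead = 100.0, 25.0
x_fol, v_fol = 60.0, 25.0

for step in range(int(t_end / dt)):
    t = step * dt
    a_lead = -2.0 if 10.0 <= t < 15.0 else 0.0        # leader brakes briefly
    a_fol = fvd_accel(x_lead - x_fol, v_fol, v_lead - v_fol)

    v_lead = max(0.0, v_lead + a_lead * dt)
    v_fol = max(0.0, v_fol + a_fol * dt)
    x_lead += v_lead * dt
    x_fol += v_fol * dt

print(f"final gap: {x_lead - x_fol:.1f} m, follower speed: {v_fol:.1f} m/s")
```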

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: The purpose of this work is to improve the noise power spectrum (NPS), and thus the detective quantum efficiency (DQE), of computed radiography (CR) images by correcting for spatial gain variations specific to individual imaging plates. CR devices have not traditionally employed gain-map corrections, unlike the case with flat-panel detectors, because of the multiplicity of plates used with each reader. The lack of gain-map correction has limited the DQE(f) at higher exposures with CR. This current work describes a feasible solution to generating plate-specific gain maps. METHODS: Ten high-exposure open field images were taken with an RQA5 spectrum, using a sixth generation CR plate suspended in air without a cassette. Image values were converted to exposure, the plates registered using fiducial dots on the plate, the ten images averaged, and then high-pass filtered to remove low frequency contributions from field inhomogeneity. A gain-map was then produced by converting all pixel values in the average into fractions with mean of one. The resultant gain-map of the plate was used to normalize subsequent single images to correct for spatial gain fluctuation. To validate performance, the normalized NPS (NNPS) for all images was calculated both with and without the gain-map correction. Variations in the quality of correction due to exposure levels, beam voltage/spectrum, CR reader used, and registration were investigated. RESULTS: The NNPS with plate-specific gain-map correction showed improvement over the noncorrected case over the range of frequencies from 0.15 to 2.5 mm(-1). At high exposure (40 mR), NNPS was 50%-90% better with gain-map correction than without. A small further improvement in NNPS was seen from carefully registering the gain-map with subsequent images using small fiducial dots, because of slight misregistration during scanning. Further improvement was seen in the NNPS from scaling the gain map about the mean to account for different beam spectra. CONCLUSIONS: This study demonstrates that a simple gain-map can be used to correct for the fixed-pattern noise in a given plate and thus improve the DQE of CR imaging. Such a method could easily be implemented by manufacturers because each plate has a unique bar code and the gain-map for all plates associated with a reader could be stored for future retrieval. These experiments indicated that an improvement in NPS (and hence, DQE) is possible, depending on exposure level, over a wide range of frequencies with this technique.
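
A compact sketch of the plate-specific gain-map idea is given below: average several registered flat exposures, suppress the low-frequency field inhomogeneity (here by dividing by a heavily smoothed copy, one simple form of high-pass step), normalise the result to a mean of one, and divide subsequent images by that map. Array sizes, the smoothing scale and the synthetic data are illustrative, not the RQA5 measurement conditions of the study.

```python
# Plate-specific gain-map correction, schematic version with synthetic data.
import numpy as np
from scipy.ndimage import gaussian_filter

def build_gain_map(flat_stack, lowpass_sigma_px=50.0):
    """Gain map with mean 1 from a stack of registered flat exposures."""
    avg = flat_stack.mean(axis=0)
    low_freq = gaussian_filter(avg, sigma=lowpass_sigma_px)  # field inhomogeneity
    fixed_pattern = avg / low_freq                           # keep plate structure only
    return fixed_pattern / fixed_pattern.mean()

def apply_gain_correction(image, gain_map):
    """Remove the fixed-pattern gain variation from a single exposure image."""
    return image / gain_map

# Synthetic plate: a fixed 2% pixel-to-pixel gain pattern plus quantum noise.
rng = np.random.default_rng(1)
true_gain = 1.0 + 0.02 * rng.standard_normal((256, 256))
flats = np.stack([1000.0 * true_gain + rng.normal(0.0, 10.0, (256, 256))
                  for _ in range(10)])

gain_map = build_gain_map(flats)
raw = 500.0 * true_gain + rng.normal(0.0, 7.0, (256, 256))
corrected = apply_gain_correction(raw, gain_map)
print(f"pixel std before: {raw.std():.1f}  after: {corrected.std():.1f}")
```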

Relevance:

30.00%

Publisher:

Abstract:

The need for standardization of the measured blow count number N-spt into a normalized reference energy value is now fully recognized. The present paper extends the existing theoretical approach using the wave propagation theory as framework and introduces an analysis for large displacements enabling the influence of rod length on the measured N-spt values to be quantified. The study is based on both calibration chamber and field tests. Energy measurements are monitored in two different positions: below the anvil and above the sampler. Both experimental and numerical results demonstrate that whereas the energy delivered into the rod stem is expressed as a ratio of the theoretical free-fall energy of the hammer, the effective sampler energy is a function of the hammer height of fall, sampler permanent penetration, and weight of both hammer and rods. The influence of rod length is twofold and produces opposite effects: wave energy losses increase with increasing rod length, while in a long rod string the gain in potential energy from rod weight is significant and may partially compensate the measured energy losses. Based on this revised approach, an analytical solution is proposed to calculate the energy delivered to the sampler, and efficiency coefficients are suggested to account for energy losses during the energy transfer process.
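
A hedged reading of that energy balance is sketched below: the energy reaching the sampler is taken as the hammer's potential energy over the drop height plus the permanent penetration, plus the potential-energy gain of the rod string over the penetration, each scaled by an efficiency coefficient. The functional form and every coefficient and input value are illustrative assumptions, not the analytical solution or calibrated coefficients proposed in the paper.

```python
# Schematic sampler-energy balance for one SPT blow (illustrative coefficients).
G = 9.81  # m/s^2

def sampler_energy(m_hammer, drop_height, m_rods, penetration,
                   eta_hammer=0.75, eta_rods=1.0, eta_system=0.95):
    """Energy (J) delivered to the sampler in one blow, simplified bookkeeping."""
    # Hammer potential energy, including the extra fall over the permanent
    # penetration, reduced by losses between hammer and rod stem.
    e_hammer = eta_hammer * m_hammer * G * (drop_height + penetration)
    # Potential-energy gain of the rod string as it follows the penetration;
    # for long rod strings this partially offsets wave-propagation losses.
    e_rods = eta_rods * m_rods * G * penetration
    return eta_system * (e_hammer + e_rods)

# Hypothetical inputs: 65 kg hammer, 0.76 m drop, 60 kg of rods, 3 cm per blow.
e = sampler_energy(m_hammer=65.0, drop_height=0.76, m_rods=60.0, penetration=0.03)
e_nominal = 65.0 * G * 0.76
print(f"delivered energy: {e:.0f} J ({100.0 * e / e_nominal:.0f}% of nominal free-fall)")
```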

Relevance:

30.00%

Publisher:

Abstract:

Eucalyptus plantations occupy almost 20 million ha worldwide and exceed 3.7 million ha in Brazil alone. Improved genetics and silviculture have led to as much as a three-fold increase in productivity in Eucalyptus plantations in Brazil and the large land area occupied by these highly productive ecosystems raises concern over their effect on local water supplies. As part of the Brazil Potential Productivity Project, we measured water use of Eucalyptus grandis x urophylla clones in rainfed and irrigated stands in two plantations differing in productivity. The Aracruz (lower productivity) site is located in the state of Espirito Santo and the Veracel (higher productivity) site in Bahia state. At each plantation, we measured stand water use using homemade sap flow sensors and a calibration curve using the clones and probes we utilized in the study. We also quantified changes in growth, leaf area and water use efficiency (the amount of wood produced per unit of water transpired). Measurements were conducted for 1 year during 2005 at Aracruz and from August through December 2005 at Veracel. Transpiration at both sites was high compared to other studies but annual estimates at Aracruz for the rainfed treatment compared well with a process model calibrated for the Aracruz site (within 10%). Annual water use at Aracruz was 1394 mm in rainfed treatments versus 1779 mm in irrigated treatments and accounted for approximately 67% and 58% of annual precipitation and irrigation inputs respectively. Increased water use in the irrigated stands at Aracruz was associated with higher sapwood area, leaf area index and transpiration per unit leaf area but there was no difference in the response of canopy conductance with air saturation deficit between treatments. Water use efficiency at the Aracruz site was also not influenced by irrigation and was similar to the rainfed treatment. During the period of overlapping measurements, the response to irrigation treatments at the more productive Veracel site was similar to that at Aracruz. Stand water use at the Veracel site totaled 975 mm and 1102 mm in rainfed and irrigated treatments during the 5-month measurement period respectively. Irrigated stands at Veracel also had higher leaf area with no difference in the response of canopy conductance with air saturation deficit between treatments. Water use efficiency was also unaffected by irrigation at Veracel. Results from this and other studies suggest that improved resource availability does not negatively impact water use efficiency but increased productivity of these plantations is associated with higher water use and should be given consideration during plantation management decision making processes aimed at increasing productivity.

Relevance:

30.00%

Publisher:

Abstract:

This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that are feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters to actually achieved parameters that then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, different from the corresponding manual calibration strategy and resulting in lower emissions and efficiency, is intended to improve rather than replace the manual calibration process.
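
As a small illustration of the dynamic-constraint idea, the sketch below passes a commanded parameter trajectory through a unit-gain second-order lag to obtain the trajectory actually achieved, which is what the transient emission and torque models would then see. The time constant and damping are placeholders, not values identified from engine data.

```python
# Second-order linear "dynamic constraint": commanded -> achieved trajectory.
import numpy as np
from scipy import signal

def achieved_trajectory(commanded, t, tau=0.8, zeta=0.9):
    """Pass a commanded trajectory through a unit-gain second-order lag."""
    system = signal.lti([1.0], [tau ** 2, 2.0 * zeta * tau, 1.0])
    _, achieved, _ = signal.lsim(system, U=commanded, T=t)
    return achieved

# Commanded step in an air-handling setpoint (e.g. EGR or VGT position) at t = 5 s.
t = np.linspace(0.0, 20.0, 2001)
commanded = np.where(t < 5.0, 0.0, 0.6)

achieved = achieved_trajectory(commanded, t)
print(f"commanded at t = 6 s: {commanded[600]:.2f}, achieved: {achieved[600]:.2f}")
```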