975 results for Point measurement


Relevance:

30.00%

Publisher:

Abstract:

The ground-state thermal neutron cross section and the resonance integral for the 165Ho(n,γ)166Ho reaction were measured by the activation technique in the thermal and 1/E regions, respectively, of a thermal reactor neutron spectrum. The reaction product, 166Ho in its ground state, is gaining considerable importance as a therapeutic radionuclide, and precisely measured data for the reaction are significant both from the fundamental point of view and for applications. In this work, spectrographically pure holmium oxide (Ho2O3) powder samples were irradiated with and without cadmium covers at the IEA-R1 reactor (IPEN, São Paulo, Brazil). The deviation of the neutron spectrum shape from the 1/E law was measured by co-irradiating Co, Zn, Zr and Au activation detectors with thermal and epithermal neutrons, followed by regression and iterative procedures. The magnitude of the discrepancies that can arise when an ideal 1/E law is assumed in the epithermal range was studied. The measured thermal neutron cross section at the Maxwellian-averaged thermal energy of 0.0253 eV is 59.0 ± 2.1 b, and the resonance integral is 657 ± 36 b. The results were obtained with good precision and indicate a consistent trend that helps resolve the discrepant status of the literature data. They are compared with the values in the main libraries, such as ENDF/B-VII, JEF-2.2 and JENDL-3.2, and with other measurements in the literature.
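
As an illustration of the cadmium-ratio logic behind such activation measurements, the sketch below evaluates the idealized textbook relation I0 = σ0·f/(R_Cd − 1), which assumes a pure 1/E epithermal spectrum; it is not the regression and iterative procedure used in this work, and the flux ratio f and cadmium ratio R_Cd are hypothetical values chosen only for illustration.

# Minimal sketch of the idealized cadmium-ratio relation used in activation
# analysis. All inputs other than sigma_0 (quoted in the abstract) are hypothetical.

def resonance_integral(sigma_0_barn: float, flux_ratio: float, cadmium_ratio: float) -> float:
    """I0 = sigma_0 * f / (R_Cd - 1), with f = phi_thermal / phi_epithermal."""
    return sigma_0_barn * flux_ratio / (cadmium_ratio - 1.0)

if __name__ == "__main__":
    sigma_0 = 59.0   # b, thermal cross section at 0.0253 eV (from the abstract)
    f = 30.0         # hypothetical thermal-to-epithermal flux ratio
    r_cd = 3.7       # hypothetical cadmium ratio for the Ho monitor
    print(f"I0 ~ {resonance_integral(sigma_0, f, r_cd):.0f} b (illustrative only)")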

Relevance:

30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to survey radiographic measurement estimation in the assessment of dental implant length, according to dentists' reported confidence. A 19-point questionnaire with closed-ended questions was used by two graduate students to interview 69 dentists during a dental implant meeting. It included 12 questions on over- and underestimation in three radiographic modalities: panoramic (P), conventional tomography (T), and computerized tomography (CT). The database was analyzed with Epi-Info 6.04 software, and the values from the two radiographic modalities P and T were compared using a chi-square (χ²) test. The results showed that 38.24% of the dentists reported overestimation of measurements in P, 30.56% in T, and 0% in CT; for underestimated measurements, the percentages were 47.06% in P, 33.33% in T, and 1.92% in CT. The difference in the frequency of under- and overestimation between P and T was statistically significant (χ² = 6.32; P = .0425). CT was the radiographic modality with the highest measurement precision according to the dentists' confidence. In conclusion, the interviewed dentists felt that CT was the best radiographic modality with respect to measurement estimation precision in preoperative dental implant assessment.
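
For readers unfamiliar with the statistic quoted above, the sketch below runs a chi-square test of independence on a 2x2 contingency table of over- and underestimation counts for the P and T modalities; the counts are invented for illustration and are not the study's data.

import scipy.stats as stats

# Hypothetical counts of dentists reporting over- vs underestimation;
# rows = modality (P, T), columns = (overestimation, underestimation).
table = [
    [13, 16],  # panoramic (P)
    [11, 12],  # conventional tomography (T)
]

chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")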

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Determination of the utility harmonic impedance from measurements is a significant task for utility power-quality improvement and management. Compared with the well-established, accurate invasive methods, noninvasive methods are more desirable because they work with the natural variations of the loads connected to the point of common coupling (PCC), so that no intentional disturbance is needed; their accuracy, however, still has to be improved. In this context, this paper first points out that the critical problem of the noninvasive methods is how to select the measurements that can be used with confidence for the utility harmonic impedance calculation. It then presents a new measurement technique based on complex-data least-squares regression, combined with two data-selection techniques. Simulation and field-test results show that the proposed noninvasive method is practical and robust, so that it can be used with confidence to determine utility harmonic impedances.
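
A minimal sketch of the core idea, complex least-squares estimation of the utility harmonic impedance from voltage and current variations at the PCC, is given below on synthetic data; the simple amplitude-threshold selection rule stands in for (and is not) the two data-selection techniques proposed in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic complex harmonic voltage/current variations at the PCC (hypothetical).
z_true = 1.2 + 3.5j                                   # "unknown" utility harmonic impedance, ohm
d_i = rng.normal(size=500) + 1j * rng.normal(size=500)  # current variations, A
d_v = z_true * d_i + 0.3 * (rng.normal(size=500) + 1j * rng.normal(size=500))  # plus background noise

# Naive data selection: keep only samples with a sizeable current variation.
mask = np.abs(d_i) > 1.0

# Complex least-squares regression d_v ~ Z * d_i on the selected samples.
A = d_i[mask].reshape(-1, 1)
z_est, *_ = np.linalg.lstsq(A, d_v[mask], rcond=None)
print("estimated Z =", np.round(z_est[0], 3), "ohm; true Z =", z_true)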

Relevance:

30.00%

Publisher:

Abstract:

This thesis follows a substantial contribution to the realization of the CMS computing system, which can be seen as a relevant part of the experiment itself. A physics analysis completes the road from Monte Carlo production and analysis-tool development to the final physics study, which is the actual goal of the experiment. The physics topic of this thesis is the study of the fully hadronic decay of tt events in the CMS experiment. A multi-jet trigger has been defined to provide a reasonable starting point, reducing the multi-jet sample to the nominal trigger rate. An offline selection has been devised to improve the S/B ratio, and b-tagging is applied to improve it further. The selection is applied to the background sample and to samples generated at different top-quark masses. The top-quark mass candidate is reconstructed for all these samples using a kinematic fitter. The resulting distributions are used to build p.d.f.s, interpolating them with a continuous arbitrary curve. These curves are then used to perform the top-mass measurement through a likelihood comparison.
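
The final step described above, comparing the reconstructed-mass distribution against mass-dependent p.d.f.s through a likelihood, can be sketched as below with Gaussian templates standing in for the interpolated curves; the numbers and the Gaussian shape are assumptions made only for illustration.

import numpy as np

rng = np.random.default_rng(1)

def template_pdf(m, m_top):
    """Toy Gaussian p.d.f. for the reconstructed top-mass candidate,
    standing in for the interpolated template of the analysis."""
    mean, width = m_top, 18.0   # hypothetical resolution, GeV
    return np.exp(-0.5 * ((m - mean) / width) ** 2) / (width * np.sqrt(2 * np.pi))

# Pseudo-data: reconstructed mass candidates generated at a "true" mass.
m_candidates = rng.normal(172.5, 18.0, size=200)

# Scan the negative log-likelihood over top-mass hypotheses and pick the minimum.
hypotheses = np.arange(160.0, 186.0, 0.5)
nll = [-np.sum(np.log(template_pdf(m_candidates, m_t))) for m_t in hypotheses]
print(f"best-fit m_top ~ {hypotheses[int(np.argmin(nll))]:.1f} GeV")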

Relevance:

30.00%

Publisher:

Abstract:

The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in medium-voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines that have been issued. The quality of the voltage provided by utilities, and influenced by customers at the various points of a network, has come to the fore only in recent years, in particular as a consequence of the liberalization of the energy market. Traditionally, the quality of the delivered energy has been associated mostly with its continuity, so reliability was the main characteristic to be ensured for power systems. Nowadays, the number and duration of interruptions are the "quality indicators" most commonly perceived by customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power-quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability as well.

Given the vast range of power-quality degrading phenomena that can occur in distribution networks, the study focused on electromagnetic transients affecting line voltages. The outcome of this study has been the design and realization of a distributed measurement system that continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system, and they must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowledge of the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by the technical staff to recover from the abnormal condition and/or the damage.

The part of the thesis presenting the results of this study and activity is structured as follows. Chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation; the state of the art of methods to detect and locate faults in distribution networks is then presented, and finally attention is paid to the particular technique adopted for this purpose in the thesis and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault-location method has been applied by means of simulations, together with the results obtained case by case; in this way the performance of the location procedure is tested first under ideal and then under realistic operating conditions. Chapter 5 presents the measurement system designed to implement the transient-detection and fault-location method: the hardware belonging to the measurement chain of every acquisition channel in the remote stations is described, the global measurement system is then characterized by considering the non-ideal aspects of each device that contribute to the final combined uncertainty on the estimated position of the fault in the network under test, and this parameter is computed by means of a numerical procedure, according to the Guide to the Expression of Uncertainty in Measurement. The last chapter describes a device designed and realized during the PhD activity to replace the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study was carried out to provide an alternative to the transducer in use, with equivalent performance at lower cost; in this way the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
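
As a complement to the description above, the sketch below shows the simplest two-ended time-of-arrival location rule on a single line; it is a textbook simplification under an assumed constant propagation speed, not the procedure developed in the thesis, and all numbers are hypothetical.

def locate_fault(t_a_s: float, t_b_s: float, line_length_m: float, wave_speed_m_s: float) -> float:
    """Distance of the fault from station A, given the arrival times of the
    transient wavefront at the two line ends (idealized lossless line)."""
    return 0.5 * (line_length_m + wave_speed_m_s * (t_a_s - t_b_s))

if __name__ == "__main__":
    # Hypothetical values: 10 km feeder, 150 m/us propagation speed,
    # wavefront seen 20 us earlier at station A than at station B.
    x = locate_fault(t_a_s=0.0, t_b_s=20e-6, line_length_m=10_000.0, wave_speed_m_s=1.5e8)
    print(f"estimated fault position: {x:.0f} m from station A")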

Relevance:

30.00%

Publisher:

Abstract:

Quality control of medical radiological systems is of fundamental importance and requires efficient methods for accurately determining the X-ray source spectrum. Straightforward measurement of X-ray spectra under standard operating conditions requires limiting the very high photon flux, and therefore the measurement has to be performed in a laboratory. Optimal quality control, however, requires frequent in situ measurements, which can only be performed with a portable system. To reduce the photon flux by three orders of magnitude, an indirect technique based on the scattering of the X-ray source beam by a solid target is used. The measured spectrum lacks information because of transport and detection effects. The solution is then unfolded by solving the matrix equation that formally represents the scattering problem. However, the algebraic system is ill-conditioned and it is therefore not possible to obtain a satisfactory solution directly; special strategies are necessary to circumvent the ill-conditioning. Numerous attempts have been made to solve this problem by purely mathematical methods. In this thesis, a more physical point of view is adopted: the proposed method uses both the forward and the adjoint solutions of the Boltzmann transport equation to generate a better-conditioned linear algebraic system. The procedure was first tested on numerical experiments, giving excellent results, and then verified with experimental measurements performed at the Operational Unit of Health Physics of the University of Bologna. The reconstructed spectra were compared with those obtained by straightforward measurements, showing very good agreement.
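
For context on the ill-conditioning mentioned above, the sketch below contrasts a naive inversion of a discretized scattering equation y = A x with a generic Tikhonov-regularized solution; this is a standard purely mathematical stabilization shown only for comparison, not the forward/adjoint transport-based conditioning developed in the thesis, and the response matrix is invented.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical smooth (and therefore ill-conditioned) response matrix A and true spectrum x.
n = 40
energies = np.linspace(0.0, 1.0, n)
A = np.exp(-((energies[:, None] - energies[None, :]) / 0.15) ** 2)
x_true = np.exp(-0.5 * ((energies - 0.4) / 0.1) ** 2)
y = A @ x_true + 1e-3 * rng.normal(size=n)   # "measured" scattered spectrum plus noise

# Naive inversion amplifies the noise; Tikhonov damping keeps the solution bounded.
x_naive = np.linalg.solve(A, y)
lam = 1e-2
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print("max |naive - true|       :", float(np.max(np.abs(x_naive - x_true))))
print("max |regularized - true| :", float(np.max(np.abs(x_reg - x_true))))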

Relevance:

30.00%

Publisher:

Abstract:

Groundwater is one of the most important resources in the world, and it is essential to prevent its pollution and to consider remediation in case of contamination. According to the scientific community, the characterization and management of contaminated sites have to be performed in terms of contaminant fluxes, considering their spatial and temporal evolution. One of the most suitable approaches for determining the spatial distribution of pollutants and quantifying contaminant fluxes in groundwater is the use of control panels. The determination of contaminant mass flux requires measurement of the contaminant concentration in the moving phase (water) and of the velocity/flux of the groundwater. In this Master's thesis a new solute mass flux measurement approach is proposed, based on an integrated control-panel methodology combined with the Finite Volume Point Dilution Method (FVPDM) for the monitoring of transient groundwater fluxes. Moreover, a new adsorption passive sampler, which allows the variation of solute concentration with time to be captured, is designed. The present work contributes to the development of this approach on three key points. First, the ability of the FVPDM to monitor transient groundwater fluxes was verified during a step-drawdown test at the experimental site of Hermalle-sous-Argenteau (Belgium). The results showed that the method can be used, with very good results, to follow transient groundwater fluxes; moreover, performing the FVPDM in several piezometers during a pumping test makes it possible to determine the different flow rates and flow regimes that can occur in the various parts of an aquifer. The second field test, aimed at determining the representativity of a control panel for measuring mass fluxes in groundwater, underlined that incorrect evaluations of Darcy fluxes and discharge surfaces can lead to an incorrect estimation of mass fluxes and that the technique must therefore be used with caution: a detailed geological and hydrogeological characterization must be conducted before applying it. Finally, the third outcome of this work concerns laboratory experiments. The tests conducted on several types of adsorption materials (Oasis HLB cartridge, TDS-ORGANOSORB 10 and TDS-ORGANOSORB 10-AA), carried out to determine the optimum medium for dimensioning the passive sampler, highlighted the need for a material with reversible adsorption behaviour to fully satisfy the requirements of the new passive sampling technique.
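
As a minimal numerical illustration of the control-panel idea referred to above, the sketch below sums concentration times Darcy flux over the cells of a monitoring panel to obtain a mass discharge; the cell values are hypothetical, and the procedure described in the thesis additionally relies on FVPDM-derived fluxes and passive-sampler concentrations.

import numpy as np

# Hypothetical control panel discretized into cells (rows x columns).
concentration_g_m3 = np.array([[0.8, 1.2, 0.9],
                               [0.5, 1.0, 0.7]])   # solute concentration per cell, g/m^3
darcy_flux_m_d = np.array([[0.04, 0.06, 0.05],
                           [0.03, 0.05, 0.04]])    # groundwater Darcy flux per cell, m/d
cell_area_m2 = 2.0                                 # area of each panel cell, m^2

# Mass discharge across the panel: sum over cells of C_i * q_i * A_i.
mass_discharge_g_d = float(np.sum(concentration_g_m3 * darcy_flux_m_d * cell_area_m2))
print(f"mass discharge through the panel ~ {mass_discharge_g_d:.3f} g/day")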

Relevance:

30.00%

Publisher:

Abstract:

Brain functions such as learning, orchestrating locomotion, memory recall, and processing information all require glucose as a source of energy. During these functions the glucose concentration decreases as glucose is consumed by brain cells. By measuring this drop in concentration, it is possible to determine which parts of the brain are used during specific functions and, consequently, how much energy the brain requires to complete the function. One way to measure in vivo brain glucose levels is with a microdialysis probe. The drawback of this analytical procedure, as with many steady-state fluid-flow systems, is that the probe fluid does not reach equilibrium with the brain fluid. Therefore, the brain concentration is inferred by taking samples at multiple inlet glucose concentrations and finding a point of convergence. The goal of this thesis is to create a three-dimensional, time-dependent, finite element representation of the brain-probe system in COMSOL 4.2 that describes the diffusion and convection of glucose. Once validated against experimental results, this model can be used to test parameters that experiments cannot access. When simulations were run using published values for the physical constants (i.e. diffusivities, density and viscosity), the resulting model glucose concentrations were within the error of the experimental data, verifying that the model is an accurate representation of the physical system. In addition to accurately describing the experimental brain-probe system, the model is able to show the validity of zero-net-flux for a given experiment. A useful discovery is that the slope of the zero-net-flux line depends on the perfusate flow rate and the diffusion coefficients, but is independent of the brain glucose concentration. The model was simplified by the realization that the perfusate is at thermal equilibrium with the brain throughout the active region of the probe, which allowed the assumption that all model parameters are temperature independent. The time to steady state for the probe is approximately one minute. However, the signal degrades in the exit tubing due to Taylor dispersion, on the order of two minutes for two meters of tubing. Given an analytical instrument requiring a 5 μL aliquot, the smallest brain process measurable with this system is 13 minutes.
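
The point-of-convergence analysis mentioned above (commonly called zero-net-flux) can be sketched as a simple linear regression: the dialysate gain or loss (C_out − C_in) is plotted against the perfusate concentration C_in, and the x-intercept estimates the external (brain) concentration. The numbers below are hypothetical, and the sketch is not the COMSOL model itself.

import numpy as np

# Hypothetical microdialysis calibration data (mM).
c_in = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # perfusate (inlet) glucose
c_out = np.array([0.35, 0.65, 0.98, 1.32, 1.66])  # dialysate (outlet) glucose

# Net gain/loss across the probe membrane.
net_flux = c_out - c_in

# Linear fit net_flux = slope * c_in + intercept; zero net flux at c_in = -intercept/slope.
slope, intercept = np.polyfit(c_in, net_flux, 1)
c_brain_estimate = -intercept / slope
print(f"zero-net-flux estimate of brain glucose ~ {c_brain_estimate:.2f} mM")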

Relevance:

30.00%

Publisher:

Abstract:

This article presents a new response-time measure of evaluations, the Evaluative Movement Assessment (EMA). Two properties are verified for the first time in a response-time measure: (a) mapping of multiple attitude objects onto a single scale, and (b) centering of that scale around a neutral point. Property (a) has implications for cases in which self-report and response-time measures of attitudes correlate weakly. A study using EMA as an indirect measure revealed a low correlation with self-reported attitudes when the correlation reflected between-subjects differences in preference for one attitude object over a second. Previously, this result might have been interpreted as dissociation between the two measures. However, when correlations from the same data reflected within-subject preference rank orders across multiple attitude objects, they were substantial (average r = .64). This result suggests that the low correlations between self-report and response-time measures found in previous studies may reflect methodological aspects of the response-time measurement techniques. Property (b) has implications for exploring theoretical questions that require assessing whether an evaluation is positive or negative (e.g., prejudice), because it allows such classifications to be made in response-time measurement for the first time.
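
The distinction drawn above between between-subjects and within-subject correlations can be made concrete with the sketch below, which uses invented scores for a set of participants and attitude objects; it illustrates the two ways of computing the correlation, not the EMA procedure itself.

import numpy as np

rng = np.random.default_rng(3)

# Invented data: rows = participants, columns = attitude objects.
n_subj, n_obj = 30, 6
latent = rng.normal(size=(n_subj, n_obj))                      # "true" evaluations
self_report = latent + 0.5 * rng.normal(size=(n_subj, n_obj))
response_time = latent + 0.5 * rng.normal(size=(n_subj, n_obj))

# Between-subjects correlation on a single preference difference (object 0 vs object 1).
between = np.corrcoef(self_report[:, 0] - self_report[:, 1],
                      response_time[:, 0] - response_time[:, 1])[0, 1]

# Average within-subject correlation across the attitude-object profiles.
within = np.mean([np.corrcoef(self_report[i], response_time[i])[0, 1]
                  for i in range(n_subj)])

print(f"between-subjects r = {between:.2f}, mean within-subject r = {within:.2f}")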

Relevance:

30.00%

Publisher:

Abstract:

Accurate measurement of abdominal aortic aneurysms is necessary to predict rupture risk and, more recently, to follow aneurysm sac behavior after endovascular repair. Up to this point, aneurysm diameter has been the measurement most commonly used for these purposes. Although aneurysm diameter is predictive of rupture, its accurate measurement is hindered by factors such as aortic tortuosity and interobserver variability, and it does not account for variations in morphology such as saccular aneurysms. Additionally, decreases in aneurysm diameter do not completely describe the somewhat complex remodeling seen following endovascular repair of aortic aneurysms. Measurement of aneurysm volume has the advantage of describing aneurysm morphology in a multidimensional fashion, but it has not been readily available or easily obtained until recently. This has changed with the introduction of commercially available software tools that make volume measurements quicker and easier to perform. Whether it is time for volume to replace, or complement, diameter is the subject of the current debate.

Relevance:

30.00%

Publisher:

Abstract:

The AEgIS (Antimatter Experiment: Gravity, Interferometry, Spectroscopy) experiment is conducted by an international collaboration based at CERN, whose aim is to perform the first direct measurement of the gravitational acceleration of antihydrogen in the local field of the Earth, with Δg/g = 1% precision as a first goal. The idea is to produce cold (100 mK) antihydrogen (H̄) through a pulsed charge-exchange reaction, by overlapping a cloud of antiprotons from the Antiproton Decelerator (AD) with positronium atoms inside a Penning trap. The antihydrogen has to be produced in an excited Rydberg state so that it can subsequently be accelerated to form a beam. The deflection of the antihydrogen beam can then be measured using a moiré deflectometer coupled to a position-sensitive detector, which registers the impact point of the anti-atoms through vertex reconstruction of their annihilation products. After being approved in late 2008, AEgIS started taking data in a commissioning phase in 2012. This paper presents an outline of the experiment, with a brief overview of its physics motivation and of the state of the art of g measurements on antimatter. Particular attention is given to the current status of the emulsion-based position detector needed to measure the H̄ sag in AEgIS.
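
For a sense of the magnitude involved in the measurement described above, the sketch below evaluates the simple free-fall sag Δy = ½gt² of a horizontally launched atom over its time of flight (assuming the antimatter acceleration equals g); the beam velocity and flight path are hypothetical round numbers, not AEgIS parameters, and the moiré-deflectometer analysis itself is not reproduced here.

G = 9.81  # m/s^2, local gravitational acceleration (assumed equal for antihydrogen)

def free_fall_sag_m(flight_path_m: float, beam_velocity_m_s: float) -> float:
    """Vertical drop of a horizontally moving particle after traversing the flight path."""
    time_of_flight = flight_path_m / beam_velocity_m_s
    return 0.5 * G * time_of_flight ** 2

if __name__ == "__main__":
    # Hypothetical values: 1 m flight path, 500 m/s beam velocity.
    sag = free_fall_sag_m(1.0, 500.0)
    print(f"expected sag ~ {sag * 1e6:.0f} micrometres")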

Relevance:

30.00%

Publisher:

Abstract:

We present experimental results on inclusive spectra and mean multiplicities of negatively charged pions produced in inelastic p+p interactions at incident projectile momenta of 20, 31, 40, 80 and 158 GeV/c (√s = 6.3, 7.7, 8.8, 12.3 and 17.3 GeV, respectively). The measurements were performed using the large-acceptance NA61/SHINE hadron spectrometer at the CERN Super Proton Synchrotron. Two-dimensional spectra are determined in terms of rapidity and transverse momentum. Their properties, such as the width of the rapidity distributions and the inverse slope parameter of the transverse mass spectra, are extracted, and their dependence on collision energy is presented. The results on inelastic p+p interactions are compared with the corresponding data on central Pb+Pb collisions measured by the NA49 experiment at the CERN SPS. The results presented in this paper are part of the NA61/SHINE ion program devoted to the study of the properties of the onset of deconfinement and the search for the critical point of strongly interacting matter; they are required for the interpretation of results on nucleus–nucleus and proton–nucleus collisions.
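
The two kinematic quantities named above, rapidity and transverse mass, and the extraction of an inverse slope parameter can be illustrated with the short sketch below on synthetic tracks; the pion mass is the known value, while the spectrum shape, slope and momenta are invented for illustration and are not the NA61/SHINE analysis.

import numpy as np

rng = np.random.default_rng(4)
M_PION = 0.13957  # GeV, charged-pion mass

# Synthetic transverse masses drawn from an exponential spectrum dN/dm_T ~ exp(-m_T / T).
T_TRUE = 0.160                                        # hypothetical inverse slope, GeV
m_t = M_PION + rng.exponential(T_TRUE, size=20000)    # transverse mass above threshold
p_t = np.sqrt(m_t**2 - M_PION**2)

# Rapidity of one example track with longitudinal momentum p_z (GeV/c).
p_z = 2.0
energy = np.sqrt(M_PION**2 + p_t[0]**2 + p_z**2)
rapidity = 0.5 * np.log((energy + p_z) / (energy - p_z))

# Inverse slope from a log-linear fit of the histogrammed m_T spectrum.
counts, edges = np.histogram(m_t, bins=40, range=(M_PION, M_PION + 1.0))
centers = 0.5 * (edges[:-1] + edges[1:])
good = counts > 0
slope, _ = np.polyfit(centers[good], np.log(counts[good]), 1)
print(f"example rapidity = {rapidity:.2f}, fitted inverse slope T ~ {-1.0 / slope * 1000:.0f} MeV")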