966 results for measurement systems


Relevance:

30.00%

Publisher:

Abstract:

Quality control of medical radiological systems is of fundamental importance and requires efficient methods for accurately determining the X-ray source spectrum. Straightforward measurement of X-ray spectra under standard operating conditions requires limiting the high photon flux, so the measurement has to be performed in a laboratory. Optimal quality control, however, requires frequent in situ measurements, which can only be performed with a portable system. To reduce the photon flux by three orders of magnitude, an indirect technique based on scattering the X-ray source beam off a solid target is used. The measured spectrum loses information through transport and detection effects. The source spectrum is then unfolded by solving the matrix equation that formally represents the scattering problem. However, the algebraic system is ill-conditioned, so a satisfactory solution cannot be obtained directly, and special strategies are necessary to circumvent the ill-conditioning. Numerous attempts have been made to solve this problem with purely mathematical methods. In this thesis, a more physical point of view is adopted: the proposed method uses both the forward and the adjoint solutions of the Boltzmann transport equation to generate a better-conditioned linear algebraic system. The procedure was first tested on numerical experiments, giving excellent results, and then verified with experimental measurements performed at the Operational Unit of Health Physics of the University of Bologna. The reconstructed spectra were compared with those obtained from straightforward measurements, showing very good agreement.
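The unfolding step described above can be illustrated with a toy example. The sketch below is not the author's forward/adjoint method; it uses a hypothetical Gaussian response matrix and generic Tikhonov regularization simply to show why naive inversion of an ill-conditioned system fails and why some conditioning strategy is needed:

```python
import numpy as np

# Hypothetical illustration: a spectrum "folded" by a smoothing response
# matrix R (R @ true = measured) is hard to invert when R is ill-conditioned.
n = 50
x = np.linspace(0.0, 1.0, n)
true_spectrum = np.exp(-((x - 0.4) ** 2) / 0.01)

# Detector-response-like Gaussian kernel; row-normalized so each row sums to 1.
R = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.002)
R /= R.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
measured = R @ true_spectrum + 1e-4 * rng.standard_normal(n)

# Direct inversion amplifies the measurement noise when cond(R) is large.
naive = np.linalg.solve(R, measured)

# Tikhonov regularization: minimize ||R s - m||^2 + lam * ||s||^2.
lam = 1e-3
regularized = np.linalg.solve(R.T @ R + lam * np.eye(n), R.T @ measured)

print(f"cond(R) = {np.linalg.cond(R):.2e}")
print("naive error      :", np.linalg.norm(naive - true_spectrum))
print("regularized error:", np.linalg.norm(regularized - true_spectrum))
```

The same noise that barely perturbs the measured vector is amplified by the inverse of the response matrix, which is what forces the special strategies discussed in the thesis.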

Relevance:

30.00%

Publisher:

Abstract:

Fluorescence correlation spectroscopy (FCS) is a powerful technique for determining the diffusion of fluorescent molecules in various environments. It is based on detecting and analyzing the fluctuations of the fluorescence emitted by fluorescent species diffusing through a small, fixed observation volume formed by a laser focused into the sample. Because of its great potential and versatility in addressing diffusion and transport in complex systems, FCS has been successfully applied to a wide variety of systems. In my thesis, I focused on applying FCS to study the diffusion of fluorescent molecules in organic environments, especially polymer melts. To validate our FCS setup and the measurement protocol we developed, I first used FCS to measure tracer diffusion in polystyrene (PS) solutions, for which abundant literature data exist. I studied molecular and polymeric tracer diffusion in PS solutions over a broad range of concentrations and for different tracer and matrix molecular weights (Mw). FCS was then further established for studying tracer dynamics in polymer melts. In this part I investigated the diffusion of molecular tracers in linear flexible polymer melts [polydimethylsiloxane (PDMS), polyisoprene (PI)], a miscible polymer blend [PI and poly(vinyl ethylene) (PVE)], and a star-shaped polymer [3-arm star polyisoprene (SPI)]. The effects of tracer size, polymer Mw, polymer type, and temperature on the diffusion coefficients of small tracers are discussed. The distinct topology of the host polymer, i.e. the star polymer melt, revealed notably different motion of the small tracer compared to its linear counterpart. Finally, I emphasized the advantage of the small observation volume, which allowed FCS to investigate tracer diffusion in heterogeneous systems requiring high spatial resolution: a swollen cross-linked PS bead and silica inverse opals.
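The autocorrelation analysis at the heart of FCS can be sketched as follows. Assuming the standard single-species 3D-diffusion model (the parameter values and beam waist below are illustrative, not taken from the thesis), the diffusion time is extracted by a least-squares fit and converted to a diffusion coefficient:

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard 3D-diffusion FCS autocorrelation model (single species).
# n_mol: mean number of molecules in the focal volume, tau_d: diffusion time,
# s: structure parameter (axial/lateral extent of the observation volume).
def g_3d(tau, n_mol, tau_d, s):
    return (1.0 / n_mol) / ((1.0 + tau / tau_d) * np.sqrt(1.0 + tau / (s**2 * tau_d)))

# Synthetic "measured" curve with a known diffusion time (illustrative values).
tau = np.logspace(-6, 0, 200)          # lag times in seconds
g_meas = g_3d(tau, 5.0, 1e-3, 5.0)
g_meas += 1e-4 * np.random.default_rng(1).standard_normal(tau.size)

popt, _ = curve_fit(g_3d, tau, g_meas, p0=[1.0, 1e-4, 4.0],
                    bounds=(0.0, np.inf))
n_fit, tau_d_fit, s_fit = popt

# D follows from the (calibrated) lateral beam waist w: D = w^2 / (4 tau_d).
w = 0.25e-6  # m, assumed calibration value
print(f"tau_d = {tau_d_fit:.3e} s, D = {w**2 / (4 * tau_d_fit):.3e} m^2/s")
```

In practice the beam waist `w` and the structure parameter are fixed by calibrating on a dye of known diffusion coefficient, which is the role played by the well-characterized PS solutions in the thesis.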

Relevance:

30.00%

Publisher:

Abstract:

In the framework of micro-CHP (Combined Heat and Power) energy systems and the Distributed Generation (DG) concept, an Integrated Energy System (IES) was conceived and built to meet the electrical and thermal requirements of specific users. It uses different types of fuel to feed several micro-CHP energy sources, integrated with electric generators based on renewable energy sources (RES), electrical and thermal storage systems, and a control system. A 5 kWel Polymer Electrolyte Membrane Fuel Cell (PEMFC) was studied. Using experimental data obtained from various measurement campaigns, the electrical and CHP performance of the PEMFC system was determined. The effect of the water management of the anodic exhaust at variable FC loads was analyzed, and the purge-process programming logic was optimized, which also led to the determination of the optimal flooding times as a function of the AC power delivered by the cell. Furthermore, the degradation mechanisms of the PEMFC system, in particular those due to flooding of the anodic side, were assessed using an algorithm that treats the FC as a black box and determines the amount of unreacted H2 and, therefore, its causes. Using experimental data spanning two years, the ageing of the FC system was tested and analyzed.

Relevance:

30.00%

Publisher:

Abstract:

The growing international concern over human exposure to magnetic fields generated by electric power lines has unavoidably led to the imposition of legal limits. Respecting these limits implies being able to calculate the generated magnetic field easily and accurately, also in complex configurations; twisting of the phase conductors is such a case. The consolidated exact and approximate theory for a single-circuit twisted three-phase power cable line is reported, along with the proposal of an innovative simplified formula obtained by means of a heuristic procedure. This formula, although dramatically simpler, is shown to be a good approximation of the analytical formula and, at the same time, much more accurate than the approximate formula found in the literature. The double-circuit twisted three-phase power cable line is studied following different approaches of increasing complexity and accuracy, and in this framework the effectiveness of the above-mentioned innovative formula is also examined. Experimental verification of the twisted double-circuit theoretical analysis has permitted its extension to multiple-circuit twisted three-phase power cable lines. In addition, appropriate 2D and, in particular, 3D numerical codes were created for simulating real overhead power lines and calculating the magnetic field in their vicinity. Finally, an innovative 'smart' measurement and evaluation system for the magnetic field is proposed, described, and validated. It provides an experimentally based evaluation of the total magnetic field B generated by multiple sources in complex three-dimensional arrangements, carried out from measurements of the three Cartesian field components and their correlation with the field currents via multilinear regression techniques. The ultimate goal is to verify that the magnetic induction intensity is within the prescribed limits.
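The superposition underlying such field calculations can be sketched for the simplest untwisted case. Assuming three infinite straight conductors carrying balanced three-phase currents (the geometry and current magnitude below are illustrative, not from the thesis), the RMS flux density at a point follows from the phasor sum of the individual conductor fields:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

# Illustrative flat line: three conductors at 10 m height, 1 m spacing,
# carrying 100 A rms with 120-degree phase shifts (complex phasors).
positions = [(-1.0, 10.0), (0.0, 10.0), (1.0, 10.0)]   # (x, y) in metres
currents = [100.0 * np.exp(1j * 2 * np.pi * k / 3) for k in range(3)]

def b_rms(px, py):
    """RMS magnetic flux density at point (px, py), in tesla."""
    bx = by = 0.0 + 0.0j
    for (cx, cy), i_ph in zip(positions, currents):
        dx, dy = px - cx, py - cy
        r2 = dx * dx + dy * dy
        # Field of an infinite straight conductor: B = mu0 I / (2 pi r), azimuthal.
        bx += MU0 * i_ph / (2 * np.pi) * (-dy / r2)
        by += MU0 * i_ph / (2 * np.pi) * (dx / r2)
    return np.sqrt(abs(bx) ** 2 + abs(by) ** 2)

# Field at ground level below the middle conductor:
print(f"B = {b_rms(0.0, 0.0) * 1e6:.3f} microtesla")
```

Twisting replaces the fixed conductor positions with positions that rotate along the line axis, which is exactly what makes the closed-form evaluation hard and motivates the simplified formula proposed in the thesis.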

Relevance:

30.00%

Publisher:

Abstract:

One of the most precisely measured quantities in particle physics is the magnetic moment of the muon, which describes its coupling to an external magnetic field. It is expressed in the form of the anomalous magnetic moment of the muon, a_mu = (g_mu - 2)/2, and has been determined experimentally with a precision of 0.5 parts per million. The current direct measurement and the theoretical prediction of the Standard Model differ by more than 3.5 standard deviations. On the theory side, the QED and weak-interaction contributions to a_mu can be calculated with very high precision in a perturbative approach. At low energies, however, perturbation theory cannot be used to determine the hadronic contribution a_mu^had. Instead, a_mu^had can be derived via a dispersion relation from the sum of the measured cross sections of exclusive hadronic reactions. Decreasing the experimental uncertainty on these hadronic cross sections is of utmost importance for an improved Standard Model prediction of a_mu.

In addition to traditional energy-scan experiments, the method of Initial State Radiation (ISR) is used to measure hadronic cross sections. This approach allows experiments at colliders running at a fixed centre-of-mass energy to access smaller effective energies by studying events that contain a high-energy photon emitted from the initial electron or positron. Using the ISR technique, the energy range from threshold up to 4.5 GeV can be accessed at BaBar.

The cross section e+e- -> pi+pi- contributes approximately 70% of the hadronic part of the anomalous magnetic moment of the muon, a_mu^had. This important channel has been measured with a precision of better than 1%. The leading contribution to the uncertainty of a_mu^had therefore currently stems from the invariant-mass region between 1 GeV and 2 GeV. In this energy range, the channels e+e- -> pi+pi-pi+pi- and e+e- -> pi+pi-pi0pi0 dominate the inclusive hadronic cross section.

The measurement of the process e+e- -> pi+pi-pi+pi- is presented in this thesis. This channel had previously been measured by BaBar based on 25% of the total dataset. The new analysis includes a more detailed study of the background contamination from other ISR and non-radiative reactions. In addition, detailed studies of the track reconstruction as well as of the photon-efficiency difference between the data and the simulation of the BaBar detector are performed. With these auxiliary studies, a reduction of the systematic uncertainty from 5.0% to 2.4% in the peak region was achieved.

The pi+pi-pi+pi- final state has a rich internal structure. Hints are seen for the intermediate states rho(770)^0 f_2(1270), rho(770)^0 f_0(980), as well as a_1(1260)pi. In addition, the branching ratios BR(J/psi -> pi+pi-pi+pi-) and BR(psi(2S) -> J/psi pi+pi-) are extracted.

Relevance:

30.00%

Publisher:

Abstract:

In this thesis the evolution of methods for analyzing techno-social systems is reported through the research experiences directly undertaken. The first case presented is a study based on data mining of a word-association dataset named Human Brain Cloud: its validation is addressed and, also through non-trivial modeling, a better understanding of language properties is presented. A real complex-systems experiment is then introduced: the WideNoise experiment, in the context of the EveryAware European project. The project and the course of the experiment are illustrated and the data analysis is presented. Next, the Experimental Tribe platform for social computation is introduced. It was conceived to help researchers implement web experiments, and also aims to catalyze the cumulative growth of experimental methodologies and the standardization of the tools cited above. In the last part, three further research experiences carried out on the Experimental Tribe platform are discussed in detail, from the design of each experiment to the analysis of the results and, eventually, the modeling of the systems involved. The experiments are: CityRace, on measuring human strategies for coping with traffic; laPENSOcosì, aiming to unveil the structure of political opinion; and AirProbe, again within the EveryAware project framework, which monitored the shift in air-quality opinions of a community informed about local air pollution. In the end, the evolution of the methods for investigating techno-social systems emerges, together with the opportunities and threats offered by this new scientific path.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this work is to provide a precise and accurate measurement of the 238U(n,gamma) reaction cross-section. This reaction is of fundamental importance for the design calculations of nuclear reactors, since it governs the behaviour of the reactor core. In particular, fast-neutron reactors, which are attracting growing interest for their ability to burn radioactive waste, operate in the high-energy region of the neutron spectrum. In this energy region, inconsistencies of up to 15% are present between existing measurements, and the most recent evaluations disagree with each other. In addition, the assessment of nuclear-data uncertainties performed for innovative reactor systems shows that the uncertainty in the radiative capture cross-section of 238U should be reduced further, to 1-3%, in the energy region from 20 eV to 25 keV. To this purpose, identified by the Nuclear Energy Agency as a priority nuclear-data need, complementary experiments, one at GELINA and two at the n_TOF facility, were scheduled within the ANDES project of the 7th Framework Programme of the European Commission. The results of one of the 238U(n,gamma) measurements performed at the n_TOF facility at CERN are presented in this work; it was carried out with a detection system consisting of two liquid scintillators. The very accurate cross section from this work is compared with the results of the other measurement performed at n_TOF, which exploited a different and complementary detection technique. The excellent agreement between the two datasets shows that they can contribute to reducing the cross-section uncertainty down to the required 1-3%.
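At a time-of-flight facility such as n_TOF, the neutron energy follows from the flight path and the arrival time. A minimal sketch of that kinematic reconstruction, using the nominal ~185 m flight path purely as an illustrative number:

```python
import math

# Relativistic time-of-flight energy reconstruction: the neutron kinetic
# energy follows from the flight path L and the measured flight time t.
M_N_C2 = 939.565e6   # neutron rest energy, eV
C = 299_792_458.0    # speed of light, m/s

def neutron_energy_ev(flight_path_m, tof_s):
    """Kinetic energy (eV) of a neutron covering flight_path_m in tof_s."""
    beta = flight_path_m / (tof_s * C)
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return M_N_C2 * (gamma - 1.0)

# A neutron arriving after 1 ms over 185 m lands in the ~180 eV region:
print(f"E = {neutron_energy_ev(185.0, 1.0e-3):.1f} eV")
```

Because the energy is a steep function of arrival time, the time resolution of the scintillator system directly sets the energy resolution of the capture yield.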

Relevance:

30.00%

Publisher:

Abstract:

Over the past three decades, collinear laser spectroscopy has proven exceptionally well suited for determining the nuclear charge radii of medium-mass and heavy short-lived nuclei. Only recently, however, could it be extended to the isotopes of very light elements. This region of the nuclide chart is of particular interest because the first ab initio models of nuclear physics, which describe the structure of a nucleus in terms of individual nucleons and realistic interaction potentials, are currently applicable only to the lightest elements. Moreover, a particularly exotic form of nuclei exists in this region: the so-called halo nuclei. The beryllium isotope chain is distinguished by the occurrence of the one-neutron halo nucleus 11Be and the two- or four-neutron halo 14Be. The isotope 12Be is of special importance owing to its position between these two exotic nuclei and the magic shell closure N = 8 expected from the shell model.

Within this work, several frequency-stabilized laser systems for collinear laser spectroscopy were set up. At TRIGA-SPEC, a frequency-doubled diode-laser system with a tapered amplifier and a frequency-comb-stabilized titanium-sapphire laser with a frequency-doubling stage are now available, among others, for spectroscopy of refractory elements above molybdenum; they were employed in first test experiments. In addition, efficient frequency quadrupling of a titanium-sapphire laser was demonstrated. At ISOLDE/CERN, a frequency-comb-stabilized and an iodine-stabilized dye laser were installed and used for laser spectroscopy of 9,10,11,12Be.

Thanks to the improved laser system and the use of delayed coincidence detection of photons and ions, the sensitivity of the beryllium spectroscopy was increased by more than two orders of magnitude, allowing the earlier measurements on 7-11Be to be extended for the first time to the isotope 12Be. Moreover, the accuracy of the absolute transition frequencies and of the isotope shifts of the isotopes 9,10,11Be was improved significantly. Comparison with results of the Fermionic Molecular Dynamics model shows that the trend of the charge radii of the lighter isotopes can be explained by the pronounced cluster structure of the beryllium nuclei. For 12Be it becomes evident that the ground state is dominated by an (sd)^2 configuration instead of the p^2 configuration expected from the shell model. This is a clear indication of the previously observed disappearance of the N = 8 shell closure at 12Be.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Worldwide, diarrheal diseases rank second among conditions that afflict children. Despite this disease burden, there is limited consensus on how to define and measure pediatric acute diarrhea in trials. OBJECTIVES: In RCTs of children involving acute diarrhea as the primary outcome, we documented (1) how acute diarrhea and its resolution were defined, (2) all primary outcomes, (3) the psychometric properties of the instruments used to measure acute diarrhea, and (4) the methodologic quality of the included trials, as reported. METHODS: We searched CENTRAL, Embase, Global Health, and Medline from inception to February 2009. English-language RCTs of children younger than 19 years that measured acute diarrhea as a primary outcome were included. RESULTS: We identified 138 RCTs reporting on 1 or more primary outcomes related to pediatric acute diarrheal disease. The included trials used 64 unique definitions of diarrhea, 69 unique definitions of diarrhea resolution, and 46 unique primary outcomes. The majority of included trials evaluated short-term clinical disease activity (incidence and duration of diarrhea), laboratory outcomes, or a composite of these end points. Thirty-two trials used instruments (eg, single- and multidomain scoring systems) to support the assessment of disease activity. Of these, 3 trials stated that their instrument was valid; however, none of the trials (or their citations) reported evidence of this validity. The overall methodologic quality of the included trials was good. CONCLUSIONS: Even in what would be considered methodologically sound clinical trials, the definitions of diarrhea, primary outcomes, and instruments employed in RCTs of pediatric acute diarrhea are heterogeneous, lack evidence of validity, and focus on indices that may not be important to participants.

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: Computer-based feedback systems for assessing the quality of cardiopulmonary resuscitation (CPR) are widely used today. Recordings usually involve compression- and ventilation-dependent variables. Chest compression depth, sufficient decompression, and correct hand position are displayed but interpreted independently of one another. We aimed to generate a parameter that combines all the relevant compression parameters to provide a rapid assessment of chest-compression quality: the effective compression ratio (ECR). METHODS: The following parameters were used to determine the ECR: compression depth, correct hand position, correct decompression, and the proportion of time spent on chest compressions relative to the total CPR time. Based on the ERC guidelines, we calculated that guideline-compliant CPR (30:2) has a minimum ECR of 0.79. To calculate the ECR, we extended the previously described software solution. To demonstrate the usefulness of the new ECR parameter, we first performed a PubMed search for studies that reported correct compressions and no-flow time, and then calculated the new parameter, the ECR, for each. RESULTS: The PubMed search revealed 9 trials. Calculated ECR values ranged between 0.03 (a basic life support [BLS] study, two helpers, no feedback) and 0.67 (BLS with feedback from the 6th minute). CONCLUSION: The ECR enables a rapid, meaningful assessment of CPR and simplifies comparison across studies as well as of the individual performance of trainees. The structure of the software solution allows it to be easily adapted to any manikin, to CPR feedback devices, and to different resuscitation guidelines (e.g. ILCOR, ERC).
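The abstract does not state the exact ECR formula, so the sketch below is only one plausible reading of it: the fraction of compressions that are simultaneously correct in depth, hand position, and recoil, multiplied by the hands-on-chest time fraction. The function name and data layout are hypothetical:

```python
# Hypothetical sketch of an effective-compression-ratio style metric; the
# study's actual formula is not given in the abstract.
def effective_compression_ratio(compressions, hands_on_time, total_time):
    """compressions: list of dicts with boolean quality flags per compression."""
    if not compressions or total_time <= 0:
        return 0.0
    correct = sum(
        1 for c in compressions
        if c["depth_ok"] and c["hand_position_ok"] and c["recoil_ok"]
    )
    return (correct / len(compressions)) * (hands_on_time / total_time)

# Example: 90% of compressions fully correct and 88% hands-on time
# (roughly the duty cycle of guideline-compliant 30:2 CPR) gives ~0.79,
# matching the guideline-compliant minimum quoted above.
comps = [{"depth_ok": True, "hand_position_ok": True, "recoil_ok": True}] * 90 \
      + [{"depth_ok": False, "hand_position_ok": True, "recoil_ok": True}] * 10
print(round(effective_compression_ratio(comps, hands_on_time=88.0, total_time=100.0), 2))
```

A single multiplicative score like this is what lets one number summarize both compression quality and no-flow time across otherwise incomparable studies.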

Relevance:

30.00%

Publisher:

Abstract:

Brain functions such as learning, orchestrating locomotion, memory recall, and processing information all require glucose as a source of energy. During these functions, the glucose concentration decreases as glucose is consumed by brain cells. By measuring this drop in concentration, it is possible to determine which parts of the brain are used during specific functions and, consequently, how much energy the brain requires to complete each function. One way to measure in vivo brain glucose levels is with a microdialysis probe. The drawback of this analytical procedure, as with many steady-state fluid-flow systems, is that the probe fluid will not reach equilibrium with the brain fluid. Therefore, the brain concentration is inferred by taking samples at multiple inlet glucose concentrations and finding a point of convergence. The goal of this thesis is to create a three-dimensional, time-dependent, finite element representation of the brain-probe system in COMSOL 4.2 that describes the diffusion and convection of glucose. Once validated against experimental results, this model can be used to test parameters that experiments cannot access. When simulations were run using published values for the physical constants (i.e. diffusivities, density, and viscosity), the resulting model glucose concentrations were within the error of the experimental data, verifying that the model is an accurate representation of the physical system. In addition to accurately describing the experimental brain-probe system, the model I created is able to show the validity of zero-net-flux for a given experiment. A useful discovery is that the slope of the zero-net-flux line depends on the perfusate flow rate and the diffusion coefficients, but is independent of the brain glucose concentration. The model was simplified with the realization that the perfusate is at thermal equilibrium with the brain throughout the active region of the probe, which allowed the assumption that all model parameters are temperature independent. The time to steady state for the probe is approximately one minute. However, the signal degrades in the exit tubing due to Taylor dispersion, on the order of two minutes for two meters of tubing. Given an analytical instrument requiring a 5 μL aliquot, the shortest brain process measurable with this system is 13 minutes.
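The point-of-convergence (zero-net-flux) inference described above can be sketched numerically. With synthetic data (the concentrations and recovery fraction below are invented for illustration), the brain concentration is the x-intercept of a linear fit of the net concentration gain against the inlet concentration:

```python
import numpy as np

# Zero-net-flux sketch: perfuse at several inlet concentrations c_in, measure
# outlet c_out; the net gain (c_out - c_in) falls linearly with c_in, and the
# zero crossing estimates the external (brain ECF) concentration.
rng = np.random.default_rng(2)
c_brain = 2.5            # mM, "true" value for this synthetic example
recovery = 0.4           # probe extraction fraction (sets the slope)

c_in = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])            # mM
c_out = c_in + recovery * (c_brain - c_in) + 0.01 * rng.standard_normal(c_in.size)

slope, intercept = np.polyfit(c_in, c_out - c_in, 1)
c_estimate = -intercept / slope   # x-intercept of the fitted line
print(f"estimated brain concentration: {c_estimate:.2f} mM")
```

The thesis's finding that the slope depends on flow rate and diffusivities but not on the brain concentration corresponds here to `recovery` being a property of the probe and perfusion alone, while the x-intercept tracks `c_brain`.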

Relevance:

30.00%

Publisher:

Abstract:

Studying liquid fuel combustion is necessary to better design combustion systems. Through more efficient combustors and alternative fuels, it is possible to reduce greenhouse gases and harmful emissions. In particular, coal-derived and Fischer-Tropsch liquid fuels are of interest because, in addition to producing fewer emissions, they have the potential to drastically reduce the United States' dependence on foreign oil. Major academic research institutions like the Pennsylvania State University perform cutting-edge research in many areas of combustion. The Combustion Research Laboratory (CRL) at Bucknell University is striving to develop the equipment necessary for both independent and collaborative research efforts with Penn State and, in the process, to advance the CRL to the forefront of combustion studies. The focus of this thesis is to advance the capabilities of the CRL. Specifically, this was accomplished through a revision of a previously designed liquid fuel injector and through the design and installation of a laser extinction system for measuring the soot produced during combustion. The previous liquid fuel injector, with a 0.005" hole, did not behave as expected. Spray testing of the 0.005" injector with water showed that experimental errors had been made in the original pressure testing of the injector. Using data from the spray-testing experiment, new theoretical hole sizes for the injector were calculated. New injectors with 0.007" and 0.0085" orifices were fabricated and subsequently tested to qualitatively validate their behavior. The injectors were installed in the combustion rig in the CRL and hot-fire tested with liquid heptane. The 0.0085" injector yielded a manageable fuel pressure and produced a broad flame. A laser extinction system was designed and installed in the CRL. This involved the fabrication of a number of custom-designed parts and the specification of laser extinction equipment for purchase. A standard operating procedure for the laser extinction system was developed to provide a consistent, safe method for measuring soot formation during combustion.
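The recalculation of theoretical hole sizes can be sketched with the standard sharp-edged-orifice relation; the discharge coefficient, fluid properties, and operating point below are assumptions for illustration, not the thesis's measured values:

```python
import math

# Sharp-edged orifice: Q = Cd * A * sqrt(2 * dP / rho). Invert for the
# diameter that passes a target flow at a target pressure drop.
def orifice_diameter(q_m3s, dp_pa, rho=680.0, cd=0.7):
    """Diameter (m) for flow q at pressure drop dp; rho ~ liquid heptane.

    Cd = 0.7 is a typical assumed discharge coefficient for a sharp-edged
    orifice, not a value from the thesis.
    """
    area = q_m3s / (cd * math.sqrt(2.0 * dp_pa / rho))
    return math.sqrt(4.0 * area / math.pi)

# Example: 1.0 mL/s of heptane across a ~2.07 MPa (300 psi) drop.
d = orifice_diameter(1.0e-6, 2.07e6)
print(f"required orifice diameter: {d * 1e3:.3f} mm (~{d / 25.4e-6:.1f} thou)")
```

Running the numbers this way, a few-thousandths-of-an-inch orifice at combustor-scale pressure drops passes roughly a millilitre per second, which is consistent in order of magnitude with the 0.005"-0.0085" holes discussed above.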

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: Two noninvasive methods to measure dental implant stability are damping capacity assessment (Periotest) and resonance frequency analysis (Osstell). The objective of the present study was to assess the correlation of these 2 techniques in clinical use. MATERIALS AND METHODS: The implant stability of 213 clinically stable, loaded and unloaded 1-stage implants in 65 patients was measured in triplicate by means of resonance frequency analysis and the Periotest. Descriptive statistics as well as Pearson's, Spearman's, and intraclass correlation coefficients were calculated with SPSS 11.0.2. RESULTS: The mean values were 57.66 +/- 8.19 (implant stability quotient, ISQ) for the resonance frequency analysis and -5.08 +/- 2.02 for the Periotest. The correlation between the two measuring techniques was -0.64 (Pearson) and -0.65 (Spearman). The single-measure intraclass correlation coefficients for the ISQ and Periotest values were 0.99 and 0.88, respectively (95% CI). No significant correlation of implant length with either resonance frequency analysis or the Periotest could be found. However, a significant correlation of implant diameter with both techniques was found (P < .005). The correlation between the two measuring systems is moderate to good. It seems that the Periotest is more susceptible to clinical measurement variables than the Osstell device. The intraclass correlation indicated lower measurement precision for the Periotest technique. Additionally, the Periotest values deviated more from a normal (Gaussian) distribution than the ISQ values. Both measurement techniques show a significant correlation with implant diameter. CONCLUSION: Resonance frequency analysis appeared to be the more precise technique.
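The Pearson and Spearman coefficients reported above can be reproduced on synthetic data shaped like the study's summary statistics (the values below are simulated, not the study's data) to show the expected negative, moderate-to-good association between ISQ and Periotest readings:

```python
import numpy as np
from scipy import stats

# Illustrative paired readings: ISQ values around 57.7 +/- 8.2 and Periotest
# values around -5.1, constructed to correlate negatively (simulated data).
rng = np.random.default_rng(3)
isq = rng.normal(57.7, 8.2, 200)
periotest = -5.1 - 0.16 * (isq - 57.7) + rng.normal(0.0, 1.5, 200)

r_pearson, _ = stats.pearsonr(isq, periotest)
r_spearman, _ = stats.spearmanr(isq, periotest)
print(f"Pearson  r   = {r_pearson:.2f}")
print(f"Spearman rho = {r_spearman:.2f}")
```

Pearson measures the linear association of the raw values while Spearman ranks them first, which is why the two coefficients track each other closely when the relationship is roughly monotone, as in the study.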

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: We determined and compared urethral pressure measurements obtained with air-charged and microtip catheters in a prospective, single-blind, randomized trial. MATERIALS AND METHODS: A consecutive series of 64 women referred for urodynamic investigation underwent sequential urethral pressure measurements with an air-charged and a microtip catheter in randomized order. Patients were blinded to the type and sequence of the catheters used. Agreement between the 2 catheter systems was assessed using the Bland and Altman 95% limits of agreement method. RESULTS: The intraclass correlation coefficients of the air-charged and microtip catheters were 0.97 and 0.93 for maximum urethral closure pressure at rest, and 0.9 and 0.78 for functional profile length, respectively. Pearson's correlation coefficients and Lin's concordance coefficients for air-charged versus microtip catheters were r = 0.82 and rho = 0.79 for maximum urethral closure pressure at rest, and r = 0.73 and rho = 0.7 for functional profile length, respectively. When the Bland and Altman method was applied, air-charged catheters gave higher readings than microtip catheters for maximum urethral closure pressure at rest (mean difference 7.5 cm H2O) and functional profile length (mean difference 1.8 mm). There were wide 95% limits of agreement for the differences in maximum urethral closure pressure at rest (-24.1 to 39 cm H2O) and functional profile length (-7.7 to 11.3 mm). CONCLUSIONS: For urethral pressure measurement the air-charged catheter is at least as reliable as the microtip catheter, and it generally gives higher readings. However, air-charged and microtip catheters cannot be used interchangeably for clinical purposes because of insufficient agreement. Hence, clinicians should be aware that air-charged and microtip catheters may yield completely different results, and these differences should be acknowledged during clinical decision making.
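The Bland and Altman 95% limits-of-agreement computation used in the study is simple to sketch. The paired readings below are simulated with roughly the study's bias direction, purely for illustration:

```python
import numpy as np

# Bland-Altman agreement sketch: the bias is the mean paired difference and
# the 95% limits of agreement are bias +/- 1.96 * SD of the differences.
def bland_altman(a, b):
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Synthetic paired maximum-urethral-closure-pressure readings (cm H2O) from
# two catheter types, with the first system reading higher on average.
rng = np.random.default_rng(4)
micro = rng.normal(55.0, 15.0, 64)
air = micro + 7.5 + rng.normal(0.0, 16.0, 64)

bias, (lo, hi) = bland_altman(air, micro)
print(f"bias = {bias:.1f} cm H2O, 95% LoA = [{lo:.1f}, {hi:.1f}]")
```

The width of the interval, not the bias alone, is what decides interchangeability: two methods can agree on average yet still disagree badly on individual patients, which is exactly the study's conclusion.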