909 results for measurement and metrology


Relevance: 100.00%

Abstract:

Clusters are aggregations of atoms or molecules, generally intermediate in size between individual atoms and aggregates large enough to be called bulk matter. Clusters can also be called nanoparticles, because their size is on the order of nanometers or tens of nanometers. A new field, called nanostructured materials, has begun to take shape around these atom clusters. The ultra-small size of the building blocks leads to dramatically different properties, and it is anticipated that such atomically engineered materials can be tailored to perform as no previous material could.

The ionized cluster beam (ICB) thin-film deposition technique was first proposed by Takagi in 1972. It was based on using a supersonic jet source to produce, ionize and accelerate beams of atomic clusters onto substrates in a vacuum environment. Conditions for the formation of cluster beams suitable for thin-film deposition have only recently been established, following twenty years of effort. Zinc clusters over 1,000 atoms in average size have been synthesized both in our lab and in that of Gspann. More recently, other methods of synthesizing clusters and nanoparticles, using different types of cluster sources, have come under development.

In this work, we studied different aspects of nanoparticle beams. The work includes refinement of a model of the cluster formation mechanism, development of a new real-time, in situ cluster size measurement method, and study of the use of ICB in the fabrication of semiconductor devices.

The formation process of the vaporized-metal cluster beam was simulated and investigated using classical nucleation theory and one-dimensional gas flow equations. Zinc cluster sizes predicted at the nozzle exit are in good quantitative agreement with experimental results in our laboratory.

A novel in situ real-time mass, energy and velocity measurement apparatus has been designed, built and tested. This small time-of-flight mass spectrometer is suitable for use in our cluster deposition systems and does not suffer from the problems of other cluster size measurement methods, such as the need for specialized ionizing lasers, inductive electrical or electromagnetic coupling, dependence on the assumption of homogeneous nucleation, limits on the measurable size, and the lack of real-time capability. Ion energies measured using the electrostatic energy analyzer are in good accordance with values obtained from computer simulation. The velocity v is measured by pulsing the cluster beam and measuring the delay between the pulse and the analyzer output current. The mass of a particle is then calculated from m = 2E/v². The error in the measured mass of the background gas is on the order of 28% of the mass of one N₂ molecule, which is negligible for the measurement of large clusters. This resolution in cluster size measurement is very acceptable for our purposes.

Selective area deposition onto conducting patterns overlying insulating substrates was demonstrated using intense, fully-ionized cluster beams. Parameters influencing the selectivity are ion energy, repelling voltage, the ratio of the conductor to insulator dimension, and substrate thickness.
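
The time-of-flight relation above is simple enough to show concretely. The Python sketch below computes a cluster mass from an analyzer energy reading and a pulse-to-detector delay via m = 2E/v²; the drift length, energy and delay values are hypothetical, chosen only to give a plausible zinc-cluster magnitude.

```python
# Minimal sketch of the cluster mass calculation described above: kinetic
# energy E comes from the electrostatic energy analyzer, velocity v from the
# time of flight between the beam pulse and the analyzer output current.
# Drift length and readings below are hypothetical.

AMU = 1.66053906660e-27   # kg per atomic mass unit
E_CHARGE = 1.602176634e-19  # J per eV

def cluster_mass(energy_ev: float, drift_length_m: float, tof_s: float) -> float:
    """Mass in kg from m = 2E / v^2, with v = L / t."""
    velocity = drift_length_m / tof_s
    return 2.0 * energy_ev * E_CHARGE / velocity**2

# Hypothetical reading: a 500 eV zinc cluster traversing 0.25 m in 40 us.
m_kg = cluster_mass(500.0, 0.25, 40e-6)
m_amu = m_kg / AMU
print(f"cluster mass ~ {m_amu:.0f} amu ~ {m_amu / 65.38:.0f} Zn atoms")
```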

Relevance: 100.00%

Abstract:

Although atypical social behaviour remains a key characterisation of ASD, the presence of sensory and perceptual abnormalities has been given a more central role in recent classification changes. An understanding of the origins of such aberrations could thus prove a fruitful focus for ASD research. Early neurocognitive models of ASD suggested that the study of high-frequency activity in the brain as a measure of cortical connectivity might provide the key to understanding the neural correlates of sensory and perceptual deviations in ASD. As our review shows, the findings from subsequent research have been inconsistent, with a lack of agreement about the nature of any high-frequency disturbances in ASD brains. Based on the application of new techniques using more sophisticated measures of brain synchronisation, direction of information flow, and invoking the coupling between high and low frequency bands, we propose a framework which could reconcile apparently conflicting findings in this area and would be consistent both with emerging neurocognitive models of autism and with the heterogeneity of the condition.

Relevance: 100.00%

Abstract:

This study conceptualised and measured children’s well-being in Ireland and considered how such conceptualisations and approaches to the measurement of well-being might inform social policy for children and families living in Ireland. This research explored what is meant by children’s well-being and how it can be conceptualised and measured so as to reflect the multi-dimensionality of the concept. The study developed an index of well-being that was both theoretically and methodologically robust and could be meaningfully used to inform social policy developments for children and their families. For the first time, an index of well-being for children was developed using an explicitly articulated unifying theory of children’s well-being. Moreover, for the first time an index of well-being was developed for 13-year-old children living in Ireland, using data from Wave 2 of the national longitudinal study of children. The Structural Model of Child Well-being (SMCW), the theoretical framework that underpins the development of this study’s index, offers a comprehensive understanding of well-being. The SMCW builds on, and integrates, a range of already-established theories concerning children’s development, their agency, rights and capabilities into a unifying theory that explains well-being in its entirety. This conceptualisation of well-being moves beyond the narrow focus on child development adopted in some recent studies of children’s well-being, a focus which perpetuates individualised and self-responsibilising conceptualisations of well-being. This study found that the SMCW can be meaningfully applied, both theoretically and operationally, to the construction of an index of well-being for children. While it was not the purpose of this study to validate the SMCW, in the process of developing the index, I concluded that there was a theoretical ‘fit’ between the conceptual orientation of the SMCW and the wider children’s well-being literature. The ‘nested’ structure of the SMCW facilitated the identification of domains, sub-domains and indicators of well-being, reflecting typical conventions of index construction. The findings from the resulting index, in both its categorical and continuous forms, demonstrated how a comprehensive theory of well-being can be used to illustrate how children are faring and which children are experiencing poorer or better well-being. Furthermore, this study demonstrated how the SMCW and the resultant index can be meaningfully used to support the implementation and review of the national policy framework for children and young people in Ireland.
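
As a purely illustrative aside: the "typical conventions of index construction" mentioned above usually amount to normalising indicators and aggregating them upward through sub-domains and domains. The Python sketch below shows that generic pattern with invented domain names and scores; it is not the study's actual SMCW operationalisation.

```python
from statistics import mean

# Hypothetical indicator scores (already normalised to 0-1) for one child.
child = {
    "physical":   [0.8, 0.9],        # e.g. health and activity indicators
    "relational": [0.7, 0.6, 0.8],   # e.g. family and peer indicators
    "material":   [0.5, 0.4],        # e.g. deprivation indicators
}

# Domain scores as simple means of their indicators; domains equally weighted.
domain_scores = {d: mean(vals) for d, vals in child.items()}
overall_index = mean(domain_scores.values())
print(domain_scores, round(overall_index, 2))
```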

Relevance: 100.00%

Abstract:

Most liquid electrolytes used in commercial lithium-ion batteries are composed of alkyl-carbonate mixtures containing a lithium salt. The decomposition of these solvents by oxidation or reduction during cycling of the cell induces the generation of gases (CO2, CH4, C2H4, CO …), increasing the pressure in the sealed cell, which causes a safety problem [1]. A prior understanding of the parameters that influence gas solubility and the vapor pressure of electrolytes, such as the structure and nature of the salt, temperature, pressure, concentration, salting effects and solvation parameters, is required to formulate safer and more suitable electrolytes, especially at high temperature.

In this work we present the CO2, CH4, C2H4 and CO solubility in different pure alkyl-carbonate solvents (PC, DMC, EMC, DEC) and their binary or ternary mixtures, as well as the effect of temperature and of the structure and concentration of the lithium salt (LiPF6, LiTFSI or LiFAP) on these properties. Furthermore, in order to understand the parameters that influence the choice of solvent structure and the solvents' ability to dissolve gas upon addition of a salt, we first analyzed experimentally the transport properties (self-diffusion coefficient D, fluidity η⁻¹, conductivity σ and lithium transport number tLi) using the Stokes-Einstein and extended Jones-Dole equations [2]. Measured data for the CO2, C2H4, CH4 and CO solubility in pure alkyl-carbonates and their mixtures containing LiPF6, LiFAP or LiTFSI salt are then reported as a function of temperature and salt concentration. Based on the experimental solubility data, the Henry's law constants of the gases in these solvents and electrolytes were deduced and compared with values predicted using the COSMO-RS methodology within the COSMOthermX software. From these results, the molar thermodynamic functions of dissolution, such as the standard Gibbs energy, enthalpy and entropy, as well as the mixing enthalpy of the solvents and electrolytes with each gas in its hypothetical liquid state, were calculated and discussed [3]. Finally, the variation of CO2 solubility with salt addition was evaluated by determining specific ion parameters Hi from the Setchenov coefficients in solution. This study showed that the gas solubility is entropy-driven and can be influenced by the shape, charge density, and size of the anions of the lithium salt.
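
A minimal sketch of the two data-reduction steps named above, assuming mole-fraction solubility data: the Henry's law constant in the infinite-dilution approximation, K_H = p/x, and the Setchenov coefficient from the salting relation log10(S0/S) = Ks·c_salt. All numerical values are invented placeholders, not the measured data of this work.

```python
import math

# Henry's law constant from a measured gas mole-fraction solubility x at
# partial pressure p (infinite-dilution approximation).
def henry_constant(p_gas_kpa: float, x_gas: float) -> float:
    """K_H = p / x, in kPa."""
    return p_gas_kpa / x_gas

# Setchenov (salting) coefficient from the gas solubility without (s0) and
# with (s) dissolved salt at concentration c_salt.
def setchenov_coefficient(s0: float, s: float, c_salt: float) -> float:
    """K_s from log10(s0 / s) = K_s * c_salt."""
    return math.log10(s0 / s) / c_salt

print(henry_constant(101.3, 0.012))             # hypothetical CO2 in PC, ~8.4e3 kPa
print(setchenov_coefficient(0.012, 0.010, 1.0))  # positive -> salting-out
```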

References

[1] S.A. Freunberger, Y. Chen, Z. Peng, J.M. Griffin, L.J. Hardwick, F. Bardé, P. Novák, P.G. Bruce, Journal of the American Chemical Society 133 (2011) 8040-8047.

[2] P. Porion, Y.R. Dougassa, C. Tessier, L. El Ouatani, J. Jacquemin, M. Anouti, Electrochimica Acta 114 (2013) 95-104.

[3] Y.R. Dougassa, C. Tessier, L. El Ouatani, M. Anouti, J. Jacquemin, The Journal of Chemical Thermodynamics 61 (2013) 32-44.

Relevance: 100.00%

Abstract:

An experimental and numerical study of turbulent fire suppression is presented. For this work, a novel and canonical facility has been developed, featuring a buoyant, turbulent, methane- or propane-fueled diffusion flame suppressed via either nitrogen dilution of the oxidizer or application of a fine water mist. Flames are stabilized on a slot burner surrounded by a co-flowing oxidizer, which allows controlled delivery of either suppressant to achieve a range of conditions from complete combustion through partial and total flame quenching. A minimal supply of pure oxygen is optionally applied along the burner to provide a strengthened flame base that resists liftoff extinction and permits the study of substantially weakened turbulent flames. The carefully designed facility features well-characterized inlet and boundary conditions that are especially amenable to numerical simulation. Non-intrusive diagnostics provide detailed measurements of suppression behavior, yielding insight into the governing suppression processes, and aiding the development and validation of advanced suppression models. Diagnostics include oxidizer composition analysis to determine suppression potential, flame imaging to quantify visible flame structure, luminous and radiative emissions measurements to assess sooting propensity and heat losses, and species-based calorimetry to evaluate global heat release and combustion efficiency. The studied flames experience notable suppression effects, including transition in color from bright yellow to dim blue, expansion in flame height and structural intermittency, and reduction in radiative heat emissions. Still, measurements indicate that the combustion efficiency remains close to unity, and only near the extinction limit do the flames experience an abrupt transition from nearly complete combustion to total extinguishment. Measurements are compared with large eddy simulation results obtained using the Fire Dynamics Simulator, an open-source computational fluid dynamics software package. Comparisons of experimental and simulated results are used to evaluate the performance of available models in predicting fire suppression. Simulations in the present configuration highlight the issue of spurious reignition that is permitted by the classical eddy-dissipation concept for modeling turbulent combustion. To address this issue, simple treatments to prevent spurious reignition are developed and implemented. Simulations incorporating these treatments are shown to produce excellent agreement with the experimentally measured data, including the global combustion efficiency.
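
As a rough illustration of the species-based calorimetry mentioned above, a global combustion efficiency can be framed as the measured heat release divided by the ideal heat release of the supplied fuel. The sketch below assumes this simple definition; the heat of combustion and flow values are hypothetical, and real carbon-balance calorimetry also accounts for CO, soot and unburned hydrocarbons.

```python
# Simplified sketch of a global combustion efficiency estimate, in the
# spirit of the species-based calorimetry described above. Values invented.

DH_CH4 = 50.0e3   # kJ/kg, approximate heat of combustion of methane

def combustion_efficiency(q_released_kw: float, mdot_fuel_kg_s: float,
                          dh_fuel_kj_kg: float = DH_CH4) -> float:
    """Efficiency = measured heat release / ideal heat release of the fuel."""
    return q_released_kw / (mdot_fuel_kg_s * dh_fuel_kj_kg)

# Hypothetical suppressed flame: 4.6 kW measured from 0.1 g/s of methane.
print(f"eta = {combustion_efficiency(4.6, 1.0e-4):.2f}")  # ~0.92
```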

Relevance: 100.00%

Abstract:

Master's dissertation, Electronics and Telecommunications Engineering, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2011

Relevance: 100.00%

Abstract:

Measurement and modeling techniques were developed to improve over-water gaseous air-water exchange measurements for persistent bioaccumulative and toxic chemicals (PBTs). Analytical methods were applied to atmospheric measurements of hexachlorobenzene (HCB), polychlorinated biphenyls (PCBs), and polybrominated diphenyl ethers (PBDEs). Additionally, the sampling and analytical methods are well suited to study semivolatile organic compounds (SOCs) in air, with applications related to secondary organic aerosol formation and urban and indoor air quality. A novel gas-phase cleanup method is described for use with thermal desorption methods for analysis of atmospheric SOCs using multicapillary denuders. The cleanup selectively removed hydrogen-bonding chemicals from samples, including much of the background matrix of oxidized organic compounds in ambient air, and thereby improved precision and method detection limits for nonpolar analytes. A model is presented that predicts gas collection efficiency and particle collection artifact for SOCs in multicapillary denuders using polydimethylsiloxane (PDMS) sorbent. An approach is presented to estimate the equilibrium PDMS-gas partition coefficient (K_PDMS) from an Abraham solvation parameter model for any SOC. A high flow rate (300 L min⁻¹) multicapillary denuder was designed for measurement of trace atmospheric SOCs. Overall method precision and detection limits were determined using field duplicates and compared to the conventional high-volume sampler method. The high-flow denuder is an alternative to high-volume or passive samplers when separation of gas and particle-associated SOCs upstream of a filter and short sample collection time are advantageous. A Lagrangian internal boundary layer transport exchange (IBLTE) model is described. The model predicts the near-surface variation of several quantities with fetch in coastal, offshore flow: 1) modification of potential temperature and gas mixing ratio, 2) surface fluxes of sensible heat, water vapor, and trace gases, using the NOAA COARE Bulk Algorithm and Gas Transfer Model, and 3) vertical gradients in potential temperature and mixing ratio. The model was applied to interpret micrometeorological measurements of the air-water exchange flux of HCB and several PCB congeners in Lake Superior. The IBLTE model can be applied to any scalar, including water vapor, carbon dioxide, dimethyl sulfide, and other scalar quantities of interest with respect to hydrology, climate, and ecosystem science.
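
The Abraham solvation parameter model referred to above is a linear free-energy relationship of the form log K = c + eE + sS + aA + bB + lL. The sketch below shows its mechanical application; the system coefficients are placeholders, not the fitted PDMS-gas values, and the solute descriptors are hypothetical.

```python
# Sketch of an Abraham LFER estimate of log K_PDMS. Coefficients below are
# illustrative placeholders; real PDMS-gas system constants come from fits
# to measured partition data.

ABRAHAM_PDMS = {"c": 0.2, "e": 0.0, "s": 0.6, "a": 1.0, "b": 0.1, "l": 0.85}

def log_k_pdms(E, S, A, B, L, coef=ABRAHAM_PDMS):
    """log K = c + e*E + s*S + a*A + b*B + l*L."""
    return (coef["c"] + coef["e"] * E + coef["s"] * S
            + coef["a"] * A + coef["b"] * B + coef["l"] * L)

# Hypothetical solute descriptors (E, S, A, B, L) for an SOC of interest:
print(f"log K_PDMS ~ {log_k_pdms(1.5, 1.0, 0.0, 0.2, 7.5):.2f}")
```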

Relevance: 100.00%

Abstract:

With the increasing penetration of distributed generation, DC microgrids have become more and more common in the electrical network. Converters are necessary to connect devices in a microgrid, but their switching operation also makes them a source of disturbances. In this thesis, measurement and simulation of the conducted emissions of a DC/DC buck converter within the frequency range 2-150 kHz are studied.
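
As a toy illustration of the kind of analysis involved, the sketch below estimates the spectrum of an idealised buck-converter input-current ripple in the 2-150 kHz band via an FFT. The switching frequency, duty cycle and sampling choices are hypothetical, and a real conducted-emission measurement uses a line impedance stabilisation network and standardised procedures rather than this simplification.

```python
import numpy as np

# Reduce the buck converter to an ideal rectangular input-current ripple and
# look at its spectrum in the 2-150 kHz band. All parameters hypothetical.

fs = 1.0e6                        # sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)    # 20 ms analysis window
f_sw, duty = 50e3, 0.4            # switching frequency and duty cycle
ripple = ((t * f_sw) % 1.0 < duty).astype(float)  # idealised current waveform

spectrum = np.abs(np.fft.rfft(ripple)) / len(ripple)
freqs = np.fft.rfftfreq(len(ripple), 1 / fs)
band = (freqs >= 2e3) & (freqs <= 150e3)
peak = freqs[band][np.argmax(spectrum[band])]
print(f"dominant emission in 2-150 kHz band: {peak / 1e3:.0f} kHz")
```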

Relevance: 100.00%

Abstract:

We demonstrate tomographic imaging of the refractive index of turbid media using bifocal optical coherence refractometry (BOCR). The technique, which is a variant of optical coherence tomography, is based on the measurement of the optical pathlength difference between two foci simultaneously present in a medium of interest. We describe a new method to axially shift the bifocal optical pathlength that avoids the need to physically relocate the objective lens or the sample during an axial scan, and we present an experimental realization based on an adaptive liquid-crystal lens. Experimental results, including video clips, demonstrate refractive index tomography of a range of turbid liquid phantoms, as well as of human skin in vivo.
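
At the risk of oversimplifying, the principle can be sketched as follows: with the two foci separated by Δz in air, refraction in the medium stretches their separation, and to paraxial first order the measured optical pathlength difference is OPD ≈ n_phase·n_group·Δz. Assuming comparable phase and group indices gives the toy estimate below; the real BOCR analysis includes numerical-aperture and dispersion corrections, and the numbers here are hypothetical.

```python
import math

# Paraxial first-order estimate of refractive index from the bifocal OPD;
# assumes n_phase ~ n_group and ignores NA and dispersion corrections.

def refractive_index(opd_um: float, dz_um: float) -> float:
    """n ~ sqrt(OPD / dz) under the stated approximations."""
    return math.sqrt(opd_um / dz_um)

# Hypothetical reading: 92 um OPD for a 50 um bifocal separation in air.
print(f"n ~ {refractive_index(92.0, 50.0):.3f}")  # ~1.36, a turbid phantom
```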

Relevance: 100.00%

Abstract:

Synthetic-heterodyne demodulation is a useful technique for dynamic displacement and velocity detection in interferometric sensors, as it can provide an output signal that is immune to interferometric drift. With the advent of cost-effective, high-speed real-time signal-processing systems and software, processing of the complex signals encountered in interferometry has become more feasible. In synthetic-heterodyne demodulation, obtaining the actual dynamic displacement or vibration of the object under test requires knowledge of the interferometer visibility and of the argument of two Bessel functions. In this paper, a method is described for determining the former and for setting the Bessel function argument to a set value, which ensures maximum sensitivity. Conventional synthetic-heterodyne demodulation requires the use of two in-phase local oscillators; however, the phase of these oscillators relative to the interferometric signal is unknown. It is shown that, by using two additional quadrature local oscillators, a demodulated signal can be obtained that is independent of this phase difference. The experimental interferometer is a Michelson configuration using a visible single-mode laser, whose current is sinusoidally modulated at a frequency of 20 kHz. The detected interferometer output is acquired using a 250 kHz analog-to-digital converter and processed in real time. The system is used to measure the displacement sensitivity frequency response and linearity of a piezoelectric mirror shifter over a range of 500 Hz to 10 kHz. The experimental results show good agreement with two independent techniques: the signal-coincidence method and the so-called n-commuted Pernick method.
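
A minimal sketch of conventional synthetic-heterodyne demodulation may help make the Bessel-function point concrete. Mixing the detector signal with local oscillators at the modulation frequency and its second harmonic, then low-pass filtering, yields terms proportional to J1(C)·sin φ and J2(C)·cos φ; choosing the modulation depth C ≈ 2.63, where J1(C) ≈ J2(C), lets an arctangent recover φ directly. The sketch assumes phase-synchronised oscillators, which is exactly the assumption the paper's quadrature-oscillator method removes; all signal parameters are hypothetical.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Interferometer output with sinusoidal phase modulation:
#   s(t) = A + B*cos(C*cos(w_m t) + phi(t))
# Mixing with cos(w_m t) and cos(2 w_m t) and low-pass filtering gives
# -B*J1(C)*sin(phi) and -B*J2(C)*cos(phi) respectively (Jacobi-Anger).

fs, fm = 250e3, 20e3                       # sample and modulation rates, Hz
t = np.arange(0, 0.05, 1 / fs)
phi = 2.0 * np.sin(2 * np.pi * 500 * t)    # displacement signal to recover
C = 2.63                                   # modulation depth: J1(C) ~ J2(C)
s = 1.0 + 0.8 * np.cos(C * np.cos(2 * np.pi * fm * t) + phi)

b, a = butter(4, 5e3 / (fs / 2))           # low-pass well below f_m
i1 = filtfilt(b, a, s * np.cos(2 * np.pi * fm * t))      # ~ -B*J1*sin(phi)
i2 = filtfilt(b, a, s * np.cos(2 * np.pi * 2 * fm * t))  # ~ -B*J2*cos(phi)
phi_hat = np.unwrap(np.arctan2(-i1, -i2))
phi_hat -= phi_hat.mean()                  # remove interferometric drift term
print(f"recovery error (rms): {np.std(phi_hat - phi):.3f} rad")
```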

Relevance: 100.00%

Abstract:

Thermal effects in uncontrolled factory environments are often the largest source of uncertainty in large-volume dimensional metrology. As the standard temperature for metrology of 20 °C cannot be achieved practically or economically in many manufacturing facilities, the characterisation and modelling of temperature offer a solution for improving the uncertainty of dimensional measurement and quantifying thermal variability in large assemblies. Technologies that currently exist for temperature measurement in the range 0-50 °C are presented, alongside a discussion of their usefulness for monitoring temperatures in a manufacturing context. Particular aspects of production where the technology could play a role are highlighted, as well as practical considerations for deployment. Contact sensors such as platinum resistance thermometers can come closest to the desired accuracy, calculated to be ∼0.02 °C under the most challenging measurement conditions. Non-contact solutions would be most practical in the light controlled factory (LCF), and semi-invasive sensors appear least useful, but all the technologies can play some role during the initial development of thermal variability models.
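
To make the ∼0.02 °C figure concrete: the first-order thermal expansion of a measured length is ΔL = αLΔT, so even small temperature uncertainties matter at large scale. The worked example below uses an assumed 5 m aluminium structure; the material and size are illustrative only.

```python
# First-order thermal expansion dL = alpha * L * dT for a large structure,
# comparing a sensor-limited uncertainty with uncontrolled factory swings.

alpha_al = 23e-6   # 1/degC, aluminium (approximate)
L = 5.0            # metres, hypothetical structure size
for dT in (0.02, 0.5, 2.0):
    print(f"dT = {dT:5.2f} C  ->  dL = {alpha_al * L * dT * 1e6:6.1f} um")
```

With these assumptions, the 0.02 °C case contributes about 2.3 µm over 5 m, while a 2 °C uncontrolled swing contributes two orders of magnitude more.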

Relevance: 100.00%

Abstract:

The paper describes how to integrate audience measurement and site visibility, the main research approaches in outdoor advertising research, into a single concept. Details are given on how GPS is used on a large scale in Switzerland for mobility analysis and audience measurement. Furthermore, the development of a software solution is introduced that allows the integration of all mobility data and poster location information. Finally, a model and its results are presented for the calculation of the coverage of individual poster campaigns and of the number of contacts generated by each billboard.
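
In outline, the coverage and contact calculations reduce to intersecting mobility traces with poster locations. The Python sketch below shows that logic on invented data: coverage as the share of persons passing at least one campaign poster, and contacts as per-billboard passage counts. It is a caricature of the actual model, which additionally incorporates site visibility, ignored here.

```python
# Hypothetical GPS-derived passages: person -> set of billboard ids passed.
trips = {
    "p1": {"b1", "b2"},
    "p2": {"b2"},
    "p3": set(),
}

campaign = {"b1", "b2"}  # billboards belonging to one poster campaign

reached = [p for p, seen in trips.items() if seen & campaign]
coverage = len(reached) / len(trips)                    # share with >= 1 contact
contacts = {b: sum(b in seen for seen in trips.values()) for b in campaign}
print(f"coverage = {coverage:.0%}, contacts = {contacts}")
```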

Relevance: 100.00%

Abstract:

BACKGROUND: Measurement of plasma renin is important for the clinical assessment of hypertensive patients. The most common methods for measuring plasma renin are the plasma renin activity (PRA) assay and the renin immunoassay. The clinical application of renin inhibitor therapy has thrown into focus the differences in information provided by activity assays and immunoassays for renin and prorenin measurement and has drawn attention to the need for precautions to ensure their accurate measurement. CONTENT: Renin activity assays and immunoassays provide related but different information. Whereas activity assays measure only active renin, immunoassays measure both active and inhibited renin. Particular care must be taken in the collection and processing of blood samples and in the performance of these assays to avoid errors in renin measurement. Both activity assays and immunoassays are susceptible to renin overestimation due to prorenin activation. In addition, activity assays performed with peptidase inhibitors may overestimate the degree of inhibition of PRA by renin inhibitor therapy. Moreover, immunoassays may overestimate the reactive increase in plasma renin concentration in response to renin inhibitor therapy, owing to the inhibitor promoting conversion of prorenin to an open conformation that is recognized by renin immunoassays. CONCLUSIONS: The successful application of renin assays to patient care requires that the clinician and the clinical chemist understand the information provided by these assays and the precautions necessary to ensure their accuracy.

Relevance: 100.00%

Abstract:

The purpose of this work is to compile the measurement problems of the pulping process together with the measurement techniques that could solve them. The main emphasis is on online measurement techniques. The work consists of three parts. The first part is a literature survey that presents the basic measurements and control needs of a modern pulping process, covering the entire fibre line from wood handling to bleaching as well as the chemical recovery cycle: the evaporation plant, the recovery boiler, the causticizing plant and the lime kiln. In the second part, the measurement problems and candidate measurement techniques are compiled into a "roadmap". The information was gathered by visiting three Finnish pulp mills and by interviewing experts in equipment and measurement technology. The interviews indicated a need for a better understanding of process chemistry, and concentration measurements were therefore chosen for further study. The final part presents measurement techniques that could solve the concentration measurement problems. The selected techniques are near-infrared spectroscopy (NIR), Fourier transform infrared spectroscopy (FTIR), online capillary electrophoresis (CE) and laser-induced plasma emission spectroscopy (LIPS). All of these techniques can be used as online-connected process development tools. Development costs were estimated for an online device connected to process control; they range from zero person-years for the FTIR technique to five person-years for the CE device, depending on the maturity of the technique and its readiness for solving a given problem. The final part also assesses the techno-economic feasibility of solving one measurement problem: washing loss measurement. Lignin content would describe the true washing loss better than the current measurements, which are based on either sodium or COD washing loss. Lignin content can be measured by UV absorption, and the CE device could also be used for washing loss measurement, at least in the process development phase. The economic analysis rests on many simplifications and is not directly suitable for supporting investment decisions. A better measurement and control system could stabilize the operation of the washing plant. An investment in a stabilizing system is profitable if the actual operating point is far enough from the cost minimum or if the washer operation fluctuates, i.e. the standard deviation of the washing loss is large. A measurement and control system costing €50,000 has a payback time of less than 0.5 years in unstable operation if the COD washing loss varies between 5.2 and 11.6 kg/odt around a setpoint of 8.4 kg/odt, with the dilution factor then varying between 1.7 and 3.6 m³/odt around a setpoint of 2.5 m³/odt.
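
The payback arithmetic at the end is simple enough to sketch. In the toy Python example below, the €50,000 system cost comes from the abstract, while the mill throughput, the achievable reduction in COD washing loss, and the cost assigned to each kilogram of loss are invented placeholders; with these particular assumptions the payback lands under the half-year figure quoted above.

```python
# Back-of-the-envelope payback estimate for the washer measurement and
# control investment. Only the system cost is taken from the text; the
# remaining parameters are hypothetical placeholders.

system_cost_eur = 50_000
production_odt_per_year = 300_000     # hypothetical mill throughput
cod_saving_kg_per_odt = 2.0           # hypothetical stabilisation gain
cost_per_kg_cod = 0.20                # EUR/kg, placeholder loss valuation

annual_saving = production_odt_per_year * cod_saving_kg_per_odt * cost_per_kg_cod
print(f"payback = {system_cost_eur / annual_saving:.2f} years")  # ~0.42
```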

Relevance: 100.00%

Abstract:

Although brand authenticity is gaining increasing interest in consumer behavior research and managerial practice, the literature on its measurement and contribution to branding theory is still limited. This article develops an integrative framework of the concept of brand authenticity and reports the development and validation of a scale measuring consumers' perceived brand authenticity (PBA). A multi-phase scale development process resulted in a 15-item PBA scale measuring four dimensions: credibility, integrity, symbolism, and continuity. This scale is reliable across different brands and cultural contexts. We find that brand authenticity perceptions are influenced by indexical, existential, and iconic cues, whereby some of the latter's influence is moderated by consumers' level of marketing skepticism. Results also suggest that PBA increases emotional brand attachment and word-of-mouth, and that it drives brand choice likelihood through self-congruence for consumers high in self-authenticity.
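
As a hedged illustration of the kind of reliability evidence behind a claim like "reliable across different brands and cultural contexts", the sketch below computes Cronbach's alpha for a simulated four-item dimension. The data are randomly generated, not PBA responses, and alpha is only one of the checks used in multi-phase scale validation.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for a respondents x items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulate 200 respondents answering four items driven by one shared factor
# (e.g. a hypothetical 'credibility' dimension) plus item-specific noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = latent + rng.normal(scale=0.8, size=(200, 4))
print(f"alpha = {cronbach_alpha(scores):.2f}")  # ~0.85 with these settings
```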