976 results for Fundamental Parameter Method
Abstract:
X-ray fluorescence (XRF) is a fast, low-cost, nondestructive, and truly multielement analytical technique. The objectives of this study are to quantify the amount of Na+ and K+ in samples of table salt (refined, marine, and light) and to compare three different quantification methodologies using XRF. A fundamental parameter method revealed difficulties in accurately quantifying lighter elements (Z < 22). A univariate methodology based on peak-area calibration is an attractive alternative, even though the additional steps of data manipulation may consume some time. Quantifications were performed with good correlations for both Na (r = 0.974) and K (r = 0.992). A partial least-squares (PLS) regression method with five latent variables was very fast. Na+ quantifications provided calibration errors lower than 16% and a correlation of 0.995. Of great concern was the observation of high Na+ levels in low-sodium salts. The presented application can be performed in a fast and multielement fashion, in accordance with Green Chemistry specifications.
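As an illustration of the multivariate approach mentioned above, the following minimal sketch builds a PLS calibration with five latent variables on purely synthetic "spectra"; scikit-learn, the channel count, the concentration range and the noise level are all assumptions for illustration, not the paper's actual data or pipeline.

# Hedged sketch: PLS calibration with five latent variables on synthetic XRF-like data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_channels = 30, 512                       # hypothetical spectra with 512 energy channels
concentration = rng.uniform(0.1, 40.0, n_samples)     # placeholder Na content (%)
spectra = np.outer(concentration, rng.random(n_channels)) \
          + rng.normal(0.0, 0.5, (n_samples, n_channels))   # toy linear response plus noise

pls = PLSRegression(n_components=5)                   # five latent variables, as in the abstract
predicted = cross_val_predict(pls, spectra, concentration, cv=5).ravel()
r = np.corrcoef(concentration, predicted)[0, 1]
rel_error = np.abs(predicted - concentration) / concentration
print(f"correlation r = {r:.3f}, median relative error = {np.median(rel_error):.1%}")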
Abstract:
One of the main goals of the CoRoT Natal Team is the determination of the rotation period of thousands of stars, a fundamental parameter for the study of stellar evolutionary histories. In order to estimate the rotation period of stars and to understand the associated uncertainties resulting, for example, from discontinuities in the curves and/or a low signal-to-noise ratio, we have compared three different methods for light curve treatment. These methods were applied to many light curves with different characteristics. First, a Visual Analysis was undertaken for each light curve, giving a general perspective on the different phenomena reflected in the curves. The results obtained by this method regarding the rotation period of the star, the presence of spots, or the nature of the star (binary system or other) were then compared with those obtained by two accurate methods: the CLEANest method, based on the DCDFT (Date Compensated Discrete Fourier Transform), and the Wavelet method, based on the Wavelet Transform. Our results show that all three methods have similar levels of accuracy and can complement each other. Nevertheless, the Wavelet method gives more information about the star through the wavelet map, showing the variation of frequencies over time in the signal. Finally, we discuss the limitations of these methods, their efficiency in providing information about the star, and the development of tools to integrate the different methods into a single analysis.
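As a rough illustration of period estimation from an unevenly sampled light curve, the sketch below uses astropy's standard Lomb-Scargle periodogram as a readily available stand-in for the DCDFT underlying CLEANest; the synthetic light curve, its 12.3-day period and the noise level are assumptions, not CoRoT data.

import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
time = np.sort(rng.uniform(0.0, 150.0, 3000))        # days, irregular sampling
true_period = 12.3                                   # assumed rotation period (days)
flux = 1.0 + 0.01 * np.sin(2 * np.pi * time / true_period) \
       + rng.normal(0.0, 0.002, time.size)           # spot-like modulation plus noise

frequency, power = LombScargle(time, flux).autopower(minimum_frequency=1 / 100.0,
                                                     maximum_frequency=1 / 0.5)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"estimated rotation period: {best_period:.2f} d (true value {true_period} d)")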
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Nanotechnologies are rapidly expanding because of the opportunities that the new materials offer in many areas, such as the manufacturing industry, food production, processing and preservation, and the pharmaceutical and cosmetic industries. The size distribution of nanoparticles determines their properties and is a fundamental parameter that needs to be monitored from small-scale synthesis up to bulk production and the quality control of nanotech products on the market. A consequence of the increasing number of applications of nanomaterials is that the EU regulatory authorities are introducing the obligation for companies that make use of nanomaterials to acquire analytical platforms for the assessment of the size parameters of those nanomaterials. In this work, Asymmetrical Flow Field-Flow Fractionation (AF4) and Hollow Fiber Flow Field-Flow Fractionation (HF5), hyphenated with Multiangle Light Scattering (MALS), are presented as tools for a deep functional characterization of nanoparticles. In particular, the applicability of AF4-MALS to the characterization of liposomes in a wide series of media is demonstrated. The technique is then used to explore the functional features of a liposomal drug vector in terms of its biological and physical interaction with blood serum components: a comprehensive approach to understanding the behavior of lipid vesicles in terms of drug release and fusion/interaction with other biological species is described, together with the strengths and weaknesses of the method. Afterwards, the size characterization, size stability, and conjugation of azidothymidine drug molecules with a new generation of metastable drug vectors, the Metal Organic Frameworks, are discussed. Lastly, the applicability of HF5-ICP-MS to the rapid screening of samples posing a potential nanorisk is shown: rather than a deep and comprehensive characterization, a quick and practical methodology is presented that, within a few steps, provides qualitative information on the content of metallic nanoparticles in tattoo ink samples.
Abstract:
AMS subject classification: Primary 49N25, Secondary 49J24, 49J25.
Abstract:
Magnetic resonance imaging (MRI) was used to evaluate a fundamental bioelectrical impedance analysis (BIA) method for predicting muscle and adipose tissue composition in the lower limb, and to compare it with anthropometry. Healthy volunteers (eight men and eight women), aged 41 to 62 years, with mean (S.D.) body mass indices of 28.6 (5.4) kg/m² and 25.1 (5.4) kg/m² respectively, underwent MRI leg scans, from which 20-cm sections of thigh and 10-cm sections of lower leg (calf) were analysed for muscle and adipose tissue content, using specifically developed software. Muscle and adipose tissue were also predicted from anthropometric measurements of circumferences and skinfold thicknesses, and by use of fundamental BIA equations involving section impedance at 50 kHz and tissue-specific resistivities. Anthropometric assessments of circumferences, cross-sectional areas and volumes for total constituent tissues closely matched the MRI estimates. Muscle volume was substantially overestimated (bias: thigh, -40%; calf, -18%) and adipose tissue underestimated (bias: thigh, 43%; calf, 8%) by anthropometry, in contrast to generally better predictions by the fundamental BIA approach for muscle (bias: thigh, -12%; calf, 5%) and adipose tissue (bias: thigh, 17%; calf, -28%). However, both methods demonstrated considerable individual variability (95% limits of agreement 20-77%). In general, there was similar reproducibility for the anthropometric and fundamental BIA methods in the thigh (inter-observer residual coefficient of variation for muscle 3.5% versus 3.8%), but the latter was better in the calf (inter-observer residual coefficient of variation for muscle 8.2% versus 4.5%). This study suggests that the fundamental BIA method has advantages over anthropometry for measuring lower limb tissue composition in healthy individuals.
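For context, a commonly used form of the fundamental BIA relation treats a limb section as a uniform conductor, so that its volume follows from the tissue-specific resistivity, the section length and the impedance measured at 50 kHz; the sketch below encodes that relation with illustrative numbers that are assumptions, not values from the study.

# Minimal sketch: V = rho * L^2 / Z for a uniform-conductor model of a limb section.
def section_volume_cm3(resistivity_ohm_cm: float, length_cm: float, impedance_ohm: float) -> float:
    """Volume of a limb section modelled as a uniform cylindrical conductor."""
    return resistivity_ohm_cm * length_cm ** 2 / impedance_ohm

# e.g. a 20-cm thigh section, assumed muscle resistivity of 150 ohm*cm, measured 25 ohm at 50 kHz
print(f"{section_volume_cm3(150.0, 20.0, 25.0):.0f} cm^3")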
Abstract:
Ultrasonometry seems to have a future in the evaluation of fracture healing. Ultrasound propagation velocity (USPV) decreases significantly at a fracture and then approaches normal values as healing takes place and the bone diameter decreases. In this investigation, both USPV and broadband ultrasound attenuation (BUA) were measured using a model of a transverse mid-diaphyseal osteotomy of sheep tibiae. Twenty-one sheep were operated on and divided into three groups of seven, according to follow-up periods of 30, 60, and 90 days, respectively. The progress of healing of the osteotomy was checked with monthly conventional radiographs. The animals were killed at the end of the observation period of each group, and both the operated and the intact tibiae were resected and submitted to measurement of underwater transverse as well as direct-contact transverse and longitudinal USPV and BUA at the osteotomy site. The intact left tibia of the 21 animals was used as a control, being examined on a symmetrical diaphyseal segment. USPV increased while BUA decreased with the progression of healing, with significant differences between the operated and intact tibiae and between the observation periods for most of the comparisons. There was a strong negative correlation between USPV and BUA. Both USPV and BUA directly reflect and can help predict the healing of fractures, but USPV alone can be used as a fundamental parameter. Ultrasonometry may be of use in clinical application to humans provided adequate adaptations can be developed. (C) 2010 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 29:444-451, 2011
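For orientation, the two quantities used here reduce to simple relations: USPV is the transit distance divided by the time of flight, and BUA is the slope of the attenuation-versus-frequency curve over the analysis band. The sketch below shows that arithmetic with placeholder numbers; the path length, time of flight and frequency band are assumptions, not the sheep-tibia measurements.

import numpy as np

def uspv_m_per_s(path_length_m: float, time_of_flight_s: float) -> float:
    # ultrasound propagation velocity: distance over transit time
    return path_length_m / time_of_flight_s

def bua_db_per_mhz(freq_mhz: np.ndarray, attenuation_db: np.ndarray) -> float:
    # broadband ultrasound attenuation: slope of a linear fit of attenuation against frequency
    slope, _intercept = np.polyfit(freq_mhz, attenuation_db, 1)
    return slope

print(f"USPV = {uspv_m_per_s(0.03, 8.2e-6):.0f} m/s")          # plausible cortical-bone value
freqs = np.linspace(0.2, 0.6, 20)                               # MHz, assumed analysis band
print(f"BUA = {bua_db_per_mhz(freqs, 5.0 + 60.0 * freqs):.1f} dB/MHz")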
Abstract:
Dissertation presented to obtain the Ph.D. degree in Biology
Abstract:
The purpose of this study was to examine the productization of the technical services profit area of the Support Services Centre of the Päijät-Häme Joint Municipal Authority for Social and Health Care, and to produce a principle-level implementation model for cost invoicing. The work addresses service pricing models, i.e. how the costs arising from real estate and technical maintenance services can be allocated to the profit areas that use them. Structuring the services and combining individual outputs form a service product that is offered to the customer using the services. It is characteristic of the product concept that the product can be delivered with the same content now and in the future. The product criteria of a provider offering service products include clear product groups, quantitative measurability of the products and descriptive product names, and, from the buyer's point of view, the products have clear content and pricing. The most important benefit of productization is a clearer view of the organization's cost structure, which is why the quantity, quality and price of the products must be definable. The customer can be offered pre-priced service components, so that tailoring and modularization of services become possible by bundling services. Internal invoicing and productization in non-commercial bodies, such as joint municipal authorities, enable the standardization of services, and management decision-making improves thanks to sector-specific cost awareness. The first part of the study focuses on concepts related to services, such as productization thinking, service packages, measurement of service quality and service pricing. In addition, the study discusses the principles of activity-based costing used for pricing technical services and of internal renting. The topics covered in the theoretical framework were utilized in a survey addressed to hospital districts. Together with the theory and the survey results, a model for productizing the services was created for the target organization.
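As a toy illustration of the activity-based costing and internal invoicing discussed above, the sketch below pools maintenance costs per activity and allocates them to internal customer units via cost drivers; every activity name, driver and figure is hypothetical, not taken from the target organization.

# Hedged sketch of activity-based cost allocation for internal invoicing of technical services.
activity_costs = {"hvac_maintenance": 120_000.0, "electrical_work": 80_000.0}   # EUR/year, assumed
driver_totals = {"hvac_maintenance": 40_000.0, "electrical_work": 2_000.0}      # m2 served, work orders
driver_usage = {                                       # driver consumption by internal customer unit
    "surgery_unit": {"hvac_maintenance": 6_000.0, "electrical_work": 250.0},
    "laboratory":   {"hvac_maintenance": 3_500.0, "electrical_work": 400.0},
}

rates = {activity: activity_costs[activity] / driver_totals[activity] for activity in activity_costs}
for unit, usage in driver_usage.items():
    charge = sum(rates[activity] * amount for activity, amount in usage.items())
    print(f"{unit}: {charge:,.0f} EUR/year")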
Abstract:
Hydrogen stratification and atmosphere mixing are very important phenomena in nuclear reactor containments when severe accidents are studied and simulated. Hydrogen generation, distribution and accumulation in certain parts of the containment may pose a great risk of a pressure increase induced by hydrogen combustion and thus challenge the integrity of the NPP containment. Accurate prediction of hydrogen distribution is important with respect to the safety design of an NPP. Modelling methods typically used for containment analyses include both lumped parameter and field codes. The lumped parameter method is universally used in containment codes because of its versatility, flexibility and simplicity. The lumped parameter method allows fast, full-scale simulations in which different containment geometries with the relevant engineered safety features can be modelled. Lumped parameter gas stratification and mixing modelling methods are presented and discussed in this master’s thesis. Experimental research is widely used in containment analyses. The HM-2 experiment on hydrogen stratification and mixing, conducted at the THAI facility in Germany, is calculated with the APROS lumped parameter containment package and the APROS 6-equation thermal hydraulic model. The main purpose was to study whether the convection term included in the momentum conservation equation of the 6-equation model gives any notable advantage over the simplified lumped parameter approach. Finally, a simple containment test case (a high steam release into a narrow steam generator room inside a large dry containment) was calculated with both APROS models. In this case, the aim was to determine the extreme containment conditions in which the effect of the convection term was expected to be largest. The calculation results showed that both the APROS containment model and the 6-equation model could reproduce the hydrogen stratification in the THAI test well if the vertical nodalisation was dense enough. However, in more complicated cases the numerical diffusion may distort the results. Calculation of light gas stratification could probably be improved by applying a second-order discretisation scheme to the modelling of gas flows. If the gas flows are relatively high, the convection term of the momentum equation is necessary to model the pressure differences between adjacent nodes reasonably.
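To make the lumped parameter idea concrete, the toy sketch below treats two containment volumes as single nodes with uniform composition and exchanges gas between them through one junction, upwinding the hydrogen mass fraction; it is a bookkeeping illustration only, not APROS, and all inventories and flows are assumed values.

import numpy as np

gas_mass = np.array([120.0, 480.0])      # kg, total gas inventory in the two nodes (assumed)
h2_mass = np.array([2.0, 0.2])           # kg of hydrogen per node (assumed)
exchange_flow = 0.5                      # kg/s of gas swapped between the nodes (assumed)
dt, t_end = 1.0, 600.0                   # s, explicit time step and simulated period

for _ in np.arange(0.0, t_end, dt):
    dm = exchange_flow * dt
    # hydrogen carried with the exchanged gas, upwinded with each node's own mass fraction
    dh2 = dm * (h2_mass[0] / gas_mass[0] - h2_mass[1] / gas_mass[1])
    h2_mass += np.array([-dh2, +dh2])

print("H2 mass fraction per node:", h2_mass / gas_mass)

A real containment code adds energy and momentum balances, condensation and combustion models; the sketch only shows how node-averaged quantities hide any gradient inside a node, which is why the density of the vertical nodalisation matters for stratification.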
Abstract:
The main objectives of the doctoral thesis presented here are to study the fundamental disintegration (pulping) and flotation stages of a deinking process for high-grade recovered paper, in order to improve the efficiency of these key stages. It contains a complete and very up-to-date theoretical review of the disintegration and flotation process at both the macroscopic and the microscopic level. The laboratory working methodology, the set-up of the equipment, and the analyses performed to evaluate the response of the process (brightness analysis, image analysis and analysis of the effective residual ink concentration) are described in the materials and methods chapter. The start-up work yielded some initial conclusions regarding the need to work with a homogeneous raw material and the non-significance of the disintegration temperature within the working range allowed in the laboratory (20-50°C). The analysis of the mechanical disintegration variables, namely disintegration consistency (c), agitation speed during disintegration (N) and disintegration time (t), shows that disintegration consistency is a fundamental variable. A consistency of 10% marks the limit of existence of mechanical impact forces in the fibrous suspension; at higher consistencies, viscous and acceleration forces dominate the disintegration stage. There is an interaction between consistency and disintegration time, the latter variable being optimized as a function of the consistency value. The agitation speed is significant only for disintegration consistencies below 10%; in those cases, increasing N from 800 to 1400 rpm leads to a decrease of 14 points in the deinkability factor. The analysis of the chemical disintegration variables, namely the concentrations of sodium silicate (% Na2SiO3), hydrogen peroxide (% H2O2) and sodium hydroxide (% NaOH), yields quite significant results. Sodium silicate has a strongly dispersing effect, corroborated by the ink particle diameter distribution curves obtained by image analysis. Sodium hydroxide also has a dispersing effect, although less pronounced than that of sodium silicate. These dispersing effects are mainly due to the increased electrostatic repulsion that these chemical reagents impart to the fibrous suspension, which reduces the removal efficiency of the flotation stage. Hydrogen peroxide, generally used as a bleaching agent, acts in these cases as a neutralizer of the hydroxyl groups coming from both the sodium silicate and the sodium hydroxide, reducing the electrostatic repulsion within the suspension. The analysis of the hydrodynamic flotation variables, namely flotation consistency (c), agitation speed during flotation (N) and applied air flow rate (q), allowed their optimization within the working range permitted in the laboratory. High values of both the agitation speed and the applied air flow rate during flotation allow larger amounts of ink to be removed, and the optimum flotation consistency depends on the flow conditions inside the flotation cell. The analysis methodologies used provide different deinkability factors.
There is a strong correlation among these factors (determined by Pearson correlation coefficients), which supports the use of brightness as a fundamental parameter in the analysis of recovered paper deinking, provided it is complemented with image analysis or with analysis of the effective residual ink concentration. Empirical exponential-type expressions are obtained that relate these deinkability factors to the experimental variables. The study of the flotation kinetics allows the kinetic constants (kBlancor, kERIC, kSupimp) to be calculated as a function of the experimental variables, giving an empirical flotation model which, when related to the microscopic parameters that actually govern ink particle removal, leads to a fundamental model that is much harder to interpret. The study of these kinetics separated by size class also shows that ink particle removal efficiency is highest when the equivalent particle diameter is greater than 50 μm, while particles with equivalent diameters below 15 μm are not removed under the flotation conditions analysed. It can be said that it is physically impossible to remove ink particles of very different diameters with the same efficiency under the same operating conditions. The yield of the process, analysed in terms of solids removal by the flotation stage, showed no significant relationship with any of the experimental variables analysed. It can only be concluded that adding large amounts of sodium silicate causes a decrease in both the solids and the inorganic matter present in the flotation froth.
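A common way to express the size-class behaviour described above is first-order flotation kinetics, C_i(t) = C_i(0) * exp(-k_i * t), with a rate constant per ink particle size class; the sketch below uses illustrative rate constants (not fitted values from the thesis) chosen so that particles below 15 μm are essentially not removed while those above 50 μm are removed efficiently.

import numpy as np

size_classes_um = ["<15", "15-50", ">50"]
k_per_min = np.array([0.00, 0.05, 0.25])      # assumed first-order rate constants (1/min)
c0 = np.array([100.0, 100.0, 100.0])          # initial ink area per class (arbitrary units)
t_min = 10.0                                  # flotation time (min)

c_t = c0 * np.exp(-k_per_min * t_min)
for label, ci, ct in zip(size_classes_um, c0, c_t):
    print(f"{label} um: {100.0 * (1.0 - ct / ci):.0f}% removed after {t_min:.0f} min")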
Abstract:
Increased atmospheric concentrations of carbon dioxide (CO2) will benefit the yield of most crops. Two free air CO2 enrichment (FACE) meta-analyses have shown increases in yield of between 0 and 73% for C3 crops. Despite this large range, few crop modelling studies quantify the uncertainty inherent in the parameterisation of crop growth and development. We present a novel perturbed-parameter method of crop model simulation, using some constraints from observations, that quantifies this uncertainty. The model used is the groundnut (i.e. peanut; Arachis hypogaea L.) version of the general large-area model for annual crops (GLAM). The conclusions are of relevance to C3 crops in general. The increases in yield simulated by GLAM for doubled CO2 were between 16 and 62%. The difference in mean percentage increase between well-watered and water-stressed simulations was 6.8 percentage points. These results were compared to FACE and controlled environment studies, and to sensitivity tests on two other crop models of differing levels of complexity: CROPGRO, and the groundnut model of Hammer et al. [Hammer, G.L., Sinclair, T.R., Boote, K.J., Wright, G.C., Meinke, H., Bell, M.J., 1995. A peanut simulation model. I. Model development and testing. Agron. J. 87, 1085-1093]. The relationship between CO2 and water stress in the experiments and in the models was examined. From a physiological perspective, water-stressed crops are expected to show greater CO2 stimulation than well-watered crops. This expectation has been cited in the literature. However, this result is not seen consistently in either the FACE studies or the crop models. In contrast, leaf-level models of assimilation do consistently show this result. An analysis of the evidence from these models and from the data suggests that scale (canopy versus leaf), model calibration, and model complexity are factors in determining the sign and magnitude of the interaction between CO2 and water stress. We conclude from our study that the statement that 'water-stressed crops show greater CO2 stimulation than well-watered crops' cannot be held to be universally true. We also conclude, preliminarily, that the relationship between water stress and assimilation varies with scale. Accordingly, we provide some suggestions on how studies of a similar nature, using crop models of a range of complexity, could contribute further to understanding the roles of model calibration, model complexity and scale. (C) 2008 Elsevier B.V. All rights reserved.
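The perturbed-parameter idea can be sketched in a few lines: sample uncertain parameters, keep only the ensemble members whose baseline output is consistent with an observational constraint, and summarise the spread of the doubled-CO2 response across the retained members. The toy response function, parameter ranges and observational constraint below are all assumptions; GLAM itself is not reproduced here.

import numpy as np

rng = np.random.default_rng(7)
n_members = 2000
rue = rng.uniform(0.8, 1.6, n_members)            # hypothetical radiation-use-efficiency parameter
co2_factor = rng.uniform(1.05, 1.50, n_members)   # hypothetical yield scaling under doubled CO2

def toy_yield(rue_value, co2_scaling):
    # stand-in for a crop model run (t/ha); illustrative only
    return 2.0 * rue_value * co2_scaling

baseline = toy_yield(rue, 1.0)
observed_yield, tolerance = 2.4, 0.4              # assumed observed baseline-yield constraint (t/ha)
keep = np.abs(baseline - observed_yield) < tolerance

increase = 100.0 * (toy_yield(rue, co2_factor)[keep] / baseline[keep] - 1.0)
print(f"doubled-CO2 yield increase across retained members: {increase.min():.0f}% to {increase.max():.0f}%")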
Abstract:
This paper identifies the major challenges in the area of pattern formation. The work is also motivated by the need to develop a single framework to surmount these challenges. A framework based on the control of macroscopic parameters is proposed. The issue of transformation of patterns is specifically considered. A definition of transformation and four special cases, namely elementary and geometrical transformations obtained by repositioning all or some of the robots in the pattern, are provided. Two feasible tools for pattern transformation, namely a macroscopic parameter method and a mathematical tool, the Moebius transformation (also known as the linear fractional transformation), are introduced. The realization of the unifying framework, considering planning and communication, is reported.
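The Moebius (linear fractional) transformation mentioned above maps a robot position, treated as a complex number z, to w = (a*z + b)/(c*z + d) with ad - bc != 0. The sketch below applies it to an eight-robot circular pattern; the coefficient values are arbitrary examples, not parameters from the paper.

import numpy as np

def moebius(z: np.ndarray, a: complex, b: complex, c: complex, d: complex) -> np.ndarray:
    # linear fractional transformation of complex positions
    if abs(a * d - b * c) < 1e-12:
        raise ValueError("degenerate transformation: ad - bc must be non-zero")
    return (a * z + b) / (c * z + d)

robots = np.exp(2j * np.pi * np.arange(8) / 8)      # unit-circle pattern of eight robots
# with c = 0 the map reduces to rotation/scaling plus translation; a non-zero c bends the pattern
print(moebius(robots, a=np.exp(1j * np.pi / 4), b=1 + 1j, c=0, d=1))
print(moebius(robots, a=1, b=0, c=0.3, d=1))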
Abstract:
The magnetometer is a key instrument on the Solar Orbiter mission. The magnetic field is a fundamental parameter in any plasma: a precise and accurate measurement of the field is essential for understanding almost all aspects of plasma dynamics, such as shocks and stream-stream interactions. Many of Solar Orbiter’s mission goals are focussed on the link between the Sun and space. A combination of in situ measurements by the magnetometer, remote measurements of solar magnetic fields and global modelling is required to determine this link and hence how the Sun affects interplanetary space. The magnetic field is typically one of the most precisely measured plasma parameters and is therefore the most commonly used measurement for studies of waves, turbulence and other small-scale phenomena. It is also related to the coronal magnetic field, which cannot be measured directly. Accurate knowledge of the magnetic field is essential for the calculation of fundamental plasma parameters such as the plasma beta, Alfvén speed and gyroperiod. We describe here the objectives and context of magnetic field measurements on Solar Orbiter, and an instrument that fulfils those objectives as defined by the scientific requirements for the mission.
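For reference, the fundamental plasma parameters named above follow directly from the measured field magnitude together with the plasma density and temperature; the sketch below evaluates them in SI units for typical solar-wind values, which are assumptions rather than Solar Orbiter data.

import numpy as np

mu0 = 4e-7 * np.pi              # vacuum permeability [H/m]
k_B = 1.380649e-23              # Boltzmann constant [J/K]
m_p = 1.67262192e-27            # proton mass [kg]
q_p = 1.602176634e-19           # proton charge [C]

B = 5e-9                        # magnetic field magnitude, 5 nT (assumed)
n = 7e6                         # proton number density, 7 cm^-3 expressed in m^-3 (assumed)
T = 1e5                         # proton temperature [K] (assumed)

alfven_speed = B / np.sqrt(mu0 * n * m_p)                # m/s
plasma_beta = n * k_B * T / (B ** 2 / (2.0 * mu0))       # thermal pressure over magnetic pressure
gyroperiod = 2.0 * np.pi * m_p / (q_p * B)               # s

print(f"v_A = {alfven_speed / 1e3:.0f} km/s, beta = {plasma_beta:.2f}, gyroperiod = {gyroperiod:.1f} s")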
Abstract:
The increasingly demanding design requirements of modern engineering applications have resulted in the development of new materials with improved mechanical properties. Low density, combined with an excellent strength-to-weight ratio and corrosion resistance, makes titanium attractive for application in landing gears. Fatigue control is a fundamental parameter to be considered in the development of mechanical components. The aim of this research is to analyze the fatigue behavior of anodized Ti-6Al-4V alloy and the influence of a shot peening pre-treatment on the experimental data. Axial fatigue tests (R = 0.1) were performed, and a significant reduction in the fatigue strength of anodized Ti-6Al-4V was observed. The shot peening surface treatment, whose objective is to create a compressive residual stress field in the surface layers, proved effective in increasing the fatigue life of the anodized material. Experimental data were represented by S-N curves. Scanning electron microscopy (SEM) was used to observe crack origin sites.
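S-N data of this kind are often summarised with Basquin's relation S = A * N^b; the sketch below fits that form to synthetic stress-life points, which are illustrative placeholders rather than the anodized Ti-6Al-4V results of the study.

import numpy as np

cycles = np.array([1e4, 5e4, 1e5, 5e5, 1e6, 5e6])            # cycles to failure (assumed)
stress_mpa = np.array([780, 690, 650, 560, 520, 450])        # stress amplitudes in MPa (assumed)

# fit log10(S) = log10(A) + b * log10(N)
b, log_a = np.polyfit(np.log10(cycles), np.log10(stress_mpa), 1)
print(f"Basquin fit: S = {10 ** log_a:.0f} * N^({b:.3f}) MPa")
print(f"predicted strength at 1e7 cycles: {10 ** log_a * (1e7) ** b:.0f} MPa")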