938 results for Relative deviation
Abstract:
Diffusion coefficients (D12) are fundamental properties in research and industry, but the scarcity of experimental data and the lack of equations that estimate them accurately and reliably in compressed or condensed phases are important limitations. The main objectives of this work comprise: i) the compilation of a large database of D12 values for gaseous, liquid, and supercritical systems; ii) the development and validation of new models for diffusion coefficients at infinite dilution, applicable over wide ranges of temperature and density, for systems containing components very distinct in polarity, size, and symmetry; iii) the assembly and testing of an experimental setup to measure diffusion coefficients in liquids and supercritical fluids. Regarding modelling, a new expression for diffusion coefficients of hard spheres at infinite dilution was developed and validated using molecular dynamics data (average absolute relative deviation, AARD = 4.44%). Binary diffusion coefficients of real systems were also studied. To that end, an extensive database of diffusivities of real systems in gases and dense solvents was compiled (622 binary systems, totalling 9407 experimental points and 358 molecules) and used to validate the new models developed in this thesis. A set of new models was proposed for the calculation of diffusion coefficients at infinite dilution using different approaches: i) two molecularly based models with one system-specific parameter, applicable to gaseous, liquid, and supercritical systems, where the solvent is restricted to nonpolar or weakly polar (global AARDs in the range 4.26-4.40%); ii) two molecularly based two-parameter models, applicable in all physical states, for any solute diluted in any solvent (nonpolar, weakly polar, or polar).
Both models give rise to global errors between 2.74% and 3.65%; iii) a one-parameter correlation, specific to diffusion coefficients in supercritical carbon dioxide (SC-CO2) and liquid water (AARD = 3.56%); iv) nine empirical and semi-empirical two-parameter correlations, dependent only on the temperature and/or density and/or viscosity of the solvent. These last models are very simple and show excellent results (AARDs between 2.78% and 4.44%) for liquid and supercritical systems; and v) two predictive equations for diffusivities of solutes in SC-CO2, both with global errors below 6.80%. Overall, it should be highlighted that the new models cover the wide variety of systems and molecules generally encountered. The results obtained are consistently better than those achieved with the models and approaches found in the literature. For the one- and two-parameter correlations, it was shown that the parameters can be fitted using a very small set of data and subsequently used to predict D12 values far from the original set of points. A new experimental apparatus for measuring binary diffusion coefficients by chromatographic techniques was assembled and tested. The equipment, the experimental procedure, and the analytical calculations required to obtain D12 values by the chromatographic peak broadening method were evaluated by measuring the diffusivities of toluene and acetone in SC-CO2. Subsequently, diffusion coefficients of eucalyptol in SC-CO2 were measured over the ranges 202-252 bar and 313.15-333.15 K. The experimental results were analysed with correlations and predictive models for D12.
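The AARD figure of merit quoted throughout the abstract has a standard definition; a minimal sketch in Python, with invented experimental versus modelled D12 values for illustration:

```python
def aard(exp_values, calc_values):
    """Average absolute relative deviation in percent:
    AARD = (100/N) * sum(|calc - exp| / exp)."""
    if not exp_values or len(exp_values) != len(calc_values):
        raise ValueError("series must be non-empty and of equal length")
    return 100.0 / len(exp_values) * sum(
        abs(c - e) / e for e, c in zip(exp_values, calc_values)
    )

# invented experimental vs. modelled D12 values (cm2/s)
deviation = aard([1.0e-5, 2.0e-5, 4.0e-5], [1.1e-5, 1.9e-5, 4.0e-5])
```

A model is then ranked by its global AARD over the whole database, exactly as the percentages quoted above.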
Abstract:
The work presented describes the development and evaluation of two flow-injection analysis (FIA) systems for the automated determination of carbaryl in spiked natural waters and commercial formulations. Samples are injected directly into the system, where they are subjected to alkaline hydrolysis, forming 1-naphthol. This product is readily oxidised at a glassy carbon electrode. The electrochemical behaviour of 1-naphthol allows the development of an FIA system with an amperometric detector in which 1-naphthol determination, and thus measurement of carbaryl concentration, can be performed. Linear response over the range 1.0×10⁻⁷ to 1.0×10⁻⁵ mol L⁻¹, with a sampling rate of 80 samples h⁻¹, was recorded. The detection limit was 1.0×10⁻⁸ mol L⁻¹. Another FIA manifold was constructed, but this used a colorimetric detector. The methodology was based on the coupling of 1-naphthol with phenylhydrazine hydrochloride to produce a red complex with maximum absorbance at 495 nm. The response was linear from 1.0×10⁻⁵ to 1.5×10⁻³ mol L⁻¹, with a detection limit of 1.0×10⁻⁶ mol L⁻¹. Sample throughput was about 60 samples h⁻¹. Validation of the results provided by the two FIA methodologies was performed by comparing them with results from a standard HPLC-UV technique. The relative deviation was <5%. Recovery trials were also carried out, and the values obtained ranged from 97.0 to 102.0% for both methods. The repeatability (RSD, %) of 12 consecutive injections of one sample was 0.8% and 1.6% for the amperometric and colorimetric systems, respectively.
Abstract:
The ground-based Atmospheric Radiation Measurement Program (ARM) and NASA Aerosol Robotic Network (AERONET) routinely monitor clouds using zenith radiances at visible and near-infrared wavelengths. Using the transmittance calculated from such measurements, we have developed a new retrieval method for cloud effective droplet size and conducted extensive tests for non-precipitating liquid water clouds. The underlying principle is to combine a liquid-water-absorbing wavelength (i.e., 1640 nm) with a non-water-absorbing wavelength for acquiring information on cloud droplet size and optical depth. For simulated stratocumulus clouds with liquid water path less than 300 g m−2 and horizontal resolution of 201 m, the retrieval method underestimates the mean effective radius by 0.8 µm, with a root-mean-squared error of 1.7 µm and a relative deviation of 13%. For actual observations with a liquid water path less than 450 g m−2 at the ARM Oklahoma site during 2007-2008, our 1.5-min-averaged retrievals are generally larger by around 1 µm than those from combined ground-based cloud radar and microwave radiometer at a 5-min temporal resolution. We also compared our retrievals to those from combined shortwave flux and microwave observations for relatively homogeneous clouds, showing that the bias between these two retrieval sets is negligible, but the error of 2.6 µm and the relative deviation of 22% are larger than those found in our simulation case. Finally, the transmittance-based cloud effective droplet radii agree to better than 11% with satellite observations and have a negative bias of 1 µm. Overall, the retrieval method provides reasonable cloud effective radius estimates, which can enhance the cloud products of both ARM and AERONET.
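The comparison statistics quoted above (bias, root-mean-squared error, relative deviation) can be sketched under their usual definitions; the exact normalisation used in the paper is not stated, so treating the deviation as relative to the reference series is an assumption, and the radii below are invented:

```python
import math

def compare_retrievals(reference, retrieved):
    """Bias (retrieved - reference), root-mean-squared error, and mean
    relative deviation (%) between two equal-length series of cloud
    effective radii (micrometres)."""
    n = len(reference)
    diffs = [r - t for t, r in zip(reference, retrieved)]
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    rel = 100.0 / n * sum(abs(d) / t for d, t in zip(diffs, reference))
    return bias, rmse, rel

# hypothetical radar/radiometer reference vs. transmittance retrievals (um)
bias, rmse, rel = compare_retrievals([8.0, 10.0, 12.0], [9.0, 10.5, 11.0])
```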
Abstract:
This work describes the electroanalytical determination of pendimethalin herbicide levels in natural waters, river sediment and baby food samples, based on the electro-reduction of the herbicide on the hanging mercury drop electrode using square wave voltammetry (SWV). A number of experimental and voltammetric conditions were evaluated, and the best responses were achieved in Britton-Robinson buffer solutions at pH 8.0, using a frequency of 500 s⁻¹, a scan increment of 10 mV and a square wave amplitude of 50 mV. Under these conditions, pendimethalin is reduced in an irreversible process, with two reduction peaks at −0.60 V and −0.71 V against a Ag/AgCl reference system. Analytical curves were constructed, and the detection limits were calculated to be 7.79 µg L⁻¹ and 4.88 µg L⁻¹ for peak 1 and peak 2, respectively. The precision and accuracy were determined as a function of experimental repeatability and reproducibility, which showed relative standard deviation values lower than 2% for both voltammetric peaks. The applicability of the proposed methodology was evaluated in natural water, river sediment and baby food samples. The calculated recovery efficiencies demonstrate that the proposed methodology is suitable for determining contamination by pendimethalin in these samples. Additionally, adsorption isotherms were used to evaluate the behavior of pendimethalin in river sediment samples.
Abstract:
Yield mapping represents the spatial variability of the attributes of a productive area and supports interventions in the following season, for example through site-specific input application. This trial aimed to verify the influence of sampling density and the type of interpolator on the precision of yield maps produced from manual grain sampling, a solution usually adopted when a combine with a yield monitor cannot be used. A yield map was generated from data obtained by a combine equipped with a yield monitor during corn harvesting. From this map, 84 sample grids were established, and 252 yield maps were created with three interpolators: inverse square distance, inverse distance, and ordinary kriging. These maps were then compared with the original one using the coefficient of relative deviation (CRD) and the kappa index. The loss of yield-mapping information increased as the sampling density decreased, and it also depended on the interpolation method used. A multiple regression model was fitted for the variable CRD as a function of the spatial variability index and the sampling density. This model is intended to help the farmer define the sampling density for manual yield mapping when problems with the yield monitor arise.
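The two inverse-distance interpolators named above can be sketched as a single weighted average whose exponent selects the variant (ordinary kriging is more involved and is not reproduced here); the sample coordinates and yields are invented:

```python
def idw(samples, target, power=2.0):
    """Inverse-distance-weighted estimate at `target` from `samples`,
    a list of ((x, y), value) pairs. power=1.0 gives inverse distance,
    power=2.0 inverse square distance."""
    num = den = 0.0
    for (x, y), value in samples:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return value  # target coincides with a sample point
        w = d2 ** (-power / 2.0)
        num += w * value
        den += w
    return num / den

# hypothetical corn-yield samples (t/ha) on a coarse manual grid
grid = [((0.0, 0.0), 8.2), ((50.0, 0.0), 7.4), ((0.0, 50.0), 9.1)]
estimate = idw(grid, (25.0, 25.0), power=2.0)
```

Interpolating every cell of a fine grid this way yields one of the 252 candidate maps to compare against the monitor-derived original.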
Abstract:
Clinical efficacy of aerosol therapy in premature newborns depends on the efficiency of delivery of the aerosolized drug to the bronchial tree. To study the influence of various anatomical, physical, and physiological factors on aerosol delivery in preterm newborns, appropriate in vitro models are crucial, but none are currently available. We therefore constructed the premature infant nose throat model (PrINT-Model), an upper-airway model corresponding to a premature infant of 32-wk gestational age, by three-dimensional (3D) reconstruction of a three-planar magnetic resonance imaging scan and subsequent 3D printing. Validation was performed by visual comparison and by comparison of total airway volume. To study the feasibility of measuring aerosol deposition, budesonide was aerosolized through the cast, and lung dose was expressed as a percentage of the nominal dose. The airway volumes of the initial magnetic resonance imaging scan and the validation computed tomography scan showed a relative deviation of 0.94%. Lung dose was 61.84% at low flow (1 L/min) and 9.00% at high flow (10 L/min), p < 0.0001. 3D reconstruction provided an anatomically accurate surrogate of the upper airways of a 32-wk-old premature infant, making the model suitable for future in vitro testing.
Abstract:
The normal boiling point is a fundamental thermophysical property, important in describing the transition between the vapor and liquid phases. A reliable method for predicting it is of great value, especially for compounds with no available experimental data. In this work, an improved second-order group-contribution method for determining the normal boiling point of organic compounds was developed using experimental data for 632 organic components; it is based on the Joback first-order functional groups, with some modifications and additional functional groups. The method can distinguish most structural isomers and stereoisomers, including the cis- and trans-isomers of organic compounds. First- and second-order contributions are given for hydrocarbons and hydrocarbon derivatives containing carbon, hydrogen, oxygen, nitrogen, sulfur, fluorine, chlorine and bromine atoms. The fminsearch routine of MATLAB was used to select an optimal collection of functional groups (65 functional groups) and subsequently to develop the model; it is a direct search method that uses the simplex algorithm of Lagarias et al. The results of the new method are compared with several currently used methods and are shown to be considerably more accurate and reliable. The average absolute deviation of the normal-boiling-point predictions for the 632 organic compounds is 4.4350 K, and the average absolute relative deviation is 1.1047%, which is adequate accuracy for many practical applications.
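The first-order Joback scheme on which the improved method builds predicts Tb = 198.2 K + Σ nᵢΔTb,ᵢ; a minimal sketch using the published Joback contributions for two alkane groups (the 65 refitted groups and second-order corrections of this work are not reproduced here):

```python
# Published first-order Joback Tb contributions (K) for two
# saturated-carbon groups; illustrative subset only.
JOBACK_TB = {"-CH3": 23.58, ">CH2": 22.88}

def boiling_point(groups):
    """First-order Joback estimate: Tb = 198.2 + sum(n_i * dTb_i),
    with `groups` mapping group name -> occurrence count."""
    return 198.2 + sum(n * JOBACK_TB[g] for g, n in groups.items())

# propane = 2 x -CH3 + 1 x >CH2; Joback overestimates small alkanes
tb_propane = boiling_point({"-CH3": 2, ">CH2": 1})  # ~268.24 K
```

A refit such as the one described above would adjust the ΔTb values (e.g. with a Nelder-Mead simplex search, the algorithm behind MATLAB's fminsearch) to minimise the deviation over the 632-compound data set.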
Abstract:
Hallstätter Glacier is the northernmost glacier of Austria. Part of the Northern Limestone Alps, the glacier is located at 47°28'50'' N, 13°36'50'' E in the Dachstein region. Research on changes in the size and mass of Hallstätter Glacier was started in 1842 by Friedrich Simony, at the time of the glacier's advance during the Little Ice Age (LIA). He observed and documented the glacier's retreat from its last maximum extension in 1856, and Hallstätter Glacier remains a subject of scientific research to this day. In this thesis, methods and results of the ongoing mass-balance measurements are presented and compared with long-term volume changes and meteorological observations. The current mass-balance monitoring program, using the direct glaciological method, was started in 2006. In this context, the ice thickness was measured with ground-penetrating radar in 2009. The results are used, together with digital elevation models reconstructed from historical maps and recent digital elevation models, to calculate changes in the shape and volume of Hallstätter Glacier. Based on current meteorological measurements near the glacier and long-term homogenized climate data provided by HISTALP, time series of precipitation and temperature beginning at the LIA are produced. These monthly precipitation and monthly mean temperature data are used to compare the results of a simple degree-day model with the volume change calculated from the difference of the digital elevation models. The two years of direct mass-balance measurements are used to calibrate the degree-day model, and a number of possible future scenarios are produced to indicate prospective changes. Within the 150-year period between 1856 and 2007, Hallstätter Glacier lost 1940 m of its length and 2.23 km² of its area; 37% of the initial volume of 1856 remains. This retreat was accompanied by a change in climate.
Applying a 30-year running average shows an increase in precipitation of 18.5% and a warming of 1.3 °C near the glacier between 1866 and 1993. The mass loss continued in the hydrological years 2006/2007 and 2007/2008, with mean specific mass balances of -376 mm and -700 mm, respectively. Applying a temperature correction for the different minimum elevations of the glacier, the degree-day approach based on the two measured mass balances can reproduce the sign and order of magnitude of the volume change of Hallstätter Glacier since 1856; nevertheless, the relative deviation is significant. Future scenarios show that 30% of the entire glacier volume remains after subtracting the elevation changes between the digital elevation models of 2002 and 2007 ten times from the 2007 surface. The past and present mass changes of Hallstätter Glacier show a retreating glacier as a consequence of rising temperatures. Owing to high precipitation, which increased with the preceding warming, Hallstätter Glacier can and will persist at lower elevations compared with inner-alpine glaciers.
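A classical degree-day melt model of the kind calibrated above can be sketched as follows; the degree-day factor, melt threshold and 30-day months are illustrative assumptions, not the calibrated values of the thesis:

```python
def degree_day_melt(monthly_means, ddf=4.5, threshold=0.0, days=30):
    """Degree-day melt (mm w.e.): ddf * positive degree-days, summed over
    months. ddf is in mm w.e. per K per day; values here are illustrative."""
    return sum(
        ddf * (t - threshold) * days
        for t in monthly_means
        if t > threshold
    )

# hypothetical summer half-year of monthly mean temperatures (deg C)
melt = degree_day_melt([-1.5, 0.5, 3.0, 5.5, 4.0, 0.0])  # -> 1755.0
```

Subtracting such melt from the accumulated precipitation gives the modelled mass balance that is compared against the geodetic volume change.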
Abstract:
The neutron Howitzer container at the Neutron Measurements Laboratory of the Nuclear Engineering Department of the Polytechnic University of Madrid (UPM) is equipped with a 241Am-Be neutron source of 74 GBq at its center. The container allows the source to be in either the irradiation or the storage position. To measure the neutron fluence rate spectra around the Howitzer container, measurements were performed with a Bonner sphere spectrometer, and the spectra were unfolded using the NSDann program. A calibrated LB6411 neutron area monitor was used to measure the ambient dose equivalent rates, H*(10). Detailed Monte Carlo simulations were performed to calculate the measured quantities at the same positions; the maximum relative deviation between simulations and measurements was 19.53%. After validation, the simulated model was used to calculate the equivalent dose rate in several key organs of a voxel phantom. The computed doses in the skin and the lenses of the eyes are within the ICRP recommended dose limits, as is the H*(10) value for the storage position.
Abstract:
The comparison of the results of the dendrochronological and dendroclimatological analysis of Aleppo pine from the Tlemcen state forest was carried out under particular site conditions. The analysis of annual ring growth and of the ratios of relative deviations between successive rings shows a clear regressive trend in young trees. The mean sensitivity (SM) and the cross-dating coefficients (SR) for the young and the older trees, respectively, confirm the fairly strong dependence of the former on climatic factors, particularly rainfall. The results of this work made it possible to establish a rainfall/radial-growth relationship as a function of the age at which the rings formed. Thus these results, particularly the mean sensitivity (SM) values, establish that the probable consequences of climatic variations appreciably influence young trees. Also, to avoid the physio-biological changes linked to tree ageing, it is preferable to compare rings over a period of 40 to 50 years, as was done for the six samples selected in the study area.
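The mean sensitivity (SM) statistic used above has a standard dendrochronological definition: the average relative deviation between successive ring widths. A minimal sketch, with invented ring widths:

```python
def mean_sensitivity(widths):
    """Mean sensitivity (SM) of a ring-width series: the mean of
    |2 * (x[t+1] - x[t]) / (x[t+1] + x[t])| over successive rings,
    ranging from 0 (complacent) to 2 (maximally sensitive)."""
    if len(widths) < 2:
        raise ValueError("need at least two ring widths")
    return sum(
        abs(2.0 * (b - a) / (b + a)) for a, b in zip(widths, widths[1:])
    ) / (len(widths) - 1)

# hypothetical ring widths (mm) for a young, climate-sensitive tree
sm = mean_sensitivity([1.2, 0.6, 1.0, 0.4])
```

Higher SM values flag series whose year-to-year growth tracks climate closely, which is how the young trees' rainfall dependence is diagnosed.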
Abstract:
Ironically, the "learning of percent" is one of the most problematic aspects of school mathematics. In our view, these difficulties are not associated with the arithmetic aspects of "percent problems", but mostly with two methodological issues: first, providing students with a simple and accurate understanding of the rationale behind the use of percent, and second, overcoming the psychological complexities of a fluent and comprehensive understanding of the sometimes peculiar wordings of "percent problems". Before talking about percent, it is necessary to acquaint students with the much more fundamental and important (regrettably, not covered by the school syllabus) classical concepts of quantitative and qualitative comparison of values, to give students the opportunity to learn the relevant standard terminology, and to let them become accustomed to the conventional turns of speech. Further, it makes sense to briefly touch on the issue (important in its own right) of different representations of numbers. Percent is just one of the technical, but common, forms of data representation: p% = p × % = p × 0.01 = p × 1/100 = p/100 = p × 10⁻². "Percent problems" involve just two cases: I. the ratio of a variation m to the standard M; II. the relative deviation of a variation m from the standard M. The hardest and most essential part of each specific "percent problem" is not the routine arithmetic involved, but the ability to figure out, and clearly understand, which of the variables in the problem statement is the standard and which is the variation. This, first and foremost, is what teachers need to patiently and persistently teach their students. As a matter of fact, most primary school pupils are not yet ready for the lexical specificity of "percent problems": math teachers should carefully, hand in hand with their students, carry out a linguistic analysis of the wording of each problem.
Schoolchildren must firmly understand that a comparison of objects is only meaningful when we speak about properties which can be objectively expressed in terms of actual numerical characteristics. In our opinion, an adequate acquisition of the teaching unit on percent cannot be achieved in primary school, owing to objective psychological specificities of this age and to the level of general training of the students. Yet, if we want to make this topic truly accessible and practically useful, it should be taught in high school. A final question to the reader (quickly, please): which is greater, π% of e or e% of π?
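The two cases distinguished above reduce to two one-line formulas, sketched here; the closing question also has a quick algebraic answer, since p% of q = pq/100 is symmetric in p and q:

```python
import math

def ratio_percent(m, M):
    """Case I: the variation m expressed as a percentage of the standard M."""
    return 100.0 * m / M

def relative_deviation_percent(m, M):
    """Case II: relative deviation of the variation m from the standard M."""
    return 100.0 * (m - M) / M

# p% of q = (p/100) * q = p*q/100 is symmetric in p and q,
# so pi% of e and e% of pi are equal.
assert math.isclose((math.pi / 100) * math.e, (math.e / 100) * math.pi)
```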
Abstract:
A significant part of the life of a mechanical component is spent in the fatigue crack propagation stage. Several mathematical models are currently available to describe crack-growth behavior; they are classified into two categories in terms of the stress-range amplitude: constant or variable. In general, these propagation models are formulated as an initial value problem, from which the crack evolution curve is obtained by applying a numerical method. This dissertation presents the application of the "Fast Bounds Crack" methodology for establishing upper- and lower-bound functions for the evolution of the crack size. The performance of the methodology was evaluated by the relative deviation and the computational time, relative to approximate numerical solutions obtained by the explicit 4th-order Runge-Kutta method (RK4). A maximum relative deviation of 5.92% was reached, and for the examples solved the computational time was about 130,000 times lower than that of the RK4 method. An engineering application was also carried out to obtain an approximate numerical solution, taken as the arithmetic mean of the upper and lower bounds produced by the methodology, for the case where the evolution law is unknown. The maximum relative error found in this application was 2.08%, which demonstrates the efficiency of the "Fast Bounds Crack" methodology.
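An explicit RK4 integration of a crack-evolution initial value problem, the reference solution mentioned above, can be sketched with a Paris-type growth law da/dN = C·(ΔK)^m; all material parameters and the growth law itself are illustrative, not those of the dissertation:

```python
import math

def paris_rate(a, C=1e-12, m=3.0, dsigma=100.0, Y=1.0):
    """Paris-law growth rate da/dN = C * dK**m, with stress-intensity
    range dK = Y * dsigma * sqrt(pi * a). Parameter values illustrative."""
    dK = Y * dsigma * math.sqrt(math.pi * a)
    return C * dK ** m

def rk4_step(f, a, h):
    """One explicit 4th-order Runge-Kutta step for da/dN = f(a)."""
    k1 = f(a)
    k2 = f(a + 0.5 * h * k1)
    k3 = f(a + 0.5 * h * k2)
    k4 = f(a + h * k3)
    return a + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# integrate the crack size over load cycles, h cycles per step
a, h = 1.0e-3, 100.0
for _ in range(1000):
    a = rk4_step(paris_rate, a, h)
```

Bounding methods avoid this cycle-by-cycle stepping, which is where the large computational-time advantage over RK4 comes from.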
Abstract:
Ionic liquids (ILs) have attracted great attention, from both industry and academia, as alternative fluids for very different types of applications. The large number of possible cations and anions allows a wide range of physical and chemical characteristics to be designed. However, the exhaustive measurement of all these systems is impractical, thus requiring the use of a predictive model for their study. In this work, the predictive capability of the conductor-like screening model for real solvents (COSMO-RS), a model based on unimolecular quantum chemistry calculations, was evaluated for the prediction of the water activity coefficient at infinite dilution, γw∞, in several classes of ILs. A critical evaluation of the experimental and predicted data using COSMO-RS was carried out. The global average relative deviation was found to be 27.2%, indicating that the model presents a satisfactory ability to estimate γw∞ in a broad range of ILs. The results also showed that the basicity of the IL anions plays an important role in their interaction with water and largely determines the enthalpic behavior of binary mixtures composed of ILs and water. Concerning the cation effect, γw∞ generally increases with the cation size, but the cation-anion interaction strength is also important and is strongly correlated with the anion's ability to interact with water. The results reported here are relevant to understanding IL-water interactions and the impact of the various structural features of ILs on γw∞, as they allow the development of guidelines for choosing the ILs with the most enhanced interaction with water.
Abstract:
To assess the quality of school education, much of educational research is concerned with comparisons of test-score means or medians. In this paper, we shift this focus and explore test-score data by addressing some often-neglected questions. In the case of Brazil, the mean test score in Math for students of the fourth grade declined by approximately 0.2 standard deviations in the late 1990s. But what about changes in the distribution of scores? It is unclear whether the decline was caused by deterioration in student performance in the upper and/or lower tails of the distribution. To answer this question, we propose the use of the relative distribution method developed by Handcock and Morris (1999). The advantage of this methodology is that it compares two distributions of test-score data through a single distribution that synthesizes all the differences between them. Moreover, it is possible to decompose the total difference between the two distributions into a level effect (changes in the median) and a shape effect (changes in the shape of the distribution). We find that the decline in average test scores is mainly caused by a worsening in the position of students throughout the distribution of scores and is not specific to any particular quantile of the distribution.
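The core of the relative distribution method is to map each comparison-cohort score through the empirical CDF of the reference cohort; if the resulting grades are uniform on [0, 1], the two distributions coincide. A minimal sketch with invented scores (the level/shape decomposition is not reproduced here):

```python
from bisect import bisect_right

def relative_data(reference, comparison):
    """Relative-distribution grades (Handcock & Morris, 1999): each
    comparison score mapped through the empirical CDF of the reference
    sample. Uniform grades on [0, 1] mean no distributional change."""
    ref = sorted(reference)
    n = len(ref)
    return [bisect_right(ref, y) / n for y in comparison]

# invented fourth-grade Math scores for two assessment years
grades = relative_data([480, 500, 520, 540], [470, 510, 530])
print(grades)  # -> [0.0, 0.5, 0.75]
```

A pile-up of grades below 0.5 across the whole range, rather than in one tail, is the signature of the broad-based decline reported above.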