971 results for Bounded relative error
Abstract:
Hemolysis is the main cause of rejection of biochemical analyses in veterinary laboratories; however, the relative error caused by hemoglobin on the serum biochemical profile has not been properly established for several species. In order to establish criteria for approval and rejection of hemolyzed samples for serum biochemical tests, we tested the hypothesis that hemolysis causes biochemical changes in canine, bovine, and equine serum and that the laboratory error depends on the species and the degree of hemolysis. Non-hemolyzed serum was contaminated with increasing hemoglobin levels and, using commercial routine reagents, the serum concentrations of uric acid, albumin, cholesterol, triglycerides, and urea, as well as the activities of ALT, AST, CK, and GGT, were quantified in triplicate. The relative error was calculated by comparing hemolyzed and non-hemolyzed samples. Hemolysis did not cause significant error in the determination of albumin in all three species, AST in dogs and cattle, ALT in horses, or CK and cholesterol in dogs. There was a linear increase in uric acid levels in horses and cattle and in triglycerides in all three species. A linear increase in serum urea in all species, CK and cholesterol in cattle, and cholesterol in horses was also observed. Serum AST activity in equine serum and ALT in cattle decreased linearly with hemolysis. It was concluded that hemolysis promotes changes in the canine, equine, and bovine serum chemistry profile; however, the laboratory error does not necessarily compromise the diagnosis in all cases, because the changes depend on the species and the degree of in vitro hemolysis.
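For reference, the abstract does not state the exact formula used; a standard percent relative error against the non-hemolyzed baseline, consistent with the comparison described, would be:

```latex
% Assumed standard definition; C denotes an analyte concentration or activity.
\[
  \mathrm{RE}\,(\%) = \frac{C_{\text{hemolyzed}} - C_{\text{non-hemolyzed}}}{C_{\text{non-hemolyzed}}} \times 100
\]
```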
Abstract:
The effect of ultrasound and osmotic dehydration pretreatments on papaya drying kinetics was investigated. The ultrasound pretreatment was carried out in an ultrasonic bath at 30 A degrees C. The osmotic pretreatment in sucrose solution was carried out in an incubator at 34 A degrees C and agitation of 80 rpm for 210 min. The drying process was conducted in a fixed bed dryer at 70 A degrees C. Experimental data were fitted successfully using the Page model for dried fresh and pretreated fruits, with coefficient of determination greater than 0.9992 and average relative error lower that 14.4 %. The diffusional model was used to describe the moisture transfer, and the effective water diffusivity was identified in the order of 10(-9) m(2) s(-1). It was found that drying rates of osmosed fruits were the lowest due to the presence of infused solutes, while the ultrasound pretreatment contributed to faster drying rates. Evaluation of the dried fruit was performed by means of total carotenoids retention. Ultrasound treatments in distilled water prior to air-drying gave rise to dried papayas with retention of carotenoids in the range 30.4-39.8 % and the ultrasonic-assisted osmotic dehydration of papayas showed carotenoids retention values up to 64.9 %, whereas the dried fruit without pretreatment showed carotenoids retention lower than 24 %.
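The Page model referred to above is commonly written as MR(t) = exp(-k·tⁿ), where MR is the moisture ratio. As an illustration of the fitting procedure (the abstract reports only fit statistics; the data points below are hypothetical), such a fit can be done with SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)."""
    return np.exp(-k * t**n)

# Hypothetical drying-time (min) and moisture-ratio data, for illustration only.
t = np.array([0, 30, 60, 120, 180, 240], dtype=float)
mr = np.array([1.00, 0.74, 0.55, 0.30, 0.17, 0.09])

(k, n), _ = curve_fit(page_model, t, mr, p0=(0.01, 1.0))
pred = page_model(t, k, n)

# Average relative error in %, skipping the t=0 point where MR = 1 by definition.
are = 100 * np.mean(np.abs((mr[1:] - pred[1:]) / mr[1:]))
print(f"k={k:.4f}, n={n:.3f}, average relative error={are:.2f}%")
```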
Abstract:
The objective of this study was the estimation of the lower flammability limits of C-H compounds at 25 °C and 1 atm, at moderate temperatures, and in the presence of diluents. A set of 120 C-H compounds was divided into a correlation set and a prediction set of 60 compounds each. The absolute average relative error for the total set was 7.89%; for the correlation set it was 6.09%; and for the prediction set it was 9.68%. However, it was shown that by considering different sources of experimental data these values were reduced to 6.5% for the prediction set and 6.29% for the total set. The method showed consistency with Le Chatelier's law for binary mixtures of C-H compounds. When tested over a temperature range from 5 °C to 100 °C, the absolute average relative errors were 2.41% for methane, 4.78% for propane, 0.29% for iso-butane, and 3.86% for propylene. When nitrogen was added, the absolute average relative errors were 2.48% for methane, 5.13% for propane, 0.11% for iso-butane, and 0.15% for propylene. When carbon dioxide was added, the absolute relative errors were 1.80% for methane, 5.38% for propane, 0.86% for iso-butane, and 1.06% for propylene.
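Le Chatelier's mixing rule mentioned above is a standard relation; for a binary mixture of fuels with mole fractions y₁ and y₂ (on a combustible basis, y₁ + y₂ = 1) it reads:

```latex
% Le Chatelier's rule for the lower flammability limit of a binary fuel mixture:
\[
  \mathrm{LFL}_{\text{mix}} = \frac{1}{\dfrac{y_1}{\mathrm{LFL}_1} + \dfrac{y_2}{\mathrm{LFL}_2}}
\]
```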
Abstract:
Aboveground tropical tree biomass and carbon storage estimates commonly ignore tree height (H). We estimate the effect of incorporating H on tropics-wide forest biomass estimates in 327 plots across four continents, using 42,656 H and diameter measurements and harvested trees from 20 sites, to answer the following questions: (1) What is the best H-model form and geographic unit to include in biomass models to minimise site-level uncertainty in estimates of destructive biomass? (2) To what extent does including the H estimates derived in (1) reduce uncertainty in biomass estimates across all 327 plots? (3) What effect does accounting for H have on plot- and continental-scale forest biomass estimates? The mean relative error in biomass estimates of destructively harvested trees when including H (mean 0.06) was half that when excluding H (mean 0.13). Power- and Weibull-H models provided the greatest reduction in uncertainty, with regional Weibull-H models preferred because they reduce uncertainty in smaller-diameter classes (< 40 cm D) that store about one-third of biomass per hectare in most forests. Propagating the relationships from destructively harvested tree biomass to each of the 327 plots from across the tropics shows that including H reduces errors from 41.8 Mg ha⁻¹ (range 6.6 to 112.4) to 8.0 Mg ha⁻¹ (range -2.5 to 23.0).
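The Weibull-H model referenced above relates tree height to diameter; a common three-parameter form is H = a(1 − exp(−b·Dᶜ)), where a is the asymptotic height. A minimal fitting sketch (the diameter-height pairs and starting values below are hypothetical, not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_h(d, a, b, c):
    """Weibull height-diameter model: H = a * (1 - exp(-b * D**c))."""
    return a * (1.0 - np.exp(-b * d**c))

# Hypothetical diameter (cm) and height (m) pairs, for illustration only.
d = np.array([10, 20, 30, 40, 60, 80, 100], dtype=float)
h = np.array([12, 19, 24, 28, 33, 36, 38], dtype=float)

(a, b, c), _ = curve_fit(weibull_h, d, h, p0=(40.0, 0.03, 0.9), maxfev=10000)
print(f"a={a:.1f} m (asymptotic height), b={b:.4f}, c={c:.3f}")
```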
Abstract:
Purpose: To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. Methods: One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, using a method developed in our lab (US patent). A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells, called samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparison with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Results: Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient quantity of endothelial cells (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient quantity of endothelial cells (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient quantity of endothelial cells (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient quantity of endothelial cells (RE > 0.05); customized sample size, 382 ± 159 cells. Conclusions: A very high number of CSM examinations had sampling errors according to the Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
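The link between sample size and the relative error of a mean underlies the "customized sample size" above. A textbook approximation, n ≥ (z·CV/RE)², is sketched below; this is an assumption for illustration, not the study's patented method:

```python
import math

def required_sample_size(cv, target_re, z=1.96):
    """Cells needed so the relative error of the mean stays below target_re.

    Textbook approximation n >= (z * CV / RE)^2, where CV is the coefficient
    of variation of single-cell measurements. This is an assumed formula,
    not the Cells Analyzer method described in the abstract.
    """
    return math.ceil((z * cv / target_re) ** 2)

# Hypothetical CV of 30% and the paper's cut-off RE of 0.05 (5%):
print(required_sample_size(cv=0.30, target_re=0.05))  # -> 139 cells
```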
Abstract:
BACKGROUND: Photogrammetry is a widespread technique in the health field and, despite methodological care, there are distortions in the angular readings of photographic images. OBJECTIVE: To measure the error of angular measurements in photographic images with different digital resolutions, using an object with pre-marked angles. METHODS: A rubber sphere with a circumference of 52 cm was used. The object was previously marked with angles of 10°, 30°, 60°, and 90°, and the photographic records were made with the camera's focal axis three meters away and perpendicular to the object, without optical zoom, at resolutions of 3, 5, and 10 megapixels (Mp). All photographic records were stored, and the angular values were analyzed by a previously trained examiner using the ImageJ software. The measurements were taken twice, with an interval of 15 days between them. Subsequently, accuracy, relative error and error in degrees, precision, and the intraclass correlation coefficient (ICC) were calculated. RESULTS: For the 10° angle, the mean accuracy of the measurements was higher for records at 3 Mp resolution than at 5 and 10 Mp. The ICC was considered excellent for all three image resolutions and, among the angles analyzed in the photographic records, greater accuracy, smaller relative error and error in degrees, and greater precision were observed for the 90° angle, regardless of image resolution. CONCLUSION: Photographic records made at 3 Mp resolution provided measurements with higher accuracy and precision and lower error, suggesting that this is the most suitable resolution for imaging angles of 10° and 30°.
Abstract:
This work describes the development and testing of a novel interferometer with two spatially separated, phase-correlated X-ray sources for measuring the real part of the complex refractive index of thin, self-supporting foils. The X-ray sources are two foils in which relativistic electrons with an energy of 855 MeV generate transition radiation. The interferometer, realized at the Mainz Microtron MAMI, consists of a beryllium foil 10 micrometers thick and a nickel sample foil 2.1 micrometers thick. The spatial interference structures are measured as a function of the foil separation in a position-resolving pn-CCD after Fourier analysis of the radiation pulse by means of a silicon single-crystal spectrometer. The phase of the intensity oscillations contains information on the dispersion that the wave generated in the upstream foil experiences in the downstream sample foil. As a case study, the dispersion of nickel was measured in the region around the K absorption edge at 8333 eV, as well as at photon energies around 9930 eV. At both energies, clear interference structures were detected, although the coherence decreases with increasing foil separation and observation angle because of angular mixing. Simulation calculations accounting for the coherence-reducing effects were fitted to the measured data. From these fits, the dispersion of the nickel sample was determined at both energies with a relative accuracy of 1.5% or better, in good agreement with the literature.
Abstract:
The present state of theoretical predictions for hadronic heavy hadron production is not quite satisfactory. The full next-to-leading order (NLO) $\mathcal{O}(\alpha_s^3)$ corrections to the hadroproduction of heavy quarks have raised the leading order (LO) $\mathcal{O}(\alpha_s^2)$ estimates, but the NLO predictions are still slightly below the experimental numbers. Moreover, the theoretical NLO predictions suffer from the usual large uncertainty resulting from the freedom in the choice of renormalization and factorization scales of perturbative QCD. In this light there are hopes that a next-to-next-to-leading order (NNLO) $\mathcal{O}(\alpha_s^4)$ calculation will bring theoretical predictions even closer to the experimental data. Also, the dependence on the factorization and renormalization scales of the physical process is expected to be greatly reduced at NNLO. This would reduce the theoretical uncertainty and therefore make the comparison between theory and experiment much more significant. In this thesis I have concentrated on the part of the NNLO corrections for hadronic heavy quark production where one-loop integrals contribute in the form of a loop-by-loop product. In the first part of the thesis I use dimensional regularization to calculate the $\mathcal{O}(\epsilon^2)$ expansion of scalar one-loop one-, two-, three- and four-point integrals. The Laurent series of the scalar integrals is needed as an input for the calculation of the one-loop matrix elements for the loop-by-loop contributions. Since each factor of the loop-by-loop product has negative powers of the dimensional regularization parameter $\epsilon$ up to $\mathcal{O}(\epsilon^{-2})$, the Laurent series of the scalar integrals has to be calculated up to $\mathcal{O}(\epsilon^2)$. The negative powers of $\epsilon$ are a consequence of ultraviolet and infrared/collinear (or mass) divergences. Among the scalar integrals, the four-point integrals are the most complicated. The $\mathcal{O}(\epsilon^2)$ expansion of the three- and four-point integrals contains in general classical polylogarithms up to $\mathrm{Li}_4$ and $L$-functions related to multiple polylogarithms of maximal weight and depth four. All results for the scalar integrals are also available in electronic form. In the second part of the thesis I discuss the properties of the classical polylogarithms. I present algorithms which allow one to reduce the number of polylogarithms in an expression, and I derive identities for the $L$-functions which have been used intensively to reduce the length of the final results for the scalar integrals. I also discuss the properties of multiple polylogarithms and derive identities expressing the $L$-functions in terms of multiple polylogarithms. In the third part I investigate the numerical efficiency of the results for the scalar integrals, discussing the dependence of the evaluation time on the relative error. In the fourth part of the thesis I present the larger part of the $\mathcal{O}(\epsilon^2)$ results on one-loop matrix elements in heavy flavor hadroproduction containing the full spin information. The $\mathcal{O}(\epsilon^2)$ terms arise as a combination of the $\mathcal{O}(\epsilon^2)$ results for the scalar integrals, the spin algebra, and the Passarino-Veltman decomposition. The one-loop matrix elements will be needed as input in the determination of the loop-by-loop part of the NNLO corrections to hadronic heavy flavor production.
Abstract:
The aSPECT spectrometer was designed to measure the spectrum of protons from the decay of free neutrons with high precision. From this spectrum, the electron-antineutrino angular correlation coefficient "a" can be determined with high accuracy. The goal of this experiment is to determine this coefficient with an absolute relative error below 0.3%, i.e., well below the current literature value of 5%.

First measurements with the aSPECT spectrometer were performed at the Heinz Maier-Leibnitz research neutron source in Munich. However, time-dependent instabilities of the measurement background prevented a new determination of "a".

The present work is based on the latest measurements with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, the background instabilities were already considerably reduced. Furthermore, various modifications were made to minimize systematic errors and to ensure more reliable operation of the experiment. Unfortunately, no usable result could be obtained because of excessive saturation effects in the receiver electronics. Nevertheless, these and further systematic errors were identified and reduced, in some cases even eliminated, from which future beam times at aSPECT will profit.

The main part of the present work deals with the analysis and improvement of the systematic errors caused by the electromagnetic field of aSPECT. This led to numerous improvements; in particular, the systematic errors due to the electric field were reduced. The errors caused by the magnetic field were minimized to the point that an improvement on the current literature value of "a" is now possible. In addition, an NMR magnetometer tailored to the experiment was developed and refined in this work, reducing the uncertainties in the characterization of the magnetic field to a level that is negligible for a determination of "a" with an accuracy of at least 0.3%.
Abstract:
In this work we study a model for breast image reconstruction in digital tomosynthesis, a non-invasive and non-destructive method for the three-dimensional visualization of the inner structures of an object, in which data acquisition consists of measuring a limited number of low-dose two-dimensional projections of the object by moving a detector and an X-ray tube around it within a limited angular range. Reconstructing 3D images from the projections provided by digital tomosynthesis is an ill-posed inverse problem, which leads to a minimization problem whose objective function contains a data-fitting term and a regularization term. The contribution of this thesis is to use techniques from compressed sensing, in particular replacing the standard least-squares data-fitting problem with the minimization of the 1-norm of the residuals, and using the total variation (TV) as the regularization term. We tested two different algorithms: a new alternating minimization algorithm (ADM), and a version of the more standard scaled projected gradient algorithm (SGP) that handles the 1-norm. We performed experiments and analyzed the performance of the two methods by comparing relative errors, numbers of iterations, run times, and the quality of the reconstructed images. In conclusion, we found that the 1-norm and the total variation are valid tools in the formulation of the minimization problem for image reconstruction in digital tomosynthesis, and that the new ADM algorithm reached a relative error comparable to that of a version of the classic SGP algorithm while proving better in speed and in the early appearance of the structures representing the masses.
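In symbols, with system matrix A, projection data b, and regularization weight λ (the exact weighting and constraints used in the thesis are not given here, so this is a generic form of the stated model):

```latex
% Generic L1 data-fitting + total-variation reconstruction problem:
\[
  \min_{x \ge 0} \; \| A x - b \|_1 + \lambda \, \mathrm{TV}(x)
\]
```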
Abstract:
The Byrd Glacier discontinuity is a major boundary crossing the Ross Orogen, with crystalline rocks to the north and primarily sedimentary rocks to the south. Most models for the tectonic development of the Ross Orogen in the central Transantarctic Mountains consist of two-dimensional transects across the belt, but do not address the major longitudinal contrast at Byrd Glacier. This paper presents a tectonic model centering on the Byrd Glacier discontinuity. Rifting in the Neoproterozoic produced a crustal promontory in the craton margin to the north of Byrd Glacier. Oblique convergence of a terrane (the Beardmore microcontinent) during the latest Neoproterozoic and Early Cambrian was accompanied by subduction along the craton margin of East Antarctica. New data presented herein in support of this hypothesis are U-Pb dates of 545.7 ± 6.8 Ma and 531.0 ± 7.5 Ma on plutonic rocks from the Britannia Range, north of Byrd Glacier. After docking of the terrane, subduction stepped out, and the Byrd Group was deposited during the Atdabanian-Botomian across the inner margin of the terrane. Beginning in the upper Botomian, reactivation of the sutured boundaries of the terrane resulted in an outpouring of clastic sediment and folding and faulting of the Byrd Group.