986 results for Bounded relative error


Relevance: 80.00%

Abstract:

Local head losses must be considered to properly estimate the maximum length of drip irrigation laterals. The aim of this work was to develop a model based on dimensional analysis for calculating head loss along laterals, accounting for in-line drippers. Several measurements were performed with 12 emitter models to obtain the experimental data required for developing and assessing the model. Based on the Camargo & Sentelhas coefficient, the model showed excellent precision and accuracy in estimating head loss. The deviation between estimated and observed head-loss values increased with the head loss itself, and the maximum deviation reached 0.17 m. The maximum relative error was 33.75%, and only 15% of the data set presented relative errors higher than 20%. Neglecting local head losses led to overestimating the maximum lateral length by 19.48% for pressure-compensating drippers and 16.48% for non-pressure-compensating drippers.
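
The abstract does not reproduce the dimensional-analysis model itself, but the accounting it describes — friction loss along the pipe plus a local loss at each in-line emitter — can be sketched with the classical Darcy-Weisbach and Blasius relations. All numbers below (pipe geometry, flow rate, and the emitter loss coefficient `K_local`) are hypothetical illustrations, not the paper's data:

```python
import math

def lateral_head_loss(L, D, Q, n_emitters, K_local=0.1, nu=1.0e-6, g=9.81):
    """Head loss (m) along a drip lateral: Darcy-Weisbach friction loss plus
    a local loss K*v^2/(2g) at each in-line emitter. K_local is a hypothetical
    emitter loss coefficient, not a value from the paper."""
    A = math.pi * D ** 2 / 4.0
    v = Q / A                      # mean velocity (m/s)
    Re = v * D / nu                # Reynolds number
    f = 0.316 * Re ** -0.25       # Blasius friction factor (smooth pipe, turbulent flow)
    h_friction = f * (L / D) * v ** 2 / (2 * g)
    h_local = n_emitters * K_local * v ** 2 / (2 * g)
    return h_friction, h_local

# Hypothetical 50 m lateral, 16 mm bore, 0.2 l/s, 100 in-line drippers:
hf, hl = lateral_head_loss(L=50.0, D=0.016, Q=2.0e-4, n_emitters=100)
local_share = hl / (hf + hl)   # fraction of total loss missed if local losses are neglected
```

Ignoring `h_local` underestimates the total loss and therefore overestimates the maximum feasible lateral length, which is the effect the abstract quantifies.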

Relevance: 80.00%

Abstract:

The limiting dilution assay (LDA) is extensively used by immunologists to assess the frequency of responders in a cell population. A series of studies addressing the statistical method of choice in an LDA have been published. However, none of these studies has addressed how many wells should be employed in a given assay. The objective of this study was to demonstrate how a researcher can predict the number of wells needed to obtain results with a given accuracy and, therefore, to help choose an experimental design that fulfills one's expectations. We present the rationale underlying the computation of the expected relative error, based on simple binomial distributions. A series of simulated in machina experiments was performed to test the validity of the a priori computation of expected errors, confirming the predictions. A step-by-step procedure for the relative error estimation is given. We also discuss the constraints under which an LDA must be performed.
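
The a priori error computation described above can be illustrated under the standard single-hit Poisson model for an LDA: a well is negative with probability q = exp(−λ), and the delta method propagates the binomial variance of the observed negative fraction into the frequency estimate. This is a minimal sketch of the general idea, not the paper's exact procedure; the per-well dose λ ≈ 1.6 is just an illustrative choice:

```python
import math

def expected_relative_error(n_wells, lam):
    """Expected relative error of the LDA frequency estimate under the
    single-hit Poisson model: with lam responders expected per well, a well
    stays negative with probability q = exp(-lam). The estimate
    lam_hat = -ln(q_hat) inherits the binomial variance of q_hat via the
    delta method: SD(lam_hat) ~ sqrt(q(1-q)/n) / q."""
    q = math.exp(-lam)
    sd_q_hat = math.sqrt(q * (1.0 - q) / n_wells)
    sd_lam_hat = sd_q_hat / q          # |d(-ln q)/dq| = 1/q
    return sd_lam_hat / lam

def wells_needed(lam, target_rel_err):
    """Smallest n giving at most the target expected relative error."""
    q = math.exp(-lam)
    return math.ceil((1.0 - q) / (q * (target_rel_err * lam) ** 2))

err_96 = expected_relative_error(96, 1.6)   # ~0.13 for a 96-well plate
n_for_10pct = wells_needed(1.6, 0.10)       # wells needed for a 10 % expected error
```

Doubling or quadrupling the number of wells shrinks the expected relative error by the usual square-root factor, which is exactly the trade-off the a priori computation lets one plan for.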

Relevance: 80.00%

Abstract:

This work presents a joint optimization of the hybrid operating strategy and the behavior of the combustion engine. Transferring the function modules used in the electronic control unit (ECU) into the simulation environment for longitudinal vehicle dynamics provides an efficient way to apply the original parameterization. At the same time, the behavior of the combustion engine must be reproduced in such a way that both the steady-state and the dynamic behavior, including all relevant influencing factors, can be represented. The tool developed for transferring the ECU functions defined in Ascet into the Simulink simulation environment not only enables simulation of the relevant function modules but also fulfills further important requirements: increased flexibility with respect to changes in data and function revisions, as well as parameterizability of the function modules. The steady-state behavior of the combustion engine is modeled using artificial neural networks. The optimal number of neurons is selected by examining the SSE for the training and verification data. Where necessary, the interpolation behavior is improved by adding Gaussian process models to ensure the targeted model quality; the Gaussian process models generate additional support points, which are incorporated into the modeling with reduced priority. Linear transfer functions are used to model the dynamic system behavior. When minimizing the deviation between the model output and the measurement results, the 2σ interval of the relative error distribution is considered in addition to the SSE.
Implementing the ECU function modules and the actuator-sensor plant models in the simulation environment for longitudinal vehicle dynamics increases the simulation time and enlarges the parameter space. The goal-vector optimization method known from control engineering contributes decisively to a systematic consideration and optimization of the target quantities. The result of the method is characterized by the optimum of the Pareto front of the individual design specifications. Rising simulation times penalize minimum-search methods that require many iterations. To avoid using a random variable, which contributes significantly to increasing the number of iterations, while still allowing a global search of the parameter space, the newly developed method DelaunaySearch is employed. In contrast to known algorithms such as particle swarm optimization or evolutionary algorithms, the new method relies on a systematic analysis of the simulation results obtained so far when searching for the minimum of a cost function. Using the information gained from this analysis, regions with the best prospects for a minimum are identified, so the iterative procedure dispenses with a random variable when determining the next iteration step. The result of the computation is a well-chosen starting point for a local optimization. Building on the simulation of the longitudinal vehicle dynamics, the ECU functions and the actuator-sensor plant models in one simulation environment, the hybrid operating strategy is optimized jointly with the engine control. With the development and implementation of a new function, the link between the operating strategy and the engine control is further extended.
The tools presented enabled not only testing of the new functionality but also an estimate of the improvement potential in fuel consumption and exhaust emissions. Overall, an efficient test environment for the joint optimization of the operating strategy and the combustion engine behavior of a hybrid vehicle was realized.
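
The model-selection step described in the abstract — choosing the network size by the sum of squared errors (SSE) on training and verification data — can be sketched generically. This is an illustration with synthetic data and scikit-learn's MLPRegressor, not the original engine model or tooling:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(400, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(400)   # synthetic stand-in for an engine map
X_train, y_train, X_val, y_val = X[:300], y[:300], X[300:], y[300:]

def sse(model, X, y):
    """Sum of squared errors of a fitted model on (X, y)."""
    r = y - model.predict(X)
    return float(r @ r)

val_sse = {}
for n_neurons in (2, 4, 8, 16):
    net = MLPRegressor(hidden_layer_sizes=(n_neurons,), max_iter=5000,
                       random_state=0).fit(X_train, y_train)
    val_sse[n_neurons] = sse(net, X_val, y_val)   # SSE on the verification data

best_n = min(val_sse, key=val_sse.get)            # smallest verification SSE wins
```

Evaluating the SSE on held-out verification data, rather than on the training data alone, is what guards against simply picking the largest (most overfit) network.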

Relevance: 80.00%

Abstract:

A relatively simple, selective, precise and accurate high-performance liquid chromatography (HPLC) method, based on the reaction of phenylisothiocyanate (PITC) with glucosamine (GL) in alkaline media, was developed and validated to determine glucosamine hydrochloride permeating through human skin in vitro. It is usually problematic to develop an accurate assay for chemicals traversing skin, because the excellent barrier properties of the tissue ensure that only low amounts of the material pass through the membrane, and skin components may leach out of the tissue and interfere with the analysis. In addition, in the case of glucosamine hydrochloride, chemical instability adds further complexity to assay development. The assay, utilising the PITC-GL reaction, was refined by optimizing the reaction temperature, reaction time and PITC concentration. The reaction produces a phenylthiocarbamyl-glucosamine (PTC-GL) adduct, which was separated on a reverse-phase (RP) column packed with 5 µm ODS (C-18) Hypersil particles using a diode array detector (DAD) at 245 nm. The mobile phase was methanol-water-glacial acetic acid (10:89.96:0.04 v/v/v, pH 3.5) delivered to the column at 1 ml min⁻¹, and the column temperature was maintained at 30 °C. Using a saturated aqueous solution of glucosamine hydrochloride, in vitro permeation studies were performed at 32 ± 1 °C over 48 h using human epidermal membranes prepared by a heat-separation method and mounted in Franz-type diffusion cells with a diffusional area of 2.15 ± 0.1 cm². The optimum derivatisation conditions for reaction temperature, reaction time and PITC concentration were found to be 80 °C, 30 min and 1% v/v, respectively. The PTC-Gal and PTC-GL adducts eluted at 8.9 and 9.7 min, respectively. The detector response was found to be linear in the concentration range 0-1000 µg ml⁻¹.
The assay was robust, with intra- and inter-day precisions (expressed as percentage relative standard deviation, %R.S.D.) < 12. Intra- and inter-day accuracy (as a percentage of the relative error, %RE) was ≤ −5.60 and ≤ −8.00, respectively. Using this assay, it was found that GL-HCl permeates through human skin with a flux of 1.497 ± 0.42 µg cm⁻² h⁻¹, a permeability coefficient of 5.66 ± 1.6 × 10⁻⁶ cm h⁻¹ and a lag time of 10.9 ± 4.6 h.
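
The flux, permeability coefficient and lag time quoted above follow from the standard analysis of Franz-cell permeation data: the steady-state flux is the slope of the cumulative permeated amount per unit area versus time, the lag time is the time-axis intercept of that regression line, and the permeability coefficient is Kp = J/C with C the donor concentration. A sketch with synthetic, illustrative numbers — the donor concentration is an assumption chosen to be consistent in magnitude with the reported J and Kp, not a value from the paper:

```python
import numpy as np

# Synthetic cumulative permeation data (µg/cm²) at sampling times (h) in the
# steady-state region — illustrative numbers, not the paper's measurements.
t = np.array([12.0, 18.0, 24.0, 30.0, 36.0, 42.0, 48.0])
Q = np.array([1.5, 10.4, 19.6, 28.5, 37.9, 46.8, 56.1])

slope, intercept = np.polyfit(t, Q, 1)
flux = slope                      # J, µg cm⁻² h⁻¹ (steady-state slope)
lag_time = -intercept / slope     # time-axis intercept of the regression line
C_donor = 264_000.0               # assumed saturated donor concentration (µg/ml)
Kp = flux / C_donor               # permeability coefficient, cm/h
```

With these illustrative data the recovered flux (~1.5 µg cm⁻² h⁻¹), lag time (~11 h) and Kp (~5.7 × 10⁻⁶ cm/h) land in the same range as the values reported in the abstract.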

Relevance: 80.00%

Abstract:

This paper describes new advances in the exploitation of oxygen A-band measurements from the POLDER3 sensor onboard PARASOL, a satellite platform within the A-Train. These developments result not only from accounting for the dependence of the POLDER oxygen parameters on cloud optical thickness τ and on the scene's geometrical conditions but also, more importantly, from a finer understanding of the sensitivity of these parameters to cloud vertical extent. This sensitivity is made possible by the multidirectional character of POLDER measurements. For monolayer clouds, which represent most cloudy conditions, new oxygen parameters are obtained and calibrated from POLDER3 data collocated with the measurements of the two active sensors of the A-Train: CALIOP/CALIPSO and CPR/CloudSat. From a parameterization that depends on (μs, τ), with μs the cosine of the solar zenith angle, a cloud top oxygen pressure (CTOP) and a cloud middle oxygen pressure (CMOP) are obtained, which are estimates of the actual cloud top and middle pressures (CTP and CMP). The performance of CTOP and CMOP is presented by cloud class following the ISCCP classification. In 2008, the correlation coefficient between CMOP and CMP was 0.81 for cirrostratus, 0.79 for stratocumulus and 0.75 for deep convective clouds; the correlation coefficient between CTOP and CTP was 0.75, 0.73 and 0.79 for the same cloud types. The score obtained by CTOP, defined as the confidence in the retrieval for a particular range of inferred values and for a given error, is higher than that of the MODIS CTP estimate. CTOP scores are highest for the most populated CTP bins. For liquid (ice) clouds and an error of 30 hPa (50 hPa), the score of CTOP reaches 50% (70%). From the difference between CTOP and CMOP, a first estimate of the cloud vertical extent h is possible.
A second estimate of h comes from the correlation between the angular standard deviation of the POLDER oxygen pressure, σPO2, and the cloud vertical extent. This correlation is studied in detail for liquid clouds. It is shown to be spatially and temporally robust, except for clouds above land during winter months. The analysis of the correlation's dependence on the scene's characteristics leads to a parameterization providing h from σPO2. For liquid water clouds above the ocean in 2008, the mean difference between the actual cloud vertical extent and the one retrieved from σPO2 (from the pressure difference) is 5 m (−12 m). The standard deviation of the mean difference is close to 1000 m for both methods. POLDER estimates of the cloud geometrical thickness obtain a global score of 50% confidence for a relative error of 20% (40%) of the estimate for ice (liquid) clouds over the ocean. These results remain to be validated outside of the CALIPSO/CloudSat track.

Relevance: 80.00%

Abstract:

This paper describes the first phase of a project attempting to construct an efficient general-purpose nonlinear optimizer using an augmented Lagrangian outer loop with a relative error criterion and an inner loop employing a state-of-the-art conjugate gradient solver. The outer loop can also employ double regularized proximal kernels, a fairly recent theoretical development that leads to fully smooth subproblems. We first enhance the existing theory to show that our approach is globally convergent in both the primal and dual spaces when applied to convex problems. We then present an extensive computational evaluation using the CUTE test set, showing that some aspects of our approach are promising, but some are not. These conclusions in turn lead to additional computational experiments suggesting where to focus our theoretical and computational efforts next.
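
The outer/inner structure described — an augmented Lagrangian outer loop with a multiplier update and a relative stopping test, and an unconstrained inner solve by conjugate gradients — can be sketched on a toy equality-constrained convex problem. This is a textbook version for illustration only, not the authors' solver or its proximal-kernel variant:

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex problem:  min (x0-2)^2 + (x1-1)^2   s.t.  x0 + x1 - 1 = 0
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
h = lambda x: x[0] + x[1] - 1.0

x = np.zeros(2)
lam, rho = 0.0, 10.0           # multiplier estimate and penalty parameter
for _ in range(20):
    # inner loop: unconstrained augmented-Lagrangian subproblem, solved by CG
    aug = lambda z: f(z) + lam * h(z) + 0.5 * rho * h(z) ** 2
    x = minimize(aug, x, method="CG").x
    lam += rho * h(x)          # outer loop: multiplier update
    if abs(h(x)) <= 1e-6 * (1.0 + np.linalg.norm(x)):   # relative stopping test
        break
```

For this problem the iterates approach x* = (1, 0), the minimizer of the quadratic on the constraint line.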

Relevance: 80.00%

Abstract:

This paper builds on Lucas (2000) and on Cysne (2003) to derive and order six alternative measures of the welfare costs of inflation (five of which already exist in the literature) for any vector of opportunity costs. The ordering of the functions is carried out for economies with or without interest-bearing deposits. We provide examples and closed-form solutions for the log-log money demand, both in the unidimensional and in the multidimensional setting (when interest-bearing monies are present). An estimate of the maximum relative error a researcher can incur when using any particular measure is also provided.

Relevance: 80.00%

Abstract:

This thesis collects four papers on monetary economics written under the supervision of Professor Rubens Penha Cysne. The first of these papers assesses the bias occurring in welfare-cost-of-inflation measures due to failing to take into consideration the substitution potential of interest-bearing monies such as bank deposits.[1] The second one tackles the theoretical issue of comparing the generality of the money-in-the-utility-function and the shopping-time models by studying the properties of the demand curves they generate.[2] The third revisits a classic paper by Stanley Fischer on the correlation between the growth rate of money supply and the rate of capital accumulation on the transition path.[3] Finally, the fourth concerns the relative standing of each of six measures of the welfare cost of inflation (one of which is new) with respect to the other five, and an estimate of the maximum relative error one can incur by choosing to employ a particular welfare measure in place of the others.[4]

[1] Cysne, R.P., Turchick, D., 2010. Welfare costs of inflation when interest-bearing deposits are disregarded: A calculation of the bias. Journal of Economic Dynamics and Control 34, 1015-1030.
[2] Cysne, R.P., Turchick, D., 2009. On the integrability of money-demand functions by the Sidrauski and the shopping-time models. Journal of Banking & Finance 33, 1555-1562.
[3] Cysne, R.P., Turchick, D., 2010. Money supply and capital accumulation on the transition path revisited. Journal of Money, Credit and Banking 42, 1173-1184.
[4] Cysne, R.P., Turchick, D., 2011. An ordering of measures of the welfare cost of inflation in economies with interest-bearing deposits. Macroeconomic Dynamics, forthcoming.

Relevance: 80.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 80.00%

Abstract:

Several physicochemical parameters of Brazilian commercial gasoline, such as relative density, distillation curve (temperatures at 10%, 50% and 90% of distilled volume, final boiling point and residue), octane numbers (motor and research octane numbers and anti-knock index), hydrocarbon composition (olefins, aromatics and saturates) and anhydrous ethanol and benzene contents, were predicted from chromatographic profiles obtained by gas chromatography with flame ionization detection (GC-FID) using partial least squares (PLS) regression. GC-FID is a technique intensively used for fuel quality control owing to its convenience, speed, accuracy and simplicity, and its profiles are much easier to interpret and understand than results produced by other techniques. Another advantage is that it can be combined with multivariate methods of analysis, such as PLS. The chromatographic profiles were recorded and used to build PLS models for each property. The standard error of prediction (SEP) was the main parameter considered in selecting the best model. Most of the GC-FID-PLS results, when compared with those obtained by the specification methods of the Brazilian Government Petroleum, Natural Gas and Biofuels Agency (ANP Regulation 309), were very good. In general, all PLS models developed in this work provide unbiased predictions, with low standard errors of prediction and percentage average relative errors (below 11.5 and 5.0, respectively).

Relevance: 80.00%

Abstract:

In this work a new method is proposed for the separate estimation of the parameters of an ARMA spectral model, based on the modified Yule-Walker equations and on the least squares method. The new method consists of applying AR filtering to the generated random process to obtain a new process estimate, from which the ARMA model parameters are re-estimated, yielding a better spectrum estimate. Numerical examples are presented to illustrate the performance of the proposed method, which is evaluated by the relative error and the average variation coefficient.
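
The modified Yule-Walker step that underlies such separate AR/MA estimation can be sketched as follows: for lags k > q, the ARMA autocorrelations obey the pure-AR recursion, so stacking those equations and solving by least squares recovers the AR coefficients. This is only the standard MYW building block, not the paper's full method (the AR-filtering and re-estimation stage is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulate an ARMA(2,1) process: x_t = 1.2 x_{t-1} - 0.5 x_{t-2} + e_t + 0.4 e_{t-1}
a_true = np.array([1.2, -0.5])
e = rng.standard_normal(20_000)
x = np.zeros_like(e)
for t in range(2, len(x)):
    x[t] = a_true[0] * x[t - 1] + a_true[1] * x[t - 2] + e[t] + 0.4 * e[t - 1]

def autocorr(x, maxlag):
    """Sample autocorrelation at lags 0..maxlag."""
    x = x - x.mean()
    denom = x @ x
    return np.array([x[: len(x) - k] @ x[k:] / denom for k in range(maxlag + 1)])

p, q, extra = 2, 1, 8
r = autocorr(x, q + p + extra)
# Modified Yule-Walker: for lags k > q, r(k) = sum_i a_i r(k-i).
# Stack the equations for k = q+1 .. q+p+extra and solve by least squares.
ks = range(q + 1, q + 1 + p + extra)
A = np.array([[r[k - i] for i in range(1, p + 1)] for k in ks])
b = np.array([r[k] for k in ks])
a_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
rel_err = np.abs((a_hat - a_true) / a_true)   # relative error of the AR estimates
```

Using more equations than unknowns (`extra` > 0) and solving in the least-squares sense is what makes the estimate robust to noise in the sample autocorrelations.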

Relevance: 80.00%

Abstract:

The aim of this paper is to present a photogrammetric method for determining the dimensions of flat surfaces, such as billboards, based on a single digital image. A mathematical model was adapted to generate linear equations for vertical and horizontal lines in the object space. These lines are identified and measured in the image, and the rotation matrix is computed using an indirect method. The distance between the camera and the surface is measured with a lasermeter, providing the coordinates of the camera perspective center. The eccentricity of the lasermeter center relative to the camera perspective center is modeled by three translations, which are computed using a calibration procedure. Experiments were performed to test the proposed method, and the results achieved are within a relative error of about 1% in areas and distances in the object space. This accuracy fulfills the requirements of the intended applications.
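
For intuition, the core single-image geometry reduces, in the special case of a surface parallel to the image plane, to the pinhole relation X = x·Z/f: an image length scales to an object length by the distance-to-focal-length ratio. The sketch below uses hypothetical camera parameters and pixel measurements and omits the rotation-matrix estimation that the paper's full model performs for tilted views:

```python
f_mm = 35.0          # assumed focal length (mm)
pixel_mm = 0.005     # assumed pixel pitch (5 µm)
Z_m = 20.0           # camera-to-surface distance from the lasermeter (m)

def object_length(d_pixels):
    """Object-space length (m) of an image segment, fronto-parallel case."""
    return d_pixels * pixel_mm * Z_m / f_mm

width = object_length(2100)    # hypothetical billboard width in pixels
height = object_length(700)
area = width * height
# Lengths scale linearly with Z, so a 1 % distance error gives ~2 % area error:
rel_err_area = 1.01 ** 2 - 1
```

This also shows why the lasermeter calibration matters: any bias in Z propagates directly into every recovered distance, and doubly into areas.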

Relevance: 80.00%

Abstract:

The aim of this work is to evaluate the influence of point measurements in images with subpixel accuracy and their contribution to the calibration of digital cameras. The effect of subpixel measurements on the 3D coordinates of check points in the object space is also evaluated. For this purpose, an algorithm allowing subpixel accuracy, based on the Förstner operator, was implemented for the semi-automatic determination of points of interest. Experiments were carried out with a block of images acquired with the DuncanTech MS3100-CIR multispectral camera. The influence of subpixel measurements on the adjustment by the Least Squares Method (LSM) was evaluated by comparing the estimated standard deviations of the parameters in both situations: manual measurement (pixel accuracy) and subpixel estimation. Additionally, the influence of subpixel measurements on the 3D reconstruction was analyzed. Based on the obtained results, i.e., on the quantification of the reduction in the standard deviation of the Inner Orientation Parameters (IOP) and in the relative error of the 3D reconstruction, it was shown that measurements with subpixel accuracy are relevant for some photogrammetric tasks, mainly those in which metric quality is of great relevance, such as camera calibration.
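
The abstract does not detail its subpixel estimator; a common, minimal way to obtain subpixel accuracy from a discrete interest-operator response (such as the Förstner measure) is to fit a parabola through the response at three adjacent pixels and take its vertex. This generic refinement is shown only as an illustration of the idea, not as the paper's implementation:

```python
def subpixel_peak(f_left, f_center, f_right):
    """Sub-pixel offset (in pixels, relative to the center sample) of the
    extremum of a 1-D response sampled at three adjacent pixels, taken as
    the vertex of the interpolating parabola."""
    denom = f_left - 2.0 * f_center + f_right
    if denom == 0.0:
        return 0.0           # flat response: no refinement possible
    return 0.5 * (f_left - f_right) / denom

# Response sampled from g(x) = -(x - 0.3)^2 at x = -1, 0, 1: the true peak
# sits 0.3 px right of the central sample, and the parabola recovers it.
g = lambda x: -(x - 0.3) ** 2
offset = subpixel_peak(g(-1.0), g(0.0), g(1.0))
```

In 2-D the same fit is applied independently along rows and columns (or with a full quadratic surface) around the pixel-level maximum of the operator response.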

Relevance: 80.00%

Abstract:

Graduate Program in Mechanical Engineering - FEIS

Relevance: 80.00%

Abstract:

Graduate Program in Agronomy (Energy in Agriculture) - FCA