906 results for EQUATION-ERROR MODELS
Abstract:
Analytical solutions of partial differential equation (PDE) models describing reactive transport phenomena in saturated porous media are often used as screening tools to provide insight into contaminant fate and transport processes. While many practical modelling scenarios involve spatially variable coefficients, such as spatially variable flow velocity, v(x), or spatially variable decay rate, k(x), most analytical models deal with constant coefficients. Here we present a framework for constructing exact solutions of PDE models of reactive transport. Our approach is relevant for advection-dominant problems, and is based on a regular perturbation technique. We present a description of the solution technique for a range of one-dimensional scenarios involving constant and variable coefficients, and we show that the solutions compare well with numerical approximations. Our general approach applies to a range of initial conditions and various forms of v(x) and k(x). Instead of simply documenting specific solutions for particular cases, we present a symbolic worksheet, as supplementary material, which enables the solution to be evaluated for different choices of the initial condition, v(x) and k(x). We also discuss how the technique generalizes to apply to models of coupled multispecies reactive transport as well as higher dimensional problems.
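A hedged sketch of the regular perturbation construction described above (the specific equation, small parameter and scaling below are illustrative assumptions, not taken from the paper): for a one-dimensional advection-decay problem with weak dispersion,

    \partial_t c + v(x)\,\partial_x c + k(x)\,c = \varepsilon\,\partial_{xx} c, \qquad 0 < \varepsilon \ll 1,

expand the solution as c = c_0 + \varepsilon c_1 + \varepsilon^2 c_2 + \cdots and collect powers of \varepsilon:

    O(1):\quad \partial_t c_0 + v(x)\,\partial_x c_0 + k(x)\,c_0 = 0,
    O(\varepsilon):\quad \partial_t c_1 + v(x)\,\partial_x c_1 + k(x)\,c_1 = \partial_{xx} c_0.

Each order is a first-order hyperbolic problem that can be solved sequentially, for example by the method of characteristics, with the diffusion of the previous term providing the forcing for the next correction.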
Abstract:
The solution of the forward equation that models the transport of light through a highly scattering tissue material in diffuse optical tomography (DOT) using the finite element method gives the flux density (Phi) at the nodal points of the mesh. The experimentally measured flux (U-measured) on the boundary over a finite surface area in a DOT system has to be corrected to account for the system transfer functions (R) of the various building blocks of the measurement system. We present two methods to compensate for the perturbations caused by R and to estimate the true flux density (Phi) from U-measured(cal). In the first approach, measurement data from a homogeneous phantom (U-measured(homo)) are used to calibrate the measurement system. The second scheme estimates the homogeneous phantom measurement using only the measurement from a heterogeneous phantom, thereby eliminating the need for a homogeneous phantom. This is done by statistically averaging the data (U-measured(hetero)) and redistributing it to the corresponding detector positions. Experiments carried out on tissue-mimicking phantoms with single and multiple inhomogeneities, a human hand, and a pork tissue phantom demonstrate the robustness of the approach. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE) [DOI: 10.1117/1.JBO.18.2.026023]
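A minimal numerical sketch of the first (homogeneous-phantom) calibration idea, assuming the system response of each source-detector channel can be treated as a multiplicative factor; the array names and the element-wise model are illustrative assumptions, not the authors' implementation:

    import numpy as np

    def calibration_factors(U_measured_homo, Phi_homo_model):
        """Per-channel transfer factors R estimated from a homogeneous phantom,
        assuming measured flux = R * model-predicted flux density (element-wise)."""
        return U_measured_homo / Phi_homo_model

    def calibrate(U_measured_hetero, R):
        """Remove the estimated system response from heterogeneous-phantom data."""
        return U_measured_hetero / R

    # Illustrative per-channel values (arbitrary units).
    Phi_homo_model    = np.array([1.00, 0.80, 0.65, 0.50])   # forward-model flux density
    U_measured_homo   = np.array([2.10, 1.55, 1.20, 1.05])   # measured on homogeneous phantom
    U_measured_hetero = np.array([1.90, 1.60, 1.00, 0.95])   # measured on heterogeneous phantom

    R = calibration_factors(U_measured_homo, Phi_homo_model)
    print(calibrate(U_measured_hetero, R))                   # estimate of the true flux density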
Abstract:
The spatial error structure of daily precipitation derived from the latest version 7 (v7) Tropical Rainfall Measuring Mission (TRMM) level 2 data products is studied through comparison with the Asian Precipitation - Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE) data over a subtropical region of the Indian subcontinent, for the seasonal rainfall over 6 years from June 2002 to September 2007. The data products examined include v7 data from the TRMM Microwave Imager (TMI) radiometer and the precipitation radar (PR), namely 2A12, 2A25, and 2B31 (combined data from PR and TMI). The spatial distribution of uncertainty in these data products was quantified based on performance metrics derived from the contingency table. For the seasonal daily precipitation over a subtropical basin in India, the 2A12 data product showed greater skill in detecting and quantifying the volume of rainfall than the 2A25 and 2B31 data products. Error characterization using various error models revealed that random errors from multiplicative error models were homoscedastic and that they better represented rainfall estimates from the 2A12 algorithm. Error decomposition techniques, performed to disentangle systematic and random errors, verify that the multiplicative error model representing rainfall from the 2A12 algorithm captured a greater percentage of the systematic error than those for the 2A25 or 2B31 algorithms. The results verify that, although the radiometer-derived 2A12 rainfall data are known to suffer from many sources of uncertainty, spatial analysis over the case study region of India shows that the 2A12 rainfall estimates are in very good agreement with the reference estimates for the data period considered.
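A small sketch of the diagnostics mentioned, contingency-table skill scores and a multiplicative (log-space) error model fit; the rain/no-rain threshold, variable names and the simple power-law regression form are assumptions for illustration:

    import numpy as np

    def contingency_metrics(sat, ref, threshold=1.0):
        """Probability of detection, false-alarm ratio and critical success index
        for daily rain/no-rain detection at a given threshold (mm/day)."""
        hits         = np.sum((sat >= threshold) & (ref >= threshold))
        misses       = np.sum((sat <  threshold) & (ref >= threshold))
        false_alarms = np.sum((sat >= threshold) & (ref <  threshold))
        pod = hits / (hits + misses)
        far = false_alarms / (hits + false_alarms)
        csi = hits / (hits + misses + false_alarms)
        return pod, far, csi

    def multiplicative_error_fit(sat, ref):
        """Fit ref = a * sat**b * eps by least squares in log space; the residuals
        log(eps) are the (ideally homoscedastic) random error component."""
        mask = (sat > 0) & (ref > 0)
        x, y = np.log(sat[mask]), np.log(ref[mask])
        b, log_a = np.polyfit(x, y, 1)
        return np.exp(log_a), b, y - (log_a + b * x)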
Abstract:
The statistical-mechanics theory of the passive scalar field convected by turbulence, developed in an earlier paper [Phys. Fluids 28, 1299 (1985)], is extended to the case of a small molecular Prandtl number. The set of governing integral equations is solved by the equation-error method. The resultant scalar-variance spectrum for the inertial range is F(k) ~ x^{-5/3} / [1 + 1.21 x^{1.67} (1 + 0.353 x^{2.32})], where x is the wavenumber scaled by Corrsin's dissipation wavenumber. This result reduces to the -5/3 law in the inertial-convective range. It also approximately reduces to the -17/3 law in the inertial-diffusive range, but the proportionality constant differs from Batchelor's by a factor of 3.6.
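As a quick numerical check of the two limiting laws quoted above (an illustrative script, not part of the paper), the local logarithmic slope of F approaches -5/3 at small scaled wavenumber and -(5/3 + 1.67 + 2.32), which is close to -17/3, at large scaled wavenumber:

    import numpy as np

    def F(x):
        """Scalar-variance spectrum quoted in the abstract (x = scaled wavenumber)."""
        return x**(-5.0 / 3.0) / (1.0 + 1.21 * x**1.67 * (1.0 + 0.353 * x**2.32))

    def local_slope(x, dx=1e-4):
        """Local logarithmic slope d ln F / d ln x, estimated by finite differences."""
        return (np.log(F(x * (1.0 + dx))) - np.log(F(x))) / np.log(1.0 + dx)

    print(local_slope(1e-3))   # ~ -5/3: inertial-convective range
    print(local_slope(1e+3))   # ~ -(5/3 + 1.67 + 2.32) = -5.66, close to -17/3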
Abstract:
A technique for obtaining approximate periodic solutions to nonlinear ordinary differential equations is investigated. The approach is based on defining an equivalent differential equation whose exact periodic solution is known. Emphasis is placed on the mathematical justification of the approach. The relationship between the differential equation error and the solution error is investigated, and, under certain conditions, bounds are obtained on the latter. The technique employed is to consider the equation governing the exact solution error as a two point boundary value problem. Among other things, the analysis indicates that if an exact periodic solution to the original system exists, it is always possible to bound the error by selecting an appropriate equivalent system.
Three equivalence criteria for minimizing the differential equation error are compared, namely, minimum mean square error, minimum mean absolute value error, and minimum maximum absolute value error. The problem is analyzed by way of example, and it is concluded that, on the average, the minimum mean square error is the most appropriate criterion to use.
A comparison is made between the use of linear and cubic auxiliary systems for obtaining approximate solutions. In the examples considered, the cubic system provides noticeable improvement over the linear system in describing periodic response.
A comparison of the present approach to some of the more classical techniques is included. It is shown that certain of the standard approaches where a solution form is assumed can yield erroneous qualitative results.
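As an illustration of the minimum mean-square differential-equation-error criterion with a linear auxiliary system, the sketch below (in Python/SymPy; the undamped Duffing oscillator and the single-harmonic auxiliary solution are stand-ins chosen here, not examples taken from the text) recovers the classical equivalent-linearization frequency:

    import sympy as sp

    # Original nonlinear equation: x'' + x + eps*x**3 = 0 (stand-in example).
    # Auxiliary linear equation:   x'' + w**2*x = 0, exact periodic solution A*cos(w*t).
    A, eps, w, t = sp.symbols('A eps w t', positive=True)
    x = A * sp.cos(w * t)

    # Differential-equation error: residual left when the auxiliary solution
    # is substituted into the original equation.
    residual = sp.diff(x, t, 2) + x + eps * x**3

    # Minimum mean-square error criterion over one period of the auxiliary solution.
    T = 2 * sp.pi / w
    mse = sp.integrate(residual**2, (t, 0, T)) / T
    w_opt = sp.solve(sp.diff(mse, w), w)

    print(sp.simplify(w_opt[0]**2))   # 3*A**2*eps/4 + 1, the classical equivalent frequency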
Abstract:
An analysis based on the Taylor series is presented to estimate, a priori, the errors involved in the numerical solution of the one-dimensional advection equation with a source term using the Finite Volume Method on a uniform mesh and on a non-uniform mesh. An a posteriori study is also carried out to verify the magnitude of the discretization error and to corroborate the results obtained from the a priori analysis. The manufactured solution technique provides an analytical solution to the problem, which facilitates the analysis of the numerical results obtained, and the influence of the UDS and CDS interpolation functions and of the parameter u on the numerical solution is also studied.
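A minimal sketch of the a posteriori side of such a study, assuming a deliberately simple first-order upwind (UDS-like) finite-volume discretization of a steady one-dimensional advection equation with a manufactured source term; the model problem, scheme and meshes are illustrative stand-ins, not those of the paper. The observed order of accuracy recovered from two mesh refinements can then be compared with the a priori Taylor-series estimate:

    import numpy as np

    def solve_upwind(n, u=1.0):
        """First-order upwind (UDS-like) finite-volume scheme for u*dc/dx = S(x) on (0, 1),
        c(0) = 0, with S manufactured from the exact solution c_exact(x) = sin(pi*x)."""
        dx = 1.0 / n
        x = (np.arange(n) + 0.5) * dx           # cell centres
        S = u * np.pi * np.cos(np.pi * x)       # manufactured source term
        c = np.zeros(n)
        for i in range(n):
            upstream = 0.0 if i == 0 else c[i - 1]   # boundary value c(0) = 0
            c[i] = upstream + dx * S[i] / u
        return x, c

    def observed_order(n):
        """Observed order of accuracy from two meshes with refinement ratio 2."""
        errors = []
        for m in (n, 2 * n):
            x, c = solve_upwind(m)
            errors.append(np.max(np.abs(c - np.sin(np.pi * x))))
        return np.log(errors[0] / errors[1]) / np.log(2.0)

    print(observed_order(64))   # close to 1, the a priori order expected for the UDS scheme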
Abstract:
A study of the numerical solution of the diffusion-with-retention model proposed by Bevilacqua et al. (2011) is presented, together with an implicit formulation of the inverse problem for estimating the parameters involved in the mathematical formulation of the model. Through a detailed sensitivity analysis and the computation of the Pearson correlation coefficient, the prospects of successfully solving the inverse problem are assessed for the deterministic Levenberg-Marquardt method and for the stochastic Particle Collision Algorithm (PCA) and Differential Evolution (DE) methods. The results obtained with these three optimization methods are presented for three parameter-set cases. A strong correlation was observed between two of the three parameters, which made their simultaneous estimation difficult; however, the individual estimation of each parameter was successful. Good results were obtained for the factors multiplying the differential terms of the equation that models the diffusion-with-retention phenomenon.
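A minimal sketch of two of the ingredients mentioned, checking the Pearson correlation between sensitivity coefficients and running a deterministic Levenberg-Marquardt estimation with SciPy; the two-exponential forward model below is a generic stand-in, not the diffusion-with-retention model itself:

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.stats import pearsonr

    # Stand-in two-parameter forward model (NOT the diffusion-with-retention model).
    def model(params, t):
        k1, k2 = params
        return np.exp(-k1 * t) + np.exp(-k2 * t)

    t = np.linspace(0.1, 5.0, 50)
    true_params = np.array([0.8, 1.2])
    rng = np.random.default_rng(0)
    data = model(true_params, t) + 0.01 * rng.standard_normal(t.size)

    # Sensitivity analysis: a Pearson correlation near 1 between the sensitivity
    # columns signals that simultaneous estimation of the two parameters is difficult.
    eps = 1e-6
    J = np.column_stack([
        (model(true_params + eps * np.eye(2)[i], t) - model(true_params, t)) / eps
        for i in range(2)
    ])
    print("correlation of sensitivities:", pearsonr(J[:, 0], J[:, 1])[0])

    # Deterministic estimation with the Levenberg-Marquardt algorithm.
    fit = least_squares(lambda p: model(p, t) - data, x0=[0.5, 1.5], method='lm')
    print("estimated parameters:", fit.x)

Strongly correlated sensitivity columns are exactly the situation described above, in which simultaneous estimation becomes difficult while individual estimation of each parameter may still succeed.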
Abstract:
The prediction of turbulent oscillatory flow at around transitional Reynolds numbers is considered for an idealized electronics system. To assess the accuracy of turbulence models, comparison is made with measurements. A stochastic procedure is used to recover instantaneous velocity time traces from the predictions. This procedure enables more direct comparison with turbulence intensity measurements that have not been filtered to remove the oscillatory flow component. Normal wall distances, required in some turbulence models, are evaluated using a modified Poisson-equation-based technique. A range of zero-, one- and two-equation turbulence models is tested, including zonal and non-linear eddy viscosity models. The non-linear and zonal models showed potential for accuracy improvements.
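The Poisson-equation route to wall distance mentioned above can be illustrated in one dimension; the formula used below is a commonly quoted variant (solve phi'' = -1 with phi = 0 on the walls, then d = sqrt(|grad phi|^2 + 2 phi) - |grad phi|), which may differ from the specific modification used in the paper:

    import numpy as np

    # Channel of height H with walls at x = 0 and x = H.
    H = 1.0
    x = np.linspace(0.0, H, 11)

    # Analytical solution of phi'' = -1 with phi(0) = phi(H) = 0.
    phi = 0.5 * x * (H - x)
    dphi = 0.5 * H - x                       # d(phi)/dx

    # Poisson-based wall-distance estimate (exact in this 1-D case).
    d = np.sqrt(dphi**2 + 2.0 * phi) - np.abs(dphi)
    print(np.allclose(d, np.minimum(x, H - x)))   # True: recovers distance to nearest wall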
Abstract:
The accuracy analysis and accuracy synthesis of a 3-DOF cable-driven parallel robot are studied. The main factors affecting the pose accuracy of the robot are analyzed, and joint-error and cable-error models are derived. An accuracy synthesis algorithm is proposed and, based on the given joint tolerances of the robot, the maximum admissible values of the assembly errors and cable errors are determined. Simulations verify the correctness of the error models, and prototype experiments show that the cable-error model can be used to improve the motion accuracy of the robot.
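A generic first-order error model of the kind referred to, in which small joint and cable-length errors are mapped to pose errors through a Jacobian; the 3x3 matrix, tolerances and error magnitudes below are placeholders for illustration, not the model derived in the paper:

    import numpy as np

    # Placeholder Jacobian mapping joint/cable-length errors to pose errors
    # for a 3-DOF mechanism (values are illustrative only).
    J = np.array([[ 0.9, -0.2,  0.1],
                  [ 0.1,  1.1, -0.3],
                  [-0.2,  0.3,  0.8]])

    # Assumed 1-sigma joint and cable-length errors.
    sigma_q = np.array([1e-4, 1e-4, 2e-4])

    # First-order propagation: pose-error covariance and its standard deviations.
    cov_pose = J @ np.diag(sigma_q**2) @ J.T
    print(np.sqrt(np.diag(cov_pose)))

    # Accuracy-synthesis view: worst-case pose error for given tolerances |dq_i| <= tol_i.
    tol_q = np.array([2e-4, 2e-4, 5e-4])
    print(np.abs(J) @ tol_q)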
Abstract:
Motivated by the requirements of a special measurement environment, a parallel combined measurement method for measuring the pose of planar motion is proposed. The composition of the measuring mechanism and the measuring principle of the method are described, and its feasibility is demonstrated. By establishing an error model, the mapping between the geometric error sources and the raw measurement parameters, and their influence on the final pose measurement error, are analyzed; simulation results and measurement data from practical applications confirm the correctness of the analysis. The proposed parallel combined measurement method is novel in concept and sound in structure, and it is suitable for high-precision pose measurement of large-range planar motion under certain special measurement conditions. If the influence of the geometric error sources is effectively controlled in engineering applications, the method has good potential for wider adoption.
Abstract:
A new three-degree-of-freedom planar combined sensor device for pose measurement is designed to perform dynamic measurement of the two translational degrees of freedom and one rotational degree of freedom of planar motion. The mechanical composition and measuring principle of the sensor are described; an error model is established using the differential method, the sources of error are analyzed, and curves describing the influence of mechanism errors on measurement accuracy are obtained. Experiments and simulations verify the soundness of the new planar combined sensor mechanism. The new planar combined sensor is simple in structure, offers high measurement accuracy, and is suitable for high-precision pose measurement of planar motion in specific environments.
Abstract:
Alignment is a prevalent approach for recognizing 3D objects in 2D images. A major problem with current implementations is how to robustly handle errors that propagate from uncertainties in the locations of image features. This thesis gives a technique for bounding these errors. The technique makes use of a new solution to the problem of recovering 3D pose from three matching point pairs under weak-perspective projection. Furthermore, the error bounds are used to demonstrate that using line segments for features instead of points significantly reduces the false positive rate, to the extent that alignment can remain reliable even in cluttered scenes.
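For reference, the weak-perspective camera model underlying such three-point pose results projects a 3-D point by rotating it, dropping depth, and applying a uniform scale and a 2-D translation; the sketch below shows only this assumed projection model, not the thesis's pose-recovery solution or error bounds:

    import numpy as np

    def weak_perspective_project(P, R, s, t):
        """Project 3-D points P (N x 3): rotate, keep the first two coordinates,
        then apply a uniform scale s and a 2-D translation t."""
        return s * (P @ R.T)[:, :2] + t

    # Illustrative pose: rotation about the z-axis, plus scale and translation.
    theta = np.deg2rad(30.0)
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    P = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.5],
                  [0.0, 1.0, -0.5]])
    print(weak_perspective_project(P, R, s=2.0, t=np.array([10.0, 20.0])))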
Abstract:
A technique for optimizing the efficiency of the sub-map method for large-scale simultaneous localization and mapping (SLAM) is proposed. It exploits the benefits of the sub-map technique to improve the accuracy and consistency of an extended Kalman filter (EKF)-based SLAM. Error models were developed and used to investigate some of the outstanding issues in employing the sub-map technique in SLAM. Such issues include the size (distance) of an optimal sub-map, the acceptable error effect caused by the process noise covariance on the predictions and estimations made within a sub-map, when to terminate an existing sub-map and start a new one, and the magnitude of the process noise covariance that could produce such an effect. Numerical results obtained from the study and an error-correcting process were used to optimize the accuracy and convergence of the previously proposed Invariant Information Local Sub-map Filter. Applying this technique to the EKF-based SLAM algorithm (a) reduces the computational burden of maintaining the global map estimates and (b) simplifies the transformation complexities and data association ambiguities usually experienced in fusing sub-maps together. A Monte Carlo analysis of the system is presented as a means of demonstrating the consistency and efficacy of the proposed technique.
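One quantity discussed above, the growth of prediction uncertainty with the process noise covariance inside a sub-map, can be sketched with the standard EKF covariance prediction step; the matrices and the termination threshold below are placeholders, not the Invariant Information Local Sub-map Filter itself:

    import numpy as np

    # Simple planar state [x, y, heading]; near-identity motion model assumed.
    F = np.eye(3)                             # state-transition Jacobian (placeholder)
    Q = np.diag([0.01, 0.01, 0.001])          # process-noise covariance (placeholder)
    P = np.diag([1e-4, 1e-4, 1e-5])           # initial covariance inside a new sub-map

    # EKF covariance prediction repeated over the steps taken inside one sub-map;
    # the trace of P is used here as a crude indicator of accumulated uncertainty.
    for step in range(1, 51):
        P = F @ P @ F.T + Q
        if np.trace(P) > 0.5:                 # assumed threshold for closing the sub-map
            print("start a new sub-map at step", step)
            break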
Abstract:
This paper proposes finite-sample procedures for testing the SURE specification in multi-equation regression models, i.e. whether the disturbances in different equations are contemporaneously uncorrelated or not. We apply the technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] to obtain exact tests based on standard LR and LM zero correlation tests. We also suggest a MC quasi-LR (QLR) test based on feasible generalized least squares (FGLS). We show that the latter statistics are pivotal under the null, which provides the justification for applying MC tests. Furthermore, we extend the exact independence test proposed by Harvey and Phillips (1982) to the multi-equation framework. Specifically, we introduce several induced tests based on a set of simultaneous Harvey/Phillips-type tests and suggest a simulation-based solution to the associated combination problem. The properties of the proposed tests are studied in a Monte Carlo experiment which shows that standard asymptotic tests exhibit important size distortions, while MC tests achieve complete size control and display good power. Moreover, MC-QLR tests performed best in terms of power, a result of interest from the point of view of simulation-based tests. The power of the MC induced tests improves appreciably in comparison to standard Bonferroni tests and, in certain cases, outperforms the likelihood-based MC tests. The tests are applied to data used by Fischer (1993) to analyze the macroeconomic determinants of growth.
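The Monte Carlo test technique of Dwass (1957) and Barnard (1963) referred to above can be sketched in a few lines: when the statistic is pivotal under the null, an exact p-value is obtained by ranking the observed statistic among statistics simulated under the null. The toy zero-correlation statistic below is an illustrative stand-in, not the paper's LR, LM or QLR implementation:

    import numpy as np

    def mc_pvalue(observed_stat, simulate_stat_under_null, n_rep=99, seed=0):
        """Exact Monte Carlo p-value for a pivotal statistic (large values = evidence against H0)."""
        rng = np.random.default_rng(seed)
        sims = np.array([simulate_stat_under_null(rng) for _ in range(n_rep)])
        return (1 + np.sum(sims >= observed_stat)) / (n_rep + 1)

    # Toy statistic: absolute sample correlation between the disturbances of two equations.
    def sim_stat(rng, n=50):
        u1, u2 = rng.standard_normal(n), rng.standard_normal(n)
        return abs(np.corrcoef(u1, u2)[0, 1])

    observed = sim_stat(np.random.default_rng(123))   # stands in for the estimated residual correlation
    print(mc_pvalue(observed, sim_stat))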
Abstract:
The objective of this thesis is to present multivariate time series models involving random vectors whose components are all non-negative. We consider the vMEM models (vector multiplicative error models with non-negative errors) presented by Cipollini, Engle and Gallo (2006) and Cipollini and Gallo (2010). These models are a multivariate generalization of the MEM models introduced by Engle (2002) and find applications notably with financial time series. vMEM models can be used to model time series of traded volumes, durations and conditional variances, to name only these applications. They also make it possible to model the series jointly and to study the dynamics between the time series forming the system under study. In order to model multivariate time series with non-negative components, several specifications of the vector error term have been proposed in the literature. A first approach is to use random vectors whose error distribution is such that each component is non-negative. However, finding a sufficiently flexible multivariate distribution defined on the non-negative support is rather difficult, at least for the applications cited above. As indicated by Cipollini, Engle and Gallo (2006), one possible candidate is a multivariate gamma distribution, which however imposes severe restrictions on the contemporaneous correlations between the variables. Given these limited possibilities, another approach is to use copula theory: marginal distributions with non-negative support are specified, and a copula function accounts for the dependence between the components. One possible estimation technique is maximum likelihood. An alternative is the generalized method of moments (GMM), which has the advantage of being semi-parametric in the sense that, unlike the approach imposing a multivariate law, it does not require specifying a multivariate distribution for the error term. In general, the estimation of vMEM models is complicated: existing algorithms must handle the large number of parameters and the elaborate form of the likelihood function, and GMM estimation also requires solvers for non-linear systems. In this thesis, considerable effort was devoted to developing computer code (in the R language) to estimate the various parameters of the models. In the first chapter, we define stationary processes, autoregressive processes, autoregressive conditionally heteroskedastic (ARCH) processes and generalized ARCH (GARCH) processes; we also present the ACD duration models and the MEM models. In the second chapter, we present the copula theory needed for our work, in the framework of vector multiplicative error models with non-negative errors (vMEM), and we discuss possible estimation methods. In the third chapter, we discuss the simulation results for several estimation methods.
In the last chapter, applications to financial series are presented. The R code is provided in an appendix. A conclusion completes the thesis.
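The copula construction of the non-negative vector error term described above can be sketched as follows: unit-mean gamma margins tied together by a Gaussian copula. The thesis's code is in R; this is an illustrative Python sketch with arbitrary margin shapes and copula correlation:

    import numpy as np
    from scipy import stats

    def simulate_vmem_errors(n, shapes=(4.0, 6.0), rho=0.5, seed=0):
        """Draw n bivariate error vectors with unit-mean gamma margins and a Gaussian copula
        (illustrative margin shapes and copula correlation)."""
        rng = np.random.default_rng(seed)
        cov = np.array([[1.0, rho], [rho, 1.0]])
        z = rng.multivariate_normal(np.zeros(2), cov, size=n)     # correlated normals
        u = stats.norm.cdf(z)                                     # copula scale (uniforms)
        return np.column_stack([
            stats.gamma.ppf(u[:, j], a=shapes[j], scale=1.0 / shapes[j])   # unit-mean margins
            for j in range(2)
        ])

    eps = simulate_vmem_errors(10000)
    print(eps.mean(axis=0))            # close to (1, 1), as required for a multiplicative error
    print(np.corrcoef(eps.T)[0, 1])    # positive contemporaneous correlation induced by the copula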