976 results for Iterative methods (mathematics)
Abstract:
This paper addresses the estimation of the code phase (pseudorange) and the carrier phase of the direct signal received from a direct-sequence spread-spectrum satellite transmitter. The signal is received by an antenna array in a scenario with interference and multipath propagation. These two effects are generally the limiting error sources in most high-precision positioning applications. A new estimator of the code and carrier phases is derived by using a simplified signal model and the maximum likelihood (ML) principle. The simplified model consists essentially of gathering all signals, except for the direct one, in a component with unknown spatial correlation. The estimator exploits the knowledge of the direction-of-arrival of the direct signal and is much simpler than other estimators derived under more detailed signal models. Moreover, we present an iterative algorithm that is adequate for a practical implementation and explores an interesting link between the ML estimator and a hybrid beamformer. The mean squared error and bias of the new estimator are computed for a number of scenarios and compared with those of other methods. The presented estimator and the hybrid beamforming outperform the existing techniques of comparable complexity and attain, in many situations, the Cramér–Rao lower bound of the problem at hand.
Abstract:
In this letter, we obtain the Maximum Likelihood Estimator of position in the framework of Global Navigation Satellite Systems. This theoretical result is the basis of a completely different approach to the positioning problem, in contrast to the conventional two-step position estimation, which consists of estimating the synchronization parameters of the in-view satellites and then performing a position estimation with that information. To the authors' knowledge, this is a novel approach which copes with signal fading and mitigates multipath and jamming interference. Besides, the concept of Position-based Synchronization is introduced, which states that synchronization parameters can be recovered from a user position estimation. We provide computer simulation results showing the robustness of the proposed approach in fading multipath channels. The Root Mean Square Error performance of the proposed algorithm is compared to those achieved with state-of-the-art synchronization techniques. A Sequential Monte Carlo based method is used to deal with the multivariate optimization problem resulting from the ML solution in an iterative way.
Abstract:
Objective: The present study is aimed at contributing to identify the most appropriate OSEM parameters to generate myocardial perfusion imaging reconstructions with the best diagnostic quality, correlating them with patients' body mass index. Materials and Methods: The present study included 28 adult patients submitted to myocardial perfusion imaging in a public hospital. The OSEM method was utilized in the image reconstructions with six different combinations of numbers of iterations and subsets. The images were analyzed by nuclear cardiology specialists, taking their diagnostic value into consideration and indicating the most appropriate images in terms of diagnostic quality. Results: An overall scoring analysis demonstrated that the combination of four iterations and four subsets generated the most appropriate images in terms of diagnostic quality for all the classes of body mass index; however, the role played by the combination of six iterations and four subsets is highlighted in relation to the higher body mass index classes. Conclusion: The use of optimized parameters seems to play a relevant role in the generation of images with better diagnostic quality, ensuring the diagnosis and the consequent appropriate and effective treatment for the patient.
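As a minimal sketch of how an OSEM (ordered-subset expectation maximization) iteration cycles through subsets of the measured data, the toy below uses a hypothetical 4x2 system matrix and noiseless counts; it is not a clinical reconstruction pipeline, only an illustration of the multiplicative subset update:

```python
# Toy OSEM sketch: 2-pixel image, 4 detector bins, 2 subsets of 2 bins.
# A is a hypothetical system matrix, y the (noiseless) measured counts.
A = [[1.0, 0.0],
     [0.0, 1.0],
     [0.5, 0.5],
     [0.5, 0.5]]
x_true = [2.0, 4.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(4)]

subsets = [[0, 1], [2, 3]]  # ordered subsets of detector bins
x = [1.0, 1.0]              # uniform initial image

for _ in range(4):                      # full iterations
    for S in subsets:                   # one sub-iteration per subset
        fp = [sum(A[i][j] * x[j] for j in range(2)) for i in S]  # forward projection
        for j in range(2):
            num = sum(A[i][j] * y[i] / fp[k] for k, i in enumerate(S))
            den = sum(A[i][j] for i in S)
            x[j] *= num / den           # multiplicative EM update

print(x)  # recovers x_true on this consistent toy problem
```

Each pass over all subsets costs roughly one classical MLEM iteration but applies as many image updates as there are subsets, which is why the iteration/subset combination drives reconstruction quality.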
Abstract:
Gasification of biomass is an efficient process to produce liquid fuels, heat and electricity. It is especially interesting for the Nordic countries, where raw material for the processes is readily available. The thermal reactions of light hydrocarbons are a major challenge for industrial applications. At elevated temperatures, light hydrocarbons react spontaneously to form higher molecular weight compounds. In this thesis, this phenomenon was studied by a literature survey, experimental work and a modeling effort. The literature survey revealed that the change in tar composition is likely caused by the kinetic entropy. The role of the surface material is deemed to be an important factor in the reactivity of the system. The experimental results were in accordance with previous publications on the subject. The novelty of the experimental work lies in the time interval used for measurements, combined with an industrially relevant temperature interval. The aspects covered in the modeling include screening of possible numerical approaches, testing of optimization methods and kinetic modeling. No significant numerical issues were observed, so the calculation routines used are adequate for the task. Evolutionary algorithms gave better performance and better fit than conventional iterative methods such as the Simplex and Levenberg-Marquardt methods. Three models were fitted to the experimental data. The LLNL model was used as a reference model to which the two other models were compared. A compact model which included all the observed species was developed. The parameter estimation performed on that model gave a slightly worse fit to the experimental data than the LLNL model, but the difference was barely significant. The third tested model concentrated on the decomposition of hydrocarbons and included a theoretical description of the formation of a carbon layer on the reactor walls. The fit to the experimental data was extremely good.
Based on the simulation results and literature findings, it is likely that the surface coverage of carbonaceous deposits is a major factor in thermal reactions.
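To illustrate the kind of evolutionary parameter estimation compared above against Simplex and Levenberg-Marquardt, here is a minimal (1+1) evolution strategy recovering a single rate constant from a hypothetical first-order decay model; the model, data, and settings are illustrative, not the thesis's kinetic models:

```python
import math
import random

# Hypothetical first-order decay y = exp(-k*t); recover k from synthetic
# data with a minimal (1+1) evolution strategy: mutate, keep improvements.
t_data = [0.0, 1.0, 2.0, 4.0, 8.0]
k_true = 0.3
y_data = [math.exp(-k_true * t) for t in t_data]

def sse(k):
    """Sum of squared errors of the model at rate constant k."""
    return sum((y - math.exp(-k * t)) ** 2 for t, y in zip(t_data, y_data))

random.seed(0)
k, best, sigma = 0.8, sse(0.8), 0.2
for _ in range(400):
    cand = k + random.gauss(0.0, sigma)
    f = sse(cand)
    if f < best:            # elitist selection: keep only improvements
        k, best = cand, f
    sigma *= 0.995          # slowly shrink the mutation step

print(k)  # close to k_true
```

Unlike gradient-based iterative methods, this search needs only objective evaluations, which is one reason evolutionary algorithms can be more robust on stiff or poorly scaled kinetic fits.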
Abstract:
Epipolar geometry is a key concept in computer vision, and estimating the fundamental matrix is the only way to compute it. This article surveys several methods of fundamental matrix estimation, which have been classified into linear methods, iterative methods and robust methods. All of these methods have been programmed and their accuracy analysed using real images. A summary, accompanied by experimental results, is given.
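The linear methods mentioned above rest on the epipolar constraint x'ᵀFx = 0: each point correspondence gives one linear equation in the nine entries of F. The sketch below demonstrates only this constraint, using a hypothetical rank-2 matrix and synthetic correspondences (a full estimator would additionally solve the resulting homogeneous system, e.g. by SVD):

```python
# Each correspondence (x, x') with x'^T F x = 0 yields one linear equation
# in the 9 entries of F -- the basis of the linear (eight-point) methods.
# F below is a hypothetical rank-2 matrix, not estimated from real images.
F = [[0.0, -1.0, 0.5],
     [1.0,  0.0, -0.2],
     [-0.5, 0.2, 0.0]]

def epiline(F, x):
    """Epipolar line l' = F x in the second image."""
    return [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]

# Build correspondences: pick x, then choose x' on its epipolar line l'.
pairs = []
for u, v in [(0.0, 0.0), (1.0, 2.0), (3.0, -1.0), (-2.0, 4.0)]:
    x = [u, v, 1.0]
    a, b, c = epiline(F, x)
    xp = [1.0, -(a * 1.0 + c) / b, 1.0]   # a point with u' = 1 on the line (b != 0)
    pairs.append((x, xp))

# One design-matrix row per pair: kron(x', x) . vec(F) = 0
rows = [[xp[i] * x[j] for i in range(3) for j in range(3)] for x, xp in pairs]
f_vec = [F[i][j] for i in range(3) for j in range(3)]
residuals = [sum(r * f for r, f in zip(row, f_vec)) for row in rows]
print(residuals)  # all ~0: the true F satisfies every linear constraint
```

Iterative and robust methods start from such a linear solution and refine it against a geometric error or reject outlier correspondences.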
Abstract:
The computational approach to the Hirshfeld [Theor. Chim. Acta 44, 129 (1977)] atom in a molecule is critically investigated, and several difficulties are highlighted. It is shown that these difficulties are mitigated by an alternative, iterative version of the Hirshfeld partitioning procedure. The iterative scheme ensures that the Hirshfeld definition represents a mathematically proper information entropy, allows the Hirshfeld approach to be used for charged molecules, eliminates arbitrariness in the choice of the promolecule, and increases the magnitudes of the charges. The resulting "Hirshfeld-I charges" correlate well with electrostatic-potential-derived atomic charges.
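The self-consistency idea behind the iterative scheme can be sketched on a toy 1D model: two "atoms" with hypothetical exponential shape functions, where each proatom density is rescaled to the atom's current population and the populations are re-integrated until they stop changing. Real Hirshfeld-I uses ab initio proatom densities interpolated between integer charge states; this is only the fixed-point structure:

```python
import math

# Toy 1D Hirshfeld-I-style iteration. Shapes and densities are hypothetical.
xs = [i * 0.01 - 10.0 for i in range(2001)]   # grid on [-10, 10]
dx = 0.01

def shape(x, center, zeta):
    """Unit-normalized exponential shape 0.5*zeta*exp(-zeta*|x - center|)."""
    return 0.5 * zeta * math.exp(-zeta * abs(x - center))

sA = [shape(x, -1.0, 1.0) for x in xs]
sB = [shape(x, +1.0, 2.0) for x in xs]
rho = [2.0 * a + 2.0 * b for a, b in zip(sA, sB)]   # "molecular" density, N = 4

NA = NB = 2.0                                        # initial populations
for _ in range(100):
    # Hirshfeld weight of atom A uses proatoms scaled to current populations
    NA_new = sum(NA * a / (NA * a + NB * b) * r for a, b, r in zip(sA, sB, rho)) * dx
    NB_new = sum(NB * b / (NA * a + NB * b) * r for a, b, r in zip(sA, sB, rho)) * dx
    NA, NB = NA_new, NB_new

print(NA, NB)  # self-consistent populations; they sum to the total density
```

At convergence the promolecule is no longer an arbitrary choice: it is determined by the molecule's own density through the fixed point.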
Abstract:
4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function, which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis, the model error formulation and state estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised `inner-loop' objective function, which upon convergence, updates the solution of the non-linear `outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter while iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure for the sensitivity of the problem to input data. The condition number can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. We show that both formulations' sensitivities are related to error variance balance, assimilation window length and correlation length-scales using the bounds.
We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
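The inner/outer structure of the Gauss-Newton approach can be shown on a one-parameter toy least-squares problem (hypothetical model and data, far simpler than 4DVAR): the outer loop relinearises the model, and the inner linearised least-squares problem is scalar here, so its normal equation solves in closed form:

```python
import math

# Toy Gauss-Newton: fit a in y = exp(a*t) to synthetic, noiseless data.
t_obs = [0.0, 0.5, 1.0, 1.5]
a_true = 0.7
y_obs = [math.exp(a_true * t) for t in t_obs]

a = 0.5                                   # outer-loop state
for _ in range(30):                       # outer loop: relinearise the model
    r = [y - math.exp(a * t) for t, y in zip(t_obs, y_obs)]   # residuals
    J = [t * math.exp(a * t) for t in t_obs]                  # Jacobian entries
    # Inner loop: the linearised problem min ||J*da - r||^2 is scalar,
    # so the normal equation (J^T J) da = J^T r solves directly.
    da = sum(Ji * ri for Ji, ri in zip(J, r)) / sum(Ji * Ji for Ji in J)
    a += da                               # outer-loop update

print(a)  # converges to a_true
```

The matrix JᵀJ is the Gauss-Newton approximation to the Hessian; its conditioning governs how hard the inner problem is to solve iteratively, which is exactly the sensitivity studied in the thesis.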
Abstract:
Sociable robots are embodied agents that are part of a heterogeneous society of robots and humans. They should be able to recognize human beings and each other, and to engage in social interactions. The use of a robotic architecture may strongly reduce the time and effort required to construct a sociable robot. Such an architecture must have structures and mechanisms to allow social interaction, behavior control and learning from the environment. Learning processes described in the Science of Behavior Analysis may lead to the development of promising methods and structures for constructing robots able to behave socially and learn through interactions with the environment by a process of contingency learning. In this paper, we present a robotic architecture inspired by Behavior Analysis. Methods and structures of the proposed architecture, including a hybrid knowledge representation, are presented and discussed. The architecture has been evaluated in the context of a nontrivial real problem: the learning of shared attention, employing an interactive robotic head. The learning capabilities of this architecture have been analyzed by observing the robot interacting with the human and the environment. The obtained results show that the robotic architecture is able to produce appropriate behavior and to learn from social interaction. (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
The immersed boundary method is a versatile tool for the investigation of flow-structure interaction. In a large number of applications, the immersed boundaries or structures are very stiff, and strong tangential forces on these interfaces induce a well-known, severe time-step restriction for explicit discretizations. This excessive stability constraint can be removed with fully implicit or suitable semi-implicit schemes, but at a seemingly prohibitive computational cost. While economical alternatives have been proposed recently for some special cases, there is a practical need for a computationally efficient approach that can be applied more broadly. In this context, we revisit a robust semi-implicit discretization introduced by Peskin in the late 1970s which has received renewed attention recently. This discretization, in which the spreading and interpolation operators are lagged, leads to a linear system of equations for the interface configuration at the future time, when the interfacial force is linear. However, this linear system is large and dense and thus it is challenging to streamline its solution. Moreover, while the same linear system or one of similar structure could potentially be used in Newton-type iterations, nonlinear and highly stiff immersed structures pose additional challenges to iterative methods. In this work, we address these problems and propose cost-effective computational strategies for solving Peskin's lagged-operators type of discretization. We do this by first constructing a sufficiently accurate approximation to the system's matrix and we obtain a rigorous estimate for this approximation. This matrix is expeditiously computed by using a combination of pre-calculated values and interpolation. The availability of a matrix allows for more efficient matrix-vector products and facilitates the design of effective iterative schemes.
We propose efficient iterative approaches to deal with both linear and nonlinear interfacial forces and simple or complex immersed structures with tethered or untethered points. One of these iterative approaches employs a splitting in which we first solve a linear problem for the interfacial force and then we use a nonlinear iteration to find the interface configuration corresponding to this force. We demonstrate that the proposed approach is several orders of magnitude more efficient than the standard explicit method. In addition to considering the standard elliptical drop test case, we show both the robustness and efficacy of the proposed methodology with a 2D model of a heart valve. (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
A total of 7239 lactation curves of Caracu cows, recorded weekly between 1978 and 1988 and belonging to the Chiqueirão Farm, Poços de Caldas, MG, were fitted. The functions used were the hyperbolic linear (FLH), the logarithmic quadratic (FQL), the incomplete gamma (FGI) and the inverse polynomial (FPI). The parameters were estimated by nonlinear regression, using iterative procedures. Goodness of fit was assessed by the adjusted coefficient of determination (R²A), the Durbin-Watson (DW) test, and the means and standard deviations estimated for the parameters and functions of the parameters of the models. For the average curve, R²A exceeded 0.90 for all functions. Good fits, based on R²A > 0.80, were obtained for 25.2%, 39.1%, 31.1% and 28.4% of the lactations fitted by the FLH, FQL, FGI and FPI functions, respectively. According to the DW test, good fits were obtained for 29.4% of the lactations fitted by FLH, 54.9% by FQL, 34.9% by FGI and 29.6% by FPI. By both criteria, FQL was superior to the other functions, indicating large variation in the shapes of the lactation curves generated by the individual fits. Atypical curves were estimated by the functions, with peaks occurring before calving and sometimes after the end of lactation. All functions presented problems when fitting individual data.
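For the incomplete gamma (Wood) function y = a·t^b·e^(−ct), taking logs gives ln y = ln a + b ln t − ct, which is linear in (ln a, b, c) and solvable by ordinary least squares; this log-linearization is a common way to obtain starting values for the iterative nonlinear fit. The sketch below uses synthetic, noiseless weekly data (hypothetical parameter values, not the Caracu records):

```python
import math

# Wood's lactation curve y = a * t^b * exp(-c*t), fitted in log space.
a_true, b_true, c_true = 15.0, 0.25, 0.04
weeks = list(range(1, 41))
y = [a_true * t ** b_true * math.exp(-c_true * t) for t in weeks]

# Design matrix [1, ln t, -t] and target ln y
X = [[1.0, math.log(t), -float(t)] for t in weeks]
z = [math.log(v) for v in y]

# Normal equations (X^T X) beta = X^T z, solved by Gaussian elimination
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xtz = [sum(r[i] * zi for r, zi in zip(X, z)) for i in range(3)]
M = [row[:] + [rhs] for row, rhs in zip(XtX, Xtz)]
for col in range(3):                       # forward elimination with pivoting
    piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    for r in range(col + 1, 3):
        f = M[r][col] / M[col][col]
        M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
beta = [0.0, 0.0, 0.0]
for i in (2, 1, 0):                        # back substitution
    beta[i] = (M[i][3] - sum(M[i][j] * beta[j] for j in range(i + 1, 3))) / M[i][i]

a_hat, b_hat, c_hat = math.exp(beta[0]), beta[1], beta[2]
peak_week = b_hat / c_hat                  # peak yield occurs at t = b/c
print(a_hat, b_hat, c_hat, peak_week)
```

The peak time b/c makes the atypical-curve diagnosis concrete: a fitted b/c before parturition or beyond the lactation length signals a biologically implausible individual fit.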
Abstract:
This paper discusses the application of a damage detection methodology to monitor the location and extent of partial structural damage. The methodology iteratively combines the model updating technique based on frequency response functions (FRFs) with monitoring data, aiming at identifying the damaged area of the structure. After the updating procedure reaches a good correlation between the models, it compares the parameters of the damaged structure with those of the undamaged one to find the deteriorated area. The influence of the FEM mesh size on the evaluation of the extent of the damage is also discussed. The methodology is applied using real experimental data from a spatial frame structure.
Abstract:
A fourth-order numerical method for solving the Navier-Stokes equations in streamfunction/vorticity formulation on a two-dimensional non-uniform orthogonal grid has been tested on the fluid flow in a constricted symmetric channel. The family of grids is generated algebraically using a conformal transformation followed by a non-uniform stretching of the mesh cells, in which the shape of the channel boundary can vary from a smooth constriction to one which possesses a very sharp but smooth corner. The generality of the grids allows the use of long channels upstream and downstream as well as having a refined grid near the sharp corner. Derivatives in the governing equations are replaced by fourth-order central differences and the vorticity is eliminated, either before or after the discretization, to form a wide difference molecule for the streamfunction. Extra boundary conditions, necessary for wide-molecule methods, are supplied by a procedure proposed by Henshaw et al. The ensuing set of non-linear equations is solved using Newton iteration. Results have been obtained for Reynolds numbers up to 250 for three constrictions, the first being smooth, the second having a moderately sharp corner and the third with a very sharp corner. Estimates of the error incurred show that the results are very accurate and substantially better than those of the corresponding second-order method. The observed order of the method has been shown to be close to four, demonstrating that the method is genuinely fourth-order. © 1977 John Wiley & Sons, Ltd.
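The observed-order check described above can be illustrated with the standard five-point, fourth-order central difference stencil on a smooth stand-in function (sin here, not the streamfunction): halving the step should cut the error by about 2⁴ = 16, so the observed order comes out near four:

```python
import math

# Fourth-order central difference:
# f'(x) ~ (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h)
def d4(f, x, h):
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

x = 1.0
exact = math.cos(x)                       # derivative of sin at x
e1 = abs(d4(math.sin, x, 0.10) - exact)   # error at step h
e2 = abs(d4(math.sin, x, 0.05) - exact)   # error at step h/2
order = math.log(e1 / e2, 2)              # observed order of accuracy
print(order)  # ~4
```

The same halving-the-grid comparison, applied to the computed flow instead of a known function, is how the paper demonstrates that the method is genuinely fourth-order.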
Abstract:
Planning hot forging processes is a time-consuming activity with high costs involved because of the trial-and-error iterative methods used to design dies and to choose equipment and process conditions. Some processes demand many months to produce forged parts with controlled shapes, dimensions and microstructure. This paper shows how expert systems can help engineers to reduce the time needed to design precision forged parts and dies from machined parts. The software ADHFD interfacing MS Visual Basic v.5.0 and SolidEdge v.3.0 was used to design flashless hot forged gears, chosen from families of gears. © 1998 Elsevier Science S.A. All rights reserved.