973 results for Iterative Closest Point (ICP) Algorithm
Abstract:
A numerical algorithm was developed to solve the thermochemical conversion of a solid fuel. It was designed to be flexible and dependent on the reaction mechanism being represented. To that end, the system of equations characteristic of this type of problem was solved through an iterative method combined with symbolic mathematics. Because of nonlinearities in the equations, and since small particles are involved, Newton's method is applied to reduce the system of partial differential equations (PDEs) to a system of ordinary differential equations (ODEs). This reduction process is based on combining that iterative method with numerical differentiation, since it can incorporate analytical functions into the resulting ODEs. The reduced model is solved numerically using the bi-conjugate gradient (BCG) technique. The model promises a high convergence rate with a low number of iterations, as well as fast solution of the new linear system generated. Moreover, the algorithm proves independent of the underlying mesh size. For validation, the normalized mass is computed and compared with experimental thermogravimetry values found in the literature, and a test with a simplified reaction mechanism is performed.
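The abstract does not include the solver itself; as a rough illustration of the bi-conjugate gradient technique it names, the sketch below solves an arbitrary nonsymmetric sparse system with SciPy's `bicg` routine. The tridiagonal matrix is a made-up stand-in for the linear system a Newton step would produce.

```python
# Rough illustration (not the authors' code) of the bi-conjugate gradient
# technique: solve a nonsymmetric sparse linear system with SciPy.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicg

n = 100
# Diagonally dominant, nonsymmetric tridiagonal matrix as a placeholder
# for the linearized system produced by a Newton step.
A = diags([-1.0, 2.5, -0.5], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = bicg(A, b)          # info == 0 signals successful convergence
print("info:", info)
print("residual norm:", np.linalg.norm(A @ x - b))
```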
Abstract:
The disposal or reuse of produced water from the petroleum industry is difficult because of the environmental impacts caused by its high salinity and toxic components, and because of the risk of clogging production columns through scale formation, which reduces oil production and causes major losses in the extraction process. Knowledge of the chemical composition of produced water is therefore very important. The method proposed in this work aims at the determination of trace elements (Co, Cr, Fe, Mn, Ni, Se, and V) in petroleum produced water samples by inductively coupled plasma optical emission spectrometry (ICP OES), using microwave-assisted acid digestion for sample preparation (15 g of sample and 2 mL of concentrated HNO3). An analytical curve prepared in 2% v v-1 HNO3 was adopted for the method after verifying that salinity-based matrix matching is not required. For Ni no internal standard is needed; for Co, Cr, Fe, Mn, and V the best results were obtained using Sc as internal standard, and for Se the use of Y as internal standard is recommended. The limits of detection obtained were Co 0.67, Cr 1.2, Fe 2.3, Mn 0.49, Ni 1.9, Se 3.7, and V 5.5 μg L-1; the limits of quantification were Co 2.2, Cr 4.0, Fe 7.7, Mn 1.6, Ni 6.5, Se 12.4, and V 18.3 μg L-1. The accuracy of the procedure was verified through recovery tests at two concentration levels (40 and 80 μg L-1) and by analysis of the certified reference materials SLEW-2 (estuarine water) and NASS-5 (seawater). Good recovery values were obtained, and there was no significant difference (95% confidence) between the results obtained and the certified values of the reference materials.
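For readers unfamiliar with how such limits are usually derived, the sketch below applies the common blank-based formulas LOD = 3·s_blank/m and LOQ = 10·s_blank/m; the numbers are invented for illustration and are not data from this study.

```python
# Illustrative only (fictitious values, not from this study): blank-based
# estimates of limit of detection and quantification,
#   LOD = 3 * s_blank / m,   LOQ = 10 * s_blank / m,
# where s_blank is the standard deviation of repeated blank readings and
# m is the slope of the analytical curve.
import numpy as np

blank_signals = np.array([102.0, 98.5, 101.2, 99.8, 100.4,
                          101.9, 98.9, 100.7, 99.5, 100.1])  # 10 blank replicates
slope = 350.0  # counts per (ug/L), from the calibration curve

s_blank = blank_signals.std(ddof=1)
lod = 3 * s_blank / slope    # ug/L
loq = 10 * s_blank / slope   # ug/L
print(f"LOD = {lod:.3f} ug/L, LOQ = {loq:.3f} ug/L")
```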
Abstract:
Organotin compounds, mainly tributyltin (TBT), are environmental contaminants used chiefly in antifouling paints for ships. They bioaccumulate and can be found in mammals, including humans; the main source of exposure is the ingestion of contaminated food. The aim of this study was the determination of tin in tissues of female rats chronically exposed to TBT using inductively coupled plasma mass spectrometry (ICP-MS). Adult female Wistar rats were used in the experiments, divided into two groups: a group exposed to 100 ng kg-1 day-1 of TBT for 15 days, and a reference group that received only the vehicle over the same exposure period. At the end of the exposure, the animals were sacrificed, and plasma, heart, kidney, lung, liver, and ovary were collected for analysis. The samples were oven-dried for 72 hours and pulverized. Tin was determined by ICP-MS after microwave-assisted acid digestion of approximately 100 mg of sample. The calculated limit of detection (LOD) was 4.3 ng L-1, which allows the determination of tin in tissue samples from experimental animals. Accuracy was verified by analysis of the certified urine reference material Seronorm Urine (54.6 ± 2.7 μg L-1), yielding 50.1 ± 3.8 μg L-1. The tin concentration was determined in plasma, heart, kidney, lung, liver, and ovary samples from the TBT-exposed group and the control group. There was a statistically significant difference between the two groups for all samples analysed; the differences between the groups were most pronounced in the liver and kidney samples. Furthermore, this study showed that tin in the organism of exposed rats is distributed across tissues, leading to the morphophysiological alterations already described in ovaries, heart, and liver.
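The abstract does not name the statistical test used for the group comparison; as one common choice for two independent groups, the sketch below runs Welch's two-sample t-test on made-up tin concentrations with SciPy.

```python
# Illustrative only: a two-sample comparison of fictitious tin
# concentrations (ug/g) between an exposed and a control group. The
# study's actual test is not specified in the abstract; Welch's t-test
# is one common choice for this design.
import numpy as np
from scipy import stats

exposed = np.array([2.10, 2.35, 1.98, 2.22, 2.41, 2.05])  # fictitious values
control = np.array([0.31, 0.28, 0.35, 0.26, 0.33, 0.30])  # fictitious values

t, p = stats.ttest_ind(exposed, control, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4g}")  # p < 0.05 -> significant at 95% confidence
```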
Abstract:
Polymers have become the reference material for high-reliability and high-performance applications. In this work, a multi-scale approach is proposed to investigate the mechanical properties of polymer-based materials under strain. To achieve a better understanding of phenomena occurring at the smaller scales, a coupling of the Finite Element Method (FEM) and Molecular Dynamics (MD) modeling in an iterative procedure was employed, enabling the prediction of the macroscopic constitutive response. As the mechanical response can be related to the local microstructure, which in turn depends on the nano-scale structure, the previously described multi-scale method computes the stress-strain relationship at every analysis point of the macro-structure by detailed modeling of the underlying micro- and meso-scale deformation phenomena. The proposed multi-scale approach can enable prediction of properties at the macroscale while taking into consideration phenomena that occur at the mesoscale, thus offering increased potential accuracy compared to traditional methods.
Abstract:
A numerical comparison is performed between three methods of third order with the same structure, namely BSC, Halley’s and Euler–Chebyshev’s methods. As the behavior of an iterative method applied to a nonlinear equation can be highly sensitive to the starting points, the numerical comparison is carried out, allowing for complex starting points and for complex roots, on the basins of attraction in the complex plane. Several examples of algebraic and transcendental equations are presented.
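As a concrete picture of what a basin-of-attraction comparison involves, here is a small NumPy sketch that labels each complex starting point by the root to which Halley's method converges for the example equation f(z) = z^3 - 1. The test function and grid are our own choices, not taken from the paper; BSC and Euler-Chebyshev would be mapped the same way with their respective iteration formulas.

```python
# Basins of attraction of Halley's method for f(z) = z^3 - 1 on a grid of
# complex starting points (illustrative example, not the paper's code).
import numpy as np

roots = np.exp(2j * np.pi * np.arange(3) / 3)    # the three roots of z^3 = 1

def halley_basins(n=400, lim=2.0, iters=40):
    x = np.linspace(-lim, lim, n)
    z = x[None, :] + 1j * x[:, None]             # grid of starting points
    for _ in range(iters):
        f, fp, fpp = z**3 - 1, 3 * z**2, 6 * z
        denom = 2 * fp**2 - f * fpp
        with np.errstate(divide="ignore", invalid="ignore"):
            step = 2 * f * fp / denom
        # Halley's iteration: z <- z - 2 f f' / (2 f'^2 - f f'')
        z = np.where(np.abs(denom) > 1e-12, z - step, z)
    # Label each starting point by the nearest root after iterating.
    return np.argmin(np.abs(z[..., None] - roots[None, None, :]), axis=-1)

basins = halley_basins()
print(np.bincount(basins.ravel()))   # grid points attracted to each root
```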
Abstract:
The radial undistortion model proposed by Fitzgibbon and the radial fundamental matrix were early steps to extend classical epipolar geometry to distorted cameras. Later, minimal solvers were proposed to find relative pose and radial distortion given point correspondences between images. However, a big drawback of all these approaches is that they require the distortion center to be exactly known. In this paper we show how the distortion center can be absorbed into a new radial fundamental matrix. This new formulation is much more practical in reality, as it also accommodates digital zoom, cropped images, and camera-lens systems where the distortion center does not exactly coincide with the image center. In particular, we start from the setting where only one of the two images contains radial distortion, analyze the structure of the particular radial fundamental matrix, and show that the technique also generalizes to other linear multi-view relationships such as the trifocal tensor and the homography. For the new radial fundamental matrix we propose different estimation algorithms from 9, 10, and 11 points. We show how to extract the epipoles and prove the practical applicability on several epipolar-geometry image pairs with strong distortion that, to the best of our knowledge, no other existing algorithm can handle properly.
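The paper's estimation algorithms are not reproduced here, but the epipole-extraction step it mentions has a standard linear-algebra core: the epipoles are the right and left null vectors of the fundamental matrix, i.e. F e = 0 and F^T e' = 0. A minimal sketch via the SVD, using a random rank-2 matrix as a stand-in for an estimated F:

```python
# Extract epipoles as the null vectors of a (here randomly generated)
# rank-2 matrix standing in for an estimated fundamental matrix.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
U, S, Vt = np.linalg.svd(M)
F = U @ np.diag([S[0], S[1], 0.0]) @ Vt      # enforce rank 2

def epipoles(F):
    U, _, Vt = np.linalg.svd(F)
    e = Vt[-1]                 # right null vector: F e = 0 (epipole, image 1)
    ep = U[:, -1]              # left null vector: F^T e' = 0 (epipole, image 2)
    # Normalize homogeneous coordinates (fails only for epipoles at infinity).
    return e / e[-1], ep / ep[-1]

e, ep = epipoles(F)
print(np.abs(F @ e).max(), np.abs(F.T @ ep).max())   # both ~ 0
```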
Abstract:
Many organisations need to extract useful information from huge amounts of movement data. One example is found in maritime transportation, where the automated identification of a diverse range of traffic routes is a key management issue for improving the maintenance of ports and ocean routes and accelerating ship traffic. This paper addresses, as a first stage, the research challenge of developing an approach for the automated identification of traffic routes based on clustering motion vectors rather than reconstructed trajectories. The immediate benefit of the proposed approach is to avoid reconstructing trajectories in terms of the geometric shape of the path, the position in space, the life span, and changes of speed, direction, and other attributes over time. For clustering the moving objects, an adapted version of the Shared Nearest Neighbour algorithm is used. The motion vectors, each with a position and a direction, are analysed in order to identify clusters of vectors moving in the same direction. These clusters represent traffic routes, and the preliminary results are promising for the automated identification of traffic routes with different shapes and densities, as well as for handling noisy data.
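The paper uses an adapted version of the Shared Nearest Neighbour algorithm; the sketch below shows only the generic SNN ingredient, the shared-neighbour count between motion vectors embedded as position plus direction, on synthetic data of our own making.

```python
# Generic Shared Nearest Neighbour (SNN) similarity on synthetic motion
# vectors (x, y, dx, dy): the similarity of two points is the number of
# neighbours their k-NN lists share.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
# Two synthetic "routes": positions plus unit direction components.
a = np.c_[rng.uniform(0, 1, (50, 2)), np.tile([1.0, 0.0], (50, 1))]
b = np.c_[rng.uniform(2, 3, (50, 2)), np.tile([0.0, 1.0], (50, 1))]
X = np.vstack([a, b])

k = 10
nbrs = NearestNeighbors(n_neighbors=k).fit(X)
knn = nbrs.kneighbors(X, return_distance=False)

def snn_similarity(i, j, knn):
    # Size of the intersection of the two k-NN lists.
    return len(set(knn[i]) & set(knn[j]))

# High within a route, (near-)zero across routes.
print(snn_similarity(0, 1, knn), snn_similarity(0, 60, knn))
```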
Abstract:
Pectus excavatum is the most common congenital deformity of the anterior chest wall, in which several ribs and the sternum grow abnormally. Nowadays, surgical correction is carried out in children and adults through the Nuss technique, which has been shown to be safe, with cosmesis and the prevention of psychological problems and social stress as its major drivers. To date, no application is known to predict the cosmetic outcome of pectus excavatum surgical correction; such a tool could help the surgeon and the patient when deciding whether surgical correction is needed. This work is a first step towards predicting the postsurgical outcome of pectus excavatum correction. To this end, a point cloud of the skin surface along the thoracic wall was first acquired using Computed Tomography (before surgical correction) and the Polhemus FastSCAN (after surgical correction). A surface mesh was then reconstructed from the two point clouds using a Radial Basis Function algorithm, followed by affine registration between the meshes. After registration, the surgical correction influence area (SCIA) of the thoracic wall was studied. This SCIA was used to train, test, and validate artificial neural networks (ANNs) in order to predict the surgical outcome of pectus excavatum correction and to determine the degree of convergence of the SCIA across patients. Often, the ANNs did not converge to a satisfactory solution (each patient has their own deformity characteristics), thus preventing the creation of a mathematical model capable of estimating the postsurgical outcome with satisfactory results.
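As a rough illustration of the surface-reconstruction step, the sketch below fits a Radial Basis Function interpolant to a synthetic scattered point cloud with SciPy's `RBFInterpolator`; the actual pipeline, data, and RBF variant used in the work may differ.

```python
# Reconstruct a height-map surface from a scattered point cloud with an
# RBF interpolant. The synthetic "chest wall" is a smooth bump standing
# in for real scan data.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, (300, 2))         # scattered (x, y) sample locations
z = np.exp(-4 * (pts**2).sum(axis=1))      # synthetic depth values

rbf = RBFInterpolator(pts, z, kernel="thin_plate_spline")

# Evaluate the reconstructed surface on a regular grid.
g = np.linspace(-1, 1, 50)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
surface = rbf(grid).reshape(50, 50)
print(surface.shape, float(surface.max()))
```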
Abstract:
Polymeric materials have become the reference material for high-reliability and high-performance applications. However, their performance in service conditions is difficult to predict, due in large part to their inherently complex morphology, which leads to non-linear, anisotropic behavior that is highly dependent on the thermomechanical environment under which the material is processed. In this work, a multiscale approach is proposed to investigate the mechanical properties of polymer-based materials under strain. To achieve a better understanding of phenomena occurring at the smaller scales, the coupling of a finite element method (FEM) and molecular dynamics (MD) modeling in an iterative procedure was employed, enabling the prediction of the macroscopic constitutive response. As the mechanical response can be related to the local microstructure, which in turn depends on the nano-scale structure, this multiscale approach computes the stress-strain relationship at every analysis point of the macro-structure by detailed modeling of the underlying micro- and meso-scale deformation phenomena. The proposed multiscale approach can enable prediction of properties at the macroscale while taking into consideration phenomena that occur at the mesoscale, thus offering increased potential accuracy compared to traditional methods.
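The coupling procedure is only described in outline here; the toy sketch below conveys the idea at a single analysis point, iterating a fictitious lower-scale stress model until it balances the applied macroscopic stress. `md_stress` is a made-up stand-in, not an actual MD simulation, and the loop is a generic fixed-point scheme rather than the authors' procedure.

```python
# Schematic FEM-MD coupling idea reduced to one analysis point: iterate
# until the stress returned by the lower-scale model balances the applied
# macroscopic stress.

def md_stress(strain):
    # Fictitious nonlinear response standing in for a molecular dynamics
    # evaluation of the local microstructure (Pa).
    return 1.0e9 * strain / (1.0 + 5.0 * abs(strain))

def couple(applied_stress, tol=1.0, max_iter=100):
    strain = applied_stress / 1.0e9          # initial guess: linear elasticity
    for _ in range(max_iter):
        residual = md_stress(strain) - applied_stress
        if abs(residual) < tol:
            break
        strain -= residual / 1.0e9           # simple fixed-point correction
    return strain

print(couple(1.0e8))  # strain at which the micro-model carries the load
```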
Abstract:
Quantitative analysis of cine cardiac magnetic resonance (CMR) images for the assessment of global left ventricular morphology and function remains a routine task in clinical cardiology practice. To date, this process requires user interaction, which prolongs the examination (i.e., increases cost) and introduces observer variability. In this study, we sought to validate the feasibility, accuracy, and time efficiency of a novel framework for automatic quantification of left ventricular global function in a clinical setting.
Abstract:
In the Sparse Point Representation (SPR) method, the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, such derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids having appropriate structure. This implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken concerning the grid structure in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell's equations.
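To make the "interpolation error" definition concrete, the sketch below computes interpolatory wavelet coefficients with the cubic (four-point Deslauriers-Dubuc) subdivision scheme on a function with a jump; the test function and grid are our own, not taken from the paper.

```python
# Interpolatory wavelet coefficients as interpolation errors: with the
# four-point Deslauriers-Dubuc scheme, the detail at an odd grid point is
# the error of the prediction from four even neighbours, with weights
# (-1/16, 9/16, 9/16, -1/16).
import numpy as np

def dd4_predict(f_even):
    # Predict midpoint values from four surrounding even-grid samples.
    return (-f_even[:-3] + 9 * f_even[1:-2] + 9 * f_even[2:-1] - f_even[3:]) / 16

x = np.linspace(0, 1, 65)                 # fine grid: two dyadic levels interleaved
f = np.sin(2 * np.pi * x) + (x > 0.5)     # smooth part plus a jump

even, odd = f[::2], f[1::2]
details = odd[1:-1] - dd4_predict(even)   # wavelet (interpolation-error) coeffs

# Coefficients are tiny in smooth regions and large near the jump, which
# is what drives the sparse, adaptive SPR grid.
print(np.abs(details).round(3))
```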
Abstract:
The aim of the present work was to develop, implement, and validate methods for the determination of calcium (Ca), magnesium (Mg), sodium (Na), potassium (K), and phosphorus (P) content in biodiesel by ICP-OES. This method enabled quality control of biodiesel, with the advantage of providing multi-element analysis and thus a shorter analysis time. Since biodiesel is one of the main sources of renewable energy and an alternative to conventional diesel, this type of analysis is extremely useful for its characterization. Based on the quantitative and qualitative analysis, and after validation of the respective assays, Table 1 presents the optimized conditions for each element under study. The ICP-OES operating conditions were chosen taking into account the characteristics of the element under study and the type of equipment used for its analysis, so as to obtain the best signal-to-background ratio. For validation of the assays, recovery tests were performed, limits of detection and quantification were determined, repeatability and reproducibility tests were carried out, and the calibration curves were verified. Table 2 presents the chosen wavelengths (free of interferences) and the respective limits of detection and quantification for the elements analysed by ICP-OES, in radial and attenuated radial viewing.
Abstract:
5th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS 2008); 8th World Congress on Computational Mechanics (WCCM8)
Abstract:
The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to a different correlation noise channel. It is thus proposed to exploit the multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion-compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
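The codec itself is beyond a short example, but the syndrome-based Slepian-Wolf idea it builds on can be shown in miniature: the encoder transmits the syndrome of the source bits, and the decoder picks the syndrome-consistent word closest to its side information. The toy parity-check matrix and brute-force search below stand in for the large sparse LDPC codes and belief propagation used in practice.

```python
# Toy syndrome-based Slepian-Wolf coding: send s = H x (mod 2) and decode
# by finding the syndrome-consistent word nearest to the side information.
import itertools
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],      # toy parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

x = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)    # source bits
s = H @ x % 2                                       # transmitted syndrome

side_info = np.array([1, 0, 1, 0, 0, 1], dtype=np.uint8)  # noisy SI (1 bit flipped)

# Brute-force "decoder": nearest syndrome-consistent word (Hamming distance).
best = min((c for c in itertools.product([0, 1], repeat=6)
            if np.array_equal(H @ np.array(c, dtype=np.uint8) % 2, s)),
           key=lambda c: int(np.sum(np.array(c) != side_info)))
print(np.array(best), "recovered ==", x.tolist())
```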
Abstract:
This paper presents an algorithm to efficiently generate the state space of systems specified using the IOPT Petri-net modeling formalism. IOPT nets are a non-autonomous Petri-net class based on Place-Transition nets, with an extended set of features designed to allow the rapid prototyping and synthesis of system controllers through an existing hardware-software co-design framework. To obtain coherent and deterministic operation, IOPT nets use a maximal-step execution semantics where, in a single execution step, all enabled transitions fire simultaneously. This increases the resulting state-space complexity and can cause an arc "explosion" effect: real-world applications with several million states can reach an order of magnitude more arcs, leading to the need for high-performance state-space generation algorithms. The proposed algorithm applies a compilation approach: it reads a PNML file containing one IOPT model and automatically generates an optimized C program to calculate the corresponding state space.
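The generator described above compiles PNML to optimized C; purely as an illustration of maximal-step semantics, the Python sketch below enumerates the state space of a toy conflict-free Place-Transition net, firing all enabled transitions in each step.

```python
# Breadth-first state-space generation for a toy Place-Transition net under
# maximal-step semantics: every enabled transition fires in the same step.
# Conflicts (transitions competing for tokens) are ignored here, so this
# sketch only handles conflict-free nets.
from collections import deque

# Transitions as (consume, produce) maps over place names.
transitions = {
    "t1": ({"p1": 1}, {"p2": 1}),
    "t2": ({"p2": 1}, {"p3": 1}),
    "t3": ({"p3": 1}, {"p1": 1}),
}

def maximal_step(marking):
    enabled = [t for t, (pre, _) in transitions.items()
               if all(marking.get(p, 0) >= n for p, n in pre.items())]
    new = dict(marking)
    for t in enabled:
        pre, post = transitions[t]
        for p, n in pre.items():
            new[p] -= n
        for p, n in post.items():
            new[p] = new.get(p, 0) + n
    return frozenset(new.items()), tuple(enabled)

initial = {"p1": 1, "p2": 0, "p3": 0}
seen, queue, arcs = set(), deque([frozenset(initial.items())]), []
while queue:
    m = queue.popleft()
    if m in seen:
        continue
    seen.add(m)
    succ, step = maximal_step(dict(m))
    if step:                       # record one arc per maximal step
        arcs.append((m, step, succ))
        queue.append(succ)
print(len(seen), "states,", len(arcs), "arcs")
```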