967 results for Point interpolation method
Abstract:
Currently, remote sensors and high-performance computers are the main instruments used to collect and produce oceanographic data. With these data, it is possible to carry out studies that simulate and predict ocean behaviour by means of regional numerical models. Among the important topics in oceanography are those concerning environmental impacts, anthropogenic contamination, the use of renewable energy, and port operations. However, given the large volume of data generated by environmental institutions, in the form of results from global models such as HYCOM (Hybrid Coordinate Ocean Model) and from the Reanalysis programs of NOAA (National Oceanic and Atmospheric Administration), computational routines are needed to process initial and boundary conditions so that they can be applied to regional models such as TELEMAC3D (www.opentelemac.org). Problems related to low resolution, missing data and the need to interpolate onto different meshes or vertical coordinate systems call for a computational mechanism that performs this processing properly. To this end, routines were developed in the Python programming language, employing nearest-neighbour interpolators, so that initial and boundary conditions for a numerical test simulation were prepared from raw data of the HYCOM model and the NOAA Reanalysis program. These results were compared against another numerical result whose conditions were built with a more sophisticated interpolation method, written in a different language, which was already in use at the laboratory. The analysis of the results showed that the routine developed in this work functions properly for generating initial and boundary conditions for the TELEMAC3D model. Nevertheless, a more sophisticated interpolator should be developed in order to improve interpolation quality, optimize computational cost, and produce conditions that are more realistic for use with TELEMAC3D.
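As a rough illustration of the nearest-neighbour step described above, here is a minimal Python sketch assuming SciPy is available; the source grid, field values and mesh nodes are synthetic placeholders, not actual HYCOM or TELEMAC3D data structures.

```python
# A minimal sketch of nearest-neighbour interpolation from a regular
# source grid (e.g. global-model output) onto unstructured target nodes
# (e.g. a regional-model mesh). All names and values are illustrative.
import numpy as np
from scipy.interpolate import NearestNDInterpolator

# Hypothetical 1-degree source grid with a synthetic field and gaps (NaN)
lon, lat = np.meshgrid(np.arange(-60.0, -40.0), np.arange(-40.0, -20.0))
temp = 20.0 + 0.1 * lat + 0.05 * lon
temp[3:5, 3:5] = np.nan  # simulate missing data

# Build the interpolator only from valid samples
valid = ~np.isnan(temp)
interp = NearestNDInterpolator(
    np.column_stack([lon[valid], lat[valid]]), temp[valid]
)

# Hypothetical unstructured mesh nodes of the regional model
nodes = np.random.uniform([-59.0, -39.0], [-41.0, -21.0], size=(1000, 2))
temp_nodes = interp(nodes)  # values for initial/boundary conditions
```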
Abstract:
Levelling and trigonometric height measurement are the methods most commonly used today for height determination, as their standard error is in the magnitude of millimetres as long as the sight length is less than 50 m. When creating a new construction map, the requirement on standard error differs from 1 cm (Fredriksson, 2011) to 10 cm (www.arvidsjaur.se) depending on the municipality concerned. When using network RTK for measuring, the height accuracy can fall below 3 cm when conditions are optimal. The purpose of this paper is to investigate whether network RTK can be used as an alternative for height determination when an accuracy better than 10 cm is required. Five points at locations with different conditions for accuracy had their heights determined with the three methods mentioned above. A positional accuracy was computed for each point and method, with the levelling result used as the reference for the calculations. To compare the results with the requirements, the expanded standard uncertainty with coverage factor k = 2, covering 95 %, was used. The result from trigonometric height measurement shows a positional accuracy of 4 mm. With network RTK, the points that were positioned without interference obtained a positional accuracy of 3.3 to 5.5 cm, while the points that were influenced by their environment, through multipath interference and obstructions, obtained positional accuracies of 123.3 cm and 234.4 cm; the positional accuracy of the method as a whole became 127.4 cm. The height determination with network RTK thus shows large differences in accuracy between the points. The conclusion is that network RTK measurement would not be a sufficiently accurate height determination method for the preparation of a new construction map in an area similar to the one used for this test. Conversely, for a construction map drawn up in an open area free from interfering obstacles, the results show that network RTK is an acceptable method for height determination, depending on the requirements of the municipality.
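For readers unfamiliar with the expanded uncertainty used above, the sketch below illustrates the k = 2 (roughly 95 % coverage) calculation from repeated deviations against the levelling reference; the numbers are invented for illustration.

```python
# A minimal sketch of forming an expanded uncertainty (coverage factor
# k = 2, ~95 %) from repeated height differences against levelling.
import numpy as np

# Hypothetical deviations (m) of one point: network RTK minus levelling
dev = np.array([0.031, -0.028, 0.044, 0.019, -0.035])

u = np.sqrt(np.mean(dev**2))   # RMS deviation as standard uncertainty
U = 2.0 * u                    # expanded uncertainty, k = 2
print(f"u = {u:.3f} m, U = {U:.3f} m")
```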
Abstract:
In this thesis we treat a variant of the CUR factorization of a given matrix, obtained through the DEIM algorithm ("discrete empirical interpolation method"), in comparison with a method widely used in the literature, the Leverage Score method. To this end, a method for obtaining the QR factorization of a matrix incrementally is also discussed. The behaviour of the developed algorithms is illustrated on two application examples.
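A minimal NumPy sketch of a generic DEIM-based CUR factorization follows; it implements the standard greedy DEIM index selection on the leading singular vectors and is not necessarily the exact variant developed in the thesis.

```python
# A minimal sketch of CUR via DEIM: select rows/columns with the greedy
# DEIM procedure applied to leading singular vectors, then form the
# middle factor by pseudoinverses. Data are random for illustration.
import numpy as np

def deim(V):
    """Greedy DEIM index selection from an orthonormal basis V (n x k)."""
    n, k = V.shape
    p = [int(np.argmax(np.abs(V[:, 0])))]
    for j in range(1, k):
        c = np.linalg.solve(V[p, :j], V[p, j])  # interpolation coefficients
        r = V[:, j] - V[:, :j] @ c              # residual at current indices
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 150)) @ rng.standard_normal((150, 100))
k = 10
U, _, Vt = np.linalg.svd(A, full_matrices=False)
rows, cols = deim(U[:, :k]), deim(Vt[:k, :].T)

C, R = A[:, cols], A[rows, :]
Umid = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)   # middle factor
err = np.linalg.norm(A - C @ Umid @ R) / np.linalg.norm(A)
print(f"relative CUR error: {err:.2e}")
```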
Abstract:
Privacy issues and data scarcity in the PET field call for efficient methods to expand datasets via synthetic generation of new data that cannot be traced back to real patients and that are also realistic. In this thesis, machine learning techniques were applied to 1001 amyloid-beta PET images carrying an Alzheimer's disease evaluation: 540 were rated positive, 457 negative and 4 unknown. The Isomap algorithm was used as a manifold learning method to reduce the dimensions of the PET dataset; a numerical scale-free interpolation method was applied to invert the dimensionality reduction map. The interpolant was tested on the PET images via leave-one-out cross-validation (LOOCV), comparing each removed image with its reconstruction using the mean SSIM index (MSSIM = 0.76 ± 0.06). The effectiveness of this measure is questioned, since it indicated slightly higher performance for a comparison method based on PCA (MSSIM = 0.79 ± 0.06), even though the latter gave reconstructed images of clearly poorer quality than those recovered by the numerical inverse mapping. Ten synthetic PET images were generated and, after being mixed with ten originals, were sent to a team of clinicians for a visual assessment of their realism; no significant agreement was found either between the clinicians and the true image labels or among the clinicians, meaning that original and synthetic images were indistinguishable. The future perspective of this thesis is to improve the amyloid-beta PET research field by increasing the available data, overcoming the constraints of data acquisition and privacy issues. Potential improvements can be achieved by refining the manifold learning and inverse mapping stages of the PET image analysis, by exploring different combinations of algorithm parameters and by applying other non-linear dimensionality reduction algorithms. A final prospect of this work is the search for new methods to assess image reconstruction quality.
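The following sketch outlines the reduce-then-invert pipeline with scikit-learn's Isomap and, as a stand-in for the scale-free inverse interpolation used in the thesis, SciPy's generic RBF interpolant; the data are synthetic vectors, not PET images.

```python
# A minimal sketch of the pipeline: Isomap dimensionality reduction,
# a generic RBF interpolant as a stand-in inverse map, and generation
# of a "synthetic" sample from a new point in the embedding.
import numpy as np
from sklearn.manifold import Isomap
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))       # stand-in for flattened images

emb = Isomap(n_neighbors=10, n_components=5)
Y = emb.fit_transform(X)                  # low-dimensional coordinates

inverse = RBFInterpolator(Y, X)           # map embedding -> image space

# Sample a new embedding point and reconstruct an "image" from it
y_new = Y.mean(axis=0, keepdims=True) + 0.1 * rng.standard_normal((1, 5))
x_new = inverse(y_new)                    # synthetic sample, shape (1, 64)
```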
Abstract:
To translate and transfer solution data between two totally different meshes (i.e. mesh 1 and mesh 2), a consistent point-searching algorithm for solution interpolation in unstructured meshes consisting of 4-node bilinear quadrilateral elements is presented in this paper. The proposed algorithm has the following significant advantages: (1) The use of a point-searching strategy allows a point in one mesh to be accurately related to the element containing it in the other mesh. Thus, to translate/transfer the solution of any particular point from mesh 2 to mesh 1, only one element in mesh 2 needs to be inversely mapped. This minimizes the number of elements to which the inverse mapping is applied, making the present algorithm very effective and efficient. (2) Analytical solutions for the local coordinates of any point in a four-node quadrilateral element, derived in a rigorous mathematical manner in the context of this paper, make it possible to carry out the inverse mapping very effectively and efficiently. (3) The use of consistent interpolation enables the interpolated solution to be compatible with the original solution and therefore guarantees an interpolated solution of extremely high accuracy. After the mathematical formulations of the algorithm are presented, the algorithm is tested and validated on a challenging problem. The results of the test problem demonstrate the generality, accuracy, effectiveness, efficiency and robustness of the proposed consistent point-searching algorithm. Copyright (C) 1999 John Wiley & Sons, Ltd.
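A minimal sketch of such an analytical inverse mapping is given below: eliminating one local coordinate from the bilinear map reduces the problem to a quadratic in the other. The node ordering and the edge-case handling are assumptions, and the paper's exact formulas may differ.

```python
# A minimal sketch of inverting the bilinear map of a 4-node quad:
# given a global point, recover local coordinates (xi, eta).
# With x = a0 + a1*xi + a2*eta + a3*xi*eta (and similarly for y),
# eliminating xi gives A*eta^2 + B*eta + C = 0.
import numpy as np

def inverse_map(xy, quad):
    """quad: 4x2 node coordinates, counter-clockwise from local (-1,-1)."""
    x, y = quad[:, 0], quad[:, 1]
    a = 0.25 * np.array([x.sum(), -x[0]+x[1]+x[2]-x[3],
                         -x[0]-x[1]+x[2]+x[3], x[0]-x[1]+x[2]-x[3]])
    b = 0.25 * np.array([y.sum(), -y[0]+y[1]+y[2]-y[3],
                         -y[0]-y[1]+y[2]+y[3], y[0]-y[1]+y[2]-y[3]])
    dx, dy = xy[0] - a[0], xy[1] - b[0]
    A = a[3]*b[2] - a[2]*b[3]
    B = dx*b[3] - dy*a[3] + a[1]*b[2] - a[2]*b[1]
    C = dx*b[1] - dy*a[1]
    etas = [-C / B] if abs(A) < 1e-12 else list(np.roots([A, B, C]).real)
    for eta in etas:                       # keep the root inside the element
        xi = (dx - a[2]*eta) / (a[1] + a[3]*eta)
        if abs(xi) <= 1 + 1e-9 and abs(eta) <= 1 + 1e-9:
            return xi, eta
    return None                            # point lies outside the element

quad = np.array([[0.0, 0.0], [2.0, 0.2], [2.3, 1.8], [0.1, 2.0]])
print(inverse_map(np.array([1.0, 1.0]), quad))
```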
Abstract:
In the Sparse Point Representation (SPR) method the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, the derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighbouring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behaviour of the wavelet reconstruction operators, which hold for SPR grids with appropriate structure. This implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken with the grid structure in order to keep the truncation error under a given accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
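As an illustration of the prediction step behind these wavelet coefficients, the sketch below applies the four-point (cubic) Deslauriers-Dubuc subdivision rule and thresholds the resulting interpolation errors; the test function, grid size and threshold are placeholders.

```python
# A minimal sketch of interpolatory wavelet coefficients: the
# coefficient at an odd grid point is the error of predicting its value
# from the coarser grid with the four-point (cubic) subdivision rule.
import numpy as np

def predict_midpoints(f):
    """Four-point Deslauriers-Dubuc prediction of midpoint values."""
    return (-f[:-3] + 9*f[1:-2] + 9*f[2:-1] - f[3:]) / 16.0

x = np.linspace(0.0, 1.0, 257)
f = np.tanh(50.0 * (x - 0.5))            # smooth except near x = 0.5

coarse, fine_odd = f[::2], f[1::2]
detail = fine_odd[1:-1] - predict_midpoints(coarse)  # wavelet coefficients

keep = np.abs(detail) > 1e-4             # significant points form the SPR
print(f"{keep.sum()} of {detail.size} odd points retained")
```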
Abstract:
OBJECTIVE: To assess the iodine status of Swiss population groups and to evaluate the influence of iodized salt as a vector for iodine fortification. DESIGN: The relationship between 24 h urinary iodine and Na excretion was assessed in the general population after correcting for confounders. Single-day intakes were estimated assuming that 92 % of dietary iodine is excreted in 24 h urine. Usual intake distributions were derived for male and female population groups after adjustment for within-subject variability. The estimated average requirement (EAR) cut-point method was applied as guidance to assess the inadequacy of the iodine supply. SETTING: Public health strategies to reduce dietary salt intake in the general population may affect its iodine supply. SUBJECTS: The study population (1481 volunteers, aged ≥15 years) was randomly selected from three different linguistic regions of Switzerland. RESULTS: The 24 h urine samples from 1420 participants were judged to be properly collected. Mean iodine intakes for men (n 705) and women (n 715) were 179 (sd 68.1) µg/d and 138 (sd 57.8) µg/d, respectively. Urinary Na and Ca and BMI were significantly and positively associated with higher iodine intake, as were male sex and non-smoking. Fifty-four per cent of the total iodine intake originated from iodized salt. The prevalence of inadequate iodine intake as estimated by the EAR cut-point method was 2 % for men and 14 % for women. CONCLUSIONS: The estimated prevalence of inadequate iodine intake was within the optimal target range of 2-3 % for men, but not for women.
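A minimal sketch of the EAR cut-point mechanics follows. It assumes the adult iodine EAR of 95 µg/d and samples from the reported unadjusted mean and SD rather than the variance-adjusted usual-intake distributions, so its percentages will not reproduce the 2 % and 14 % reported above.

```python
# A minimal sketch of the EAR cut-point method: the prevalence of
# inadequacy is the share of usual intakes below the EAR. The EAR of
# 95 ug/d is the assumed adult value; the distributions below use the
# reported (unadjusted) mean/sd and therefore overstate prevalence.
import numpy as np

rng = np.random.default_rng(0)
EAR = 95.0                                 # ug iodine per day (assumed)
men = rng.normal(179.0, 68.1, 100_000)     # reported mean/sd, men
women = rng.normal(138.0, 57.8, 100_000)   # reported mean/sd, women

for label, intake in [("men", men), ("women", women)]:
    prevalence = np.mean(intake < EAR) * 100.0
    print(f"{label}: {prevalence:.0f} % below EAR")
```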
Abstract:
Research on language equations has been active during recent decades. Compared to equations on words, equations on languages are much more difficult to solve: even very simple equations that are easy to solve for words can be very hard for languages. In this thesis we study two such equations, namely the commutation and conjugacy equations. We study these equations in some restricted special cases and compare some of the results to the solutions of the corresponding equations on words. For both equations we study the maximal solutions, the centralizer and the conjugator. We present a fixed point method that can be used to search for these maximal solutions and analyze the reasons why this method is not successful for all languages. We also give several examples to illustrate the behaviour of this method.
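A toy Python sketch of the fixed-point idea is given below: starting from all words up to a length bound, words whose products with L cannot be matched in the opposite order are pruned until nothing changes. The example language and the bounded-length treatment are illustrative simplifications, not the thesis's actual construction.

```python
# A toy greatest-fixed-point computation for the commutation condition:
# keep the largest X (within a length bound N) such that for all w in X
# and u in L, wu lies in L.X and uw lies in X.L.
from itertools import product

SIGMA, N = "ab", 4
L = {"ab", "ba"}                           # an example finite language
words = {"".join(p) for n in range(N + 1) for p in product(SIGMA, repeat=n)}

def bad(w, X):
    for u in L:
        if not any(w + u == l + x for l in L for x in X):
            return True
        if not any(u + w == x + l for x in X for l in L):
            return True
    return False

X = set(words)
while True:                                # monotone pruning: terminates
    removed = {w for w in X if bad(w, X)}
    if not removed:
        break
    X -= removed
print(sorted(X, key=lambda s: (len(s), s)))
```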
Abstract:
A new cloud point extraction (CPE) method was developed for the separation and preconcentration of copper(II) prior to spectrophotometric analysis. For this purpose, 1-(2,4-dimethylphenyl)azonaphthalen-2-ol (Sudan II) was used as the chelating agent and the solution pH was adjusted to 10.0 with borate buffer. Polyethylene glycol tert-octylphenyl ether (Triton X-114) was used as the extracting agent in the presence of sodium dodecylsulphate (SDS). After phase separation, based on the cloud point of the mixture, the surfactant-rich phase was diluted with acetone, and the enriched analyte was determined spectrophotometrically at 537 nm. The variables affecting CPE efficiency were optimized. The calibration curve was linear within the range 0.285-20 µg L-1 with a detection limit of 0.085 µg L-1. The method was successfully applied to the quantification of copper in different beverage samples.
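For the quantification step, a minimal sketch of fitting a linear calibration curve and deriving a detection limit as 3σ(blank)/slope is shown below; all absorbances and the blank standard deviation are invented values, not the study's data.

```python
# A minimal sketch of spectrophotometric quantification: fit a linear
# calibration curve from standards, estimate the detection limit as
# 3*sigma_blank/slope, and interpolate a sample concentration.
import numpy as np

conc = np.array([0.5, 2.0, 5.0, 10.0, 20.0])          # ug/L standards
absb = np.array([0.012, 0.046, 0.115, 0.228, 0.451])  # A at 537 nm

slope, intercept = np.polyfit(conc, absb, 1)
sigma_blank = 0.0006                      # sd of blank absorbance (invented)
lod = 3.0 * sigma_blank / slope           # detection limit, ug/L

sample_abs = 0.180
c_sample = (sample_abs - intercept) / slope
print(f"LOD = {lod:.3f} ug/L, sample = {c_sample:.2f} ug/L")
```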
Abstract:
The quantitative structure-property relationship (QSPR) for the boiling point (Tb) of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) was investigated. The molecular distance-edge vector (MDEV) index was used as the structural descriptor. The quantitative relationship between the MDEV index and Tb was modeled using multivariate linear regression (MLR) and an artificial neural network (ANN), respectively. Leave-one-out cross validation and external validation were carried out to assess the prediction performance of the developed models. For the MLR method, the prediction root mean square relative errors (RMSRE) of leave-one-out cross validation and external validation were 1.77 and 1.23, respectively; for the ANN method, they were 1.65 and 1.16. A quantitative relationship between the MDEV index and the Tb of PCDD/Fs was demonstrated, and both MLR and ANN are practicable for modeling it. The developed MLR and ANN models can be used to predict the Tb of PCDD/Fs; accordingly, the Tb of each PCDD/F was predicted with them.
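A small sketch of the MLR modelling with leave-one-out cross-validation and an RMSRE score follows; random numbers stand in for the MDEV descriptors and boiling points, so it demonstrates only the validation mechanics, not the study's model.

```python
# A minimal sketch of MLR with leave-one-out cross validation, scored
# by the root mean square relative error (RMSRE, in %). Synthetic data
# stand in for MDEV descriptors and boiling points.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((75, 2))                 # stand-in descriptors
tb = 400.0 + 60.0 * X[:, 0] + 20.0 * X[:, 1] + rng.normal(0, 5, 75)

A = np.column_stack([np.ones(len(X)), X])        # design matrix with bias
pred = np.empty_like(tb)
for i in range(len(tb)):                         # leave one sample out
    mask = np.arange(len(tb)) != i
    coef, *_ = np.linalg.lstsq(A[mask], tb[mask], rcond=None)
    pred[i] = A[i] @ coef

rmsre = np.sqrt(np.mean(((pred - tb) / tb) ** 2)) * 100.0
print(f"LOO RMSRE = {rmsre:.2f} %")
```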
Abstract:
This study assessed the accuracy and efficiency of the Point-Centered Quarter Method (PCQM) using different numbers of individuals per sampled area, at 28 quarter points in an Araucaria forest in southern Paraná, Brazil. Three variations of the PCQM, differing in the number of individual trees sampled, were compared: the standard PCQM (SD-PCQM), with four sampled individuals per point (one in each quarter); a first variant (VAR1-PCQM), with eight sampled individuals per point (two in each quarter); and a second variant (VAR2-PCQM), with 16 sampled individuals per point (four in each quarter). Thirty-one species of trees were recorded by the SD-PCQM method, 48 by VAR1-PCQM and 60 by VAR2-PCQM. The completeness of the vegetation census and the diversity index increased with the number of individuals considered per quarter, indicating that VAR2-PCQM was the most accurate and efficient method compared with VAR1-PCQM and SD-PCQM.
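For reference, the classical PCQM density estimate is 1/d² for the mean point-to-tree distance d over all quarters (Cottam & Curtis); the sketch below uses invented distances for 28 points, not the study's measurements.

```python
# A minimal sketch of the standard PCQM density estimate: with mean
# nearest-tree distance d (m) over all quarters, density is 1/d^2
# trees per square metre. Distances are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
dist = rng.gamma(shape=4.0, scale=1.2, size=(28, 4))  # 28 points x 4 quarters

d_mean = dist.mean()                    # mean distance over all quarters
density = 1.0 / d_mean**2               # trees per m^2
print(f"{density * 10_000:.0f} trees/ha")
```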
Abstract:
Tool center point calibration is a known problem in industrial robotics. The major focus of academic research is to enhance the accuracy and repeatability of next-generation robots; however, operators of currently available robots work within the limits of the robot's repeatability and require calibration methods suitable for these basic applications. This study was conducted in association with Stresstech Oy, which provides solutions for manufacturing quality control. Their sensor, based on the Barkhausen noise effect, requires accurate positioning, and this accuracy requirement gives rise to a tool center point calibration problem when measurements are executed with an industrial robot. Multiple options for automatic tool center point calibration are available on the market: manufacturers provide customized calibrators for most robot types and tools. With the handmade sensors and multiple robot types that Stresstech uses, this would require a great deal of labor. This thesis introduces a calibration method that is suitable for any robot with two free digital input ports. It builds on the traditional approach of using a light barrier to detect the tool in the robot coordinate system, but utilizes two parallel light barriers to simultaneously measure and detect the center axis of the tool. The rotations about two axes are defined by this center axis; the remaining rotation, about the Z-axis, is calculated for tools whose widths along the X- and Y-axes differ. The results indicate that the method is suitable for calibrating the geometric tool center point of a Barkhausen noise sensor. In the repeatability tests, a standard deviation within the robot's repeatability was obtained. The Barkhausen noise signal was also evaluated after recalibration, and the results indicate correct calibration. However, future studies should be conducted with a more accurate manipulator, since the method employs the robot itself as a measuring device.
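A minimal geometric sketch of the two-barrier idea follows: the two detected points where the tool axis pierces the parallel beams define the axis direction, from which tilt angles about two axes can be derived. The coordinates and the sign conventions are illustrative assumptions, not the thesis's implementation.

```python
# A minimal sketch of recovering the tool axis from two parallel light
# barriers: the two detected crossing points define the axis direction,
# and tilt angles about X and Y follow from that direction. Sign
# conventions depend on the robot controller and are assumed here.
import numpy as np

p_lower = np.array([0.012, -0.008, 0.000])   # detected axis point, beam 1 (m)
p_upper = np.array([0.015, -0.002, 0.050])   # detected axis point, beam 2 (m)

axis = p_upper - p_lower
axis /= np.linalg.norm(axis)                 # unit direction of tool axis

tilt_x = np.arctan2(axis[1], axis[2])        # tilt toward Y (about X axis)
tilt_y = np.arctan2(axis[0], axis[2])        # tilt toward X (about Y axis)
print(np.degrees([tilt_x, tilt_y]))
```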
Abstract:
The starting point of this dissertation is a method developed by V. Maz'ya for approximating a given function f : R^n → R by a linear combination f_h of radial, smooth, exponentially decaying basis functions which, in contrast to splines, form only an approximate partition of unity and thus define a procedure that does not converge as h → 0. This method became known under the name Approximate Approximations. It turns out, however, that this lack of convergence is irrelevant in practice, since the error between f and the approximation f_h can be tuned, via certain parameters, to lie below the machine precision of today's computers. Moreover, the method has great advantages in the numerical solution of Cauchy problems of the form Lu = f with a suitable linear partial differential operator L in R^n. If the right-hand side f is approximated by f_h, explicit formulas for the corresponding approximate volume potentials u_h can be given in many cases, involving only a single one-dimensional integration (e.g. the error function). The method developed by Maz'ya had not previously been used for the numerical solution of boundary value problems, apart from heuristic and experimental investigations of the so-called boundary point method. This is where the dissertation comes in. On the basis of radial basis functions, a new approximation method is developed which carries the advantages of Maz'ya's method for Cauchy problems over to the numerical solution of boundary value problems. As representative cases, the interior Dirichlet problem for the Laplace equation and for the Stokes equations in R^2 is treated, with convergence analyses carried out and error estimates given for each of the individual approximation steps.
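A minimal 1D sketch of such a Gaussian quasi-interpolant is given below; it uses the standard approximate-approximations formula with shape parameter D, whose saturation error of order exp(-π²D) can be pushed below machine precision even though the scheme does not converge as h → 0. The test function and grid are placeholders.

```python
# A minimal sketch of Maz'ya's approximate approximation in 1D:
# M_h f(x) = (pi*D)^(-1/2) * sum_m f(m*h) * exp(-(x - m*h)^2 / (D*h^2)).
# Larger D shrinks the non-convergent saturation term exp(-pi^2 * D).
import numpy as np

def quasi_interpolant(f, h, D, x):
    m = np.arange(np.floor(x.min() / h) - 20, np.ceil(x.max() / h) + 21)
    nodes = m * h                         # uniform grid, padded at the ends
    w = np.exp(-(x[:, None] - nodes[None, :]) ** 2 / (D * h**2))
    return (np.pi * D) ** -0.5 * w @ f(nodes)

f = lambda t: np.sin(3.0 * t)
x = np.linspace(0.0, 1.0, 200)
for D in (1.0, 2.0, 4.0):
    err = np.max(np.abs(quasi_interpolant(f, 0.01, D, x) - f(x)))
    print(f"D = {D}: max error = {err:.2e}")
```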