973 results for "Correction method"


Relevance:

100.00%

Publisher:

Abstract:

In correlation filtering we attempt to remove the component of the aeromagnetic field that is closely related to the topography. The magnetization vector is assumed to be spatially variable, but it can be estimated successively under the additional assumption that the magnetic component due to topography is uncorrelated with the magnetic signal of deeper origin. The correlation filtering was tested on a synthetic example, where the filtered field compares very well with the known signal of deeper origin. We have also applied the method to real data from the south Indian shield. The performance of correlation filtering is demonstrated to be superior in situations where the direction of magnetization is variable, for example where remanent magnetization is dominant.
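The heart of the approach, regressing out the topography-correlated component under the uncorrelatedness assumption, can be sketched as follows. This is a minimal 1-D illustration, not the authors' exact estimator: the moving-window least-squares scale factor stands in for the successively estimated, spatially variable magnetization, and `b_topo` is assumed to be precomputed from a trial magnetization.

```python
import numpy as np

def correlation_filter(b_obs, b_topo, window=64):
    """Remove the topography-correlated part of an aeromagnetic profile.

    b_obs  : observed total-field anomaly along a flight line
    b_topo : anomaly predicted from topography with a trial magnetization
    window : moving-window half-width; the local scale factor plays the
             role of the spatially variable magnetization estimate
    """
    b_obs, b_topo = np.asarray(b_obs, float), np.asarray(b_topo, float)
    deep = np.empty_like(b_obs)
    for i in range(len(b_obs)):
        lo, hi = max(0, i - window), min(len(b_obs), i + window + 1)
        o, t = b_obs[lo:hi], b_topo[lo:hi]
        # Local regression isolates the topographic component because the
        # deeper signal is assumed uncorrelated with it inside the window.
        alpha = np.dot(o, t) / max(np.dot(t, t), 1e-12)
        deep[i] = b_obs[i] - alpha * b_topo[i]
    return deep
```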

Relevance:

100.00%

Publisher:

Abstract:

In this paper, a pressure correction algorithm for computing incompressible flows is modified and implemented on unstructured Chimera grids. The Schwarz method is used to couple the solutions of different sub-domains. A new interpolation is proposed to ensure consistency between primary variables and auxiliary variables. Other important issues, such as global mass conservation and the order of accuracy of the interpolations, are also discussed. Two numerical simulations are performed: a steady case, the lid-driven cavity, and an unsteady case, flow around a circular cylinder. The results demonstrate very good performance of the proposed scheme on unstructured Chimera grids: it prevents decoupling of the pressure field in the overlapping region and requires only minor modification of an existing unstructured Navier–Stokes (NS) solver. The numerical experiments show the reliability and potential of the method when applied to practical problems.
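The Schwarz coupling is easiest to see on a toy problem. The sketch below runs an alternating Schwarz iteration for the pressure-correction Poisson equation on two overlapping 1-D subdomains; the interface values exchanged each sweep play the role of the interpolated auxiliary variables on the Chimera fringe cells. Everything here (the 1-D setting, dense direct solves) is an illustrative simplification of the paper's unstructured setting.

```python
import numpy as np

def solve_poisson(f, h, left, right):
    """Direct solve of -p'' = f on one subdomain with Dirichlet ends."""
    n = len(f)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    b = np.array(f, float)
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

def schwarz_pressure(f, h, overlap=8, iters=30):
    """Alternating Schwarz iteration on two overlapping subdomains:
    each sweep re-solves one subdomain using the other's latest
    solution as its interface (fringe) value."""
    n = len(f)
    mid = n // 2
    p = np.zeros(n)                      # homogeneous outer boundaries
    for _ in range(iters):
        p[1:mid + overlap] = solve_poisson(
            f[1:mid + overlap], h, p[0], p[mid + overlap])
        p[mid - overlap:n - 1] = solve_poisson(
            f[mid - overlap:n - 1], h, p[mid - overlap - 1], p[n - 1])
    return p
```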

Relevance:

100.00%

Publisher:

Abstract:

In this paper, an unstructured Chimera mesh method is used to compute incompressible flow around a rotating body. To implement the pressure correction algorithm on unstructured overlapping sub-grids, a novel interpolation scheme for the pressure correction is proposed. This indirect interpolation scheme ensures tight coupling of the pressure between sub-domains. A moving-mesh finite volume approach is used to treat the rotating sub-domain, and the governing equations are formulated in an inertial reference frame. Since the mesh surrounding the rotating body undergoes only solid-body rotation and the background mesh remains stationary, no mesh deformation is encountered in the computation. Because an inertial frame is used, no tensorial transformation of the velocity is needed. Three numerical simulations are performed: flow over a fixed circular cylinder, flow over a rotating circular cylinder, and flow over a rotating elliptic cylinder. These examples demonstrate the capability of the scheme in handling moving boundaries, and the numerical results are in good agreement with experimental and computational data in the literature.
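The key geometric ingredient, a sub-grid that rotates rigidly while the background grid stays fixed, is simple to express in code. The sketch below (my illustration, 2-D for brevity) updates the node coordinates of the rotating sub-grid and supplies the mesh velocity needed by the moving-mesh flux terms, all in the inertial frame, so no velocity transformation is involved.

```python
import numpy as np

def rotate_subgrid(nodes, center, omega, dt):
    """Rigid-body rotation of the sub-grid surrounding the body:
    every node turns by the same angle about the rotation center,
    so cell shapes (and mesh quality) are preserved exactly.

    nodes  : (N, 2) node coordinates in the inertial frame
    center : (2,) rotation center
    omega  : angular velocity [rad/s]
    dt     : time-step size
    """
    theta = omega * dt
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return center + (nodes - center) @ R.T

def grid_velocity(nodes, center, omega):
    """Mesh velocity for the moving-mesh flux terms, v = omega x r,
    expressed directly in the inertial frame."""
    r = nodes - center
    return omega * np.column_stack((-r[:, 1], r[:, 0]))
```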

Relevance:

100.00%

Publisher:

Abstract:

Sound waves are propagating pressure fluctuations that are typically several orders of magnitude smaller than the pressure variations in the flow field that account for flow acceleration. Moreover, these fluctuations travel at the speed of sound in the medium rather than being transported as a fluid quantity. Because of these two properties, the Reynolds-averaged Navier-Stokes (RANS) equations do not resolve the acoustic fluctuations, while direct numerical simulation of turbulent flow remains a prohibitively expensive tool for noise analysis. This paper proposes the acoustic correction method, an alternative and affordable tool based on a modified defect correction concept, which leads to an efficient algorithm for computational aeroacoustics and noise analysis.
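The defect correction concept underlying the method can be stated generically: a cheap, stable low-order solve is driven by the residual of an accurate high-order operator. The sketch below shows that generic iteration; in the paper's setting the low-order solve would play the role of the affordable flow solver while the residual carries the small acoustic scales, but this pairing is my reading of the abstract, not the paper's exact algorithm.

```python
import numpy as np

def defect_correction(apply_high, solve_low, b, x0, n_iter=10):
    """Generic defect-correction iteration:
        x_{k+1} = x_k + L_low^{-1} (b - L_high x_k)
    apply_high(x) evaluates the accurate (high-order) operator;
    solve_low(r) approximately inverts a cheap, stable operator."""
    x = np.array(x0, float)
    for _ in range(n_iter):
        defect = b - apply_high(x)     # residual of the accurate operator
        x = x + solve_low(defect)      # inexpensive correction solve
    return x
```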

Relevance:

100.00%

Publisher:

Abstract:

An aerodynamic sound source extraction from a general flow field is applied to a number of model problems and to a problem of engineering interest. The extraction technique is based on a variable decomposition, which results in an acoustic correction method, of each of the flow variables into a dominant flow component and a perturbation component. The dominant flow component is obtained with a general-purpose Computational Fluid Dynamics (CFD) code which uses a cell-centred finite volume method to solve the Reynolds-averaged Navier–Stokes equations. The perturbations are calculated from a set of acoustic perturbation equations with source terms extracted from unsteady CFD solutions at each time step via a staggered dispersion-relation-preserving (DRP) finite-difference scheme. Numerical experiments include (1) propagation of a 1-D acoustic pulse without mean flow, (2) propagation of a 2-D acoustic pulse with and without mean flow, (3) reflection of an acoustic pulse from a flat plate with mean flow, and (4) flow-induced noise generated by an unsteady laminar flow past a 2-D cavity. The computational results demonstrate the accuracy of the source extraction technique for model problems and illustrate its feasibility for more complex aeroacoustic problems.
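For reference, a collocated version of a DRP first-derivative stencil is easy to write down. The coefficients below are the widely cited Tam–Webb 7-point DRP values from the literature, not numbers taken from this paper, and the paper's staggered formulation would differ in detail.

```python
import numpy as np

# Widely cited Tam & Webb 7-point DRP coefficients for d/dx
# (dispersion-relation optimised, formally 4th-order accurate).
A1, A2, A3 = 0.79926643, -0.18941314, 0.02651995

def drp_derivative(f, dx):
    """First derivative of a periodic 1-D field using the DRP stencil."""
    return (A1 * (np.roll(f, -1) - np.roll(f, 1))
            + A2 * (np.roll(f, -2) - np.roll(f, 2))
            + A3 * (np.roll(f, -3) - np.roll(f, 3))) / dx

# The acoustic sources feed on the decomposition q = q_bar + q_prime:
# q_prime is extracted as q_cfd(t) - q_bar at each CFD time step.
```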

Relevance:

100.00%

Publisher:

Abstract:

The most significant radiation field nonuniformity is the well-known Heel effect. This nonuniform beam effect has a negative influence on the results of computer-aided diagnosis of mammograms, which is frequently used for early cancer detection. This paper presents a method to correct every pixel in the mammographic image according to the excess or lack of radiation to which it has been exposed as a result of this effect. The simulation method calculates the intensities at all points of the image plane; in the simulated image, the percentage of radiation received at each point takes the center of the field as reference. In the digitized mammogram, the percentages of the optical density of all pixels of the analyzed image are also calculated. The Heel effect produces a Gaussian distribution around the anode-cathode axis and a logarithmic distribution parallel to this axis. These characteristic distributions are used to determine the center of the radiation field as well as the cathode-anode axis, allowing the correlation between the two data sets to be determined automatically. The measurements obtained with the proposed method differ on average by 2.49 mm in the direction perpendicular to the anode-cathode axis, and by 2.02 mm parallel to it, from those of commercial equipment. The method eliminates around 94% of the Heel effect in the radiological image, so that objects reflect their true x-ray absorption. The method was evaluated with experimental data from known objects, but the evaluation could also be performed with clinical and digital images.
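In flat-field terms, the correction amounts to dividing each pixel by the simulated relative intensity it received, with the field centre as the 100% reference. The sketch below follows the Gaussian/logarithmic profile shapes named in the abstract; the specific parametrisation (`sigma`, `k`) and the signed-log form are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def heel_correction(image, center, sigma, k):
    """Rescale each pixel by its simulated relative exposure.

    center : (row, col) of the radiation field centre
    sigma  : width of the Gaussian fall-off across the anode-cathode axis
    k      : strength of the logarithmic trend along the axis
    (both parameters are illustrative, to be fitted in practice)
    """
    rows, cols = np.indices(image.shape, dtype=float)
    para = rows - center[0]   # distance along the anode-cathode axis
    perp = cols - center[1]   # distance perpendicular to the axis
    # Gaussian fall-off across the axis; a signed logarithmic trend
    # along it models the intensity drop toward the anode side.
    rel = (np.exp(-0.5 * (perp / sigma) ** 2)
           * (1.0 + k * np.sign(para) * np.log1p(np.abs(para))))
    rel /= rel[int(center[0]), int(center[1])]   # 100% at field centre
    return image / np.clip(rel, 1e-3, None)
```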

Relevance:

70.00%

Publisher:

Abstract:

Document images fed into an Optical Character Recognition (OCR) system may be skewed, whether through improper feeding of the document into the scanner or because of a faulty scanner. In this paper, we propose a skew detection and correction method for document images. We exploit the way the inherent randomness of the horizontal projection profile of a text-block image varies with the skew of the image. The proposed algorithm has proved to be very robust and time-efficient; the entire process takes less than a second on a 2.4 GHz Pentium IV PC.
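A common way to realise this idea is to score candidate rotation angles by how peaked the horizontal projection profile becomes: at the true deskew angle the text rows align and the profile is least "random". The sketch below uses profile variance as the randomness surrogate; the paper's exact measure may differ.

```python
import numpy as np
from scipy.ndimage import rotate

def detect_skew(binary_img, max_angle=10.0, step=0.25):
    """Estimate document skew from horizontal projection profiles:
    the row-sum profile is most strongly peaked (maximum variance)
    at the correct deskew angle."""
    best_angle, best_score = 0.0, -np.inf
    for angle in np.arange(-max_angle, max_angle + step, step):
        rotated = rotate(binary_img, angle, reshape=False, order=0)
        profile = rotated.sum(axis=1)      # horizontal projection
        score = profile.var()
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle

# deskewed = rotate(binary_img, detect_skew(binary_img), reshape=False)
```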

Relevance:

70.00%

Publisher:

Abstract:

In the context of SPH-based simulations of impact dynamics, an optimised and automated form of the acceleration correction algorithm (Shaw and Reid, 2009a) is developed so as to remove spurious high-frequency oscillations in computed responses whilst retaining the stabilizing characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. A rational framework for characterising the original acceleration correction method is first set up. This is followed by an optimised version of the method, wherein the strength of the correction term in the momentum balance and energy equations is optimised. For the first time, this leads to an automated procedure for arriving at the artificial viscosity term. In particular, this is achieved by taking a spatially varying, response-dependent support size for the kernel function through which the correction term is computed. The optimum support size is deduced by minimising the (spatially localised) total variation of the high-frequency oscillation in the acceleration term with respect to its (local) mean. The derivation of the method, its advantages over the heuristic method, and issues related to its numerical implementation are discussed in detail.
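A crude stand-in for the selection criterion can be written directly from the abstract's description: among candidate support sizes, pick the one whose corrected acceleration shows the least localised oscillation about its local mean. Both `oscillation_measure` and the placeholder `apply_correction` below are illustrative reconstructions, not the paper's functional.

```python
import numpy as np

def oscillation_measure(a, half_width):
    """Accumulated magnitude of the high-frequency oscillation of the
    acceleration about its windowed local mean (a simplified surrogate
    for the paper's spatially localised total variation)."""
    total = 0.0
    for i in range(len(a)):
        lo, hi = max(0, i - half_width), min(len(a), i + half_width + 1)
        total += abs(a[i] - a[lo:hi].mean())
    return total

def optimal_support(a, candidates, apply_correction):
    """Select the kernel-support size minimising the residual
    oscillation; apply_correction(a, h) stands in for recomputing the
    acceleration-correction term with support size h."""
    scores = [oscillation_measure(apply_correction(a, h), max(int(h), 1))
              for h in candidates]
    return candidates[int(np.argmin(scores))]
```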

Relevance:

70.00%

Publisher:

Abstract:

In this paper, a beamforming correction for identifying dipole sources by means of phased microphone array measurements is presented and implemented numerically and experimentally. Conventional beamforming techniques, which are developed for monopole sources, can lead to significant errors when applied to reconstruct dipole sources. A previous correction technique for microphone signals is extended to account for both source location and source power for two-dimensional microphone arrays. The new dipole-beamforming algorithm is developed by modifying the basic source definition used for beamforming. This technique improves on the previous signal correction method and yields a beamformer applicable to sources suspected to be dipole in nature. Numerical simulations validate the capability of this beamformer to recover ideal dipole sources. The beamforming correction is applied to the identification of realistic aeolian-tone dipoles and shows an improvement in array performance when estimating dipole source powers.
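Modifying the source definition shows up directly in the steering vector. The sketch below weights the monopole Green's function by the dipole directivity cos(theta) for an assumed dipole axis; this captures the spirit of the modification but is not the paper's exact formulation.

```python
import numpy as np

def dipole_beamform(csm, mics, grid, k, dipole_axis):
    """Frequency-domain beamforming with a dipole steering vector.

    csm         : (M, M) cross-spectral matrix of microphone signals
    mics, grid  : (M, 3) microphone and (G, 3) scan-point positions
    k           : acoustic wavenumber at the analysis frequency
    dipole_axis : assumed unit vector of the dipole orientation
    """
    power = np.empty(len(grid))
    for i, xs in enumerate(grid):
        d = mics - xs                        # source-to-microphone vectors
        r = np.linalg.norm(d, axis=1)
        cos_t = (d @ dipole_axis) / r        # dipole directivity factor
        g = cos_t * np.exp(-1j * k * r) / r  # directivity-weighted monopole
        g /= np.linalg.norm(g)
        power[i] = np.real(np.conj(g) @ csm @ g)
    return power
```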

Relevance:

70.00%

Publisher:

Abstract:

This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed, and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented, together with a reliability interval that reflects the error and uncertainty introduced by the recording and digitization process. The data are processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records; it induces little error, and the digitization noise is adequately bounded. Filtering is kept to a minimum, and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio; however, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data, where noise and error prevail, are then partly altered but not removed, and statistical criteria guide the choice of appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since it preserves the whole frequency range of the record, it should prove very useful in studying the long-period dynamics of local geology and structures.
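The frequency-domain processing rests on the elementary relations V(w) = A(w)/(iw) and D(w) = -A(w)/w^2. A minimal sketch follows, with a simple low-frequency cutoff standing in for the thesis's statistically chosen cutoffs and spectral substitution.

```python
import numpy as np

def integrate_in_frequency(acc, dt, f_low=0.05):
    """Frequency-domain integration of an accelerogram:
    V(w) = A(w)/(iw), D(w) = -A(w)/w^2.

    A gentle high-pass (f_low, illustrative) suppresses the near-zero
    frequencies where noise dominates and the division blows up -- the
    same region the thesis treats with its noise filters and spectral
    substitution."""
    n = len(acc)
    A = np.fft.rfft(acc)
    f = np.fft.rfftfreq(n, dt)
    w = 2.0 * np.pi * f
    mask = f >= f_low
    V = np.zeros_like(A)
    V[mask] = A[mask] / (1j * w[mask])
    D = np.zeros_like(A)
    D[mask] = -A[mask] / w[mask] ** 2
    return np.fft.irfft(V, n), np.fft.irfft(D, n)
```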

Relevance:

70.00%

Publisher:

Abstract:

Correction of spectral overlap interference in inductively coupled plasma atomic emission spectrometry by factor analysis is attempted. For the spectral overlap of two known lines, a data matrix can be composed from one or two pure spectra and a spectrum of the mixture. The data matrix is decomposed into a spectra matrix and a concentration matrix by target transformation factor analysis. The concentration of the component of interest in a binary mixture is obtained from the concentration matrix, and interference from the other component is eliminated. The method is applied to correct the spectral interference of yttrium in the determination of copper and aluminium, with satisfactory results. It may also be applied to correcting spectral overlap interference involving more than two lines. Like other methods of correcting spectral interferences, factor analysis can only be used for additive spectral overlap. Results obtained from measurements on copper/yttrium mixtures with different levels of added white noise show that random errors in the measurement data do not significantly affect the results of the correction method.
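The additive-overlap assumption means the mixture spectrum is a linear combination of the pure-component spectra. A plain least-squares fit, shown below for clarity, resolves the same linear model that the paper treats with target transformation factor analysis; variable names are illustrative.

```python
import numpy as np

def correct_overlap(mixture, pure_spectra):
    """Least-squares resolution of additive spectral overlap.

    mixture      : (n_wavelengths,) measured spectrum of the mixture
    pure_spectra : (n_wavelengths, n_components) pure-component spectra

    The fitted coefficients are proportional to the component
    concentrations, so the interfering component's contribution
    is separated from the analyte's."""
    coeffs, *_ = np.linalg.lstsq(pure_spectra, mixture, rcond=None)
    return coeffs

# Example (hypothetical arrays): strip the yttrium contribution from a
# copper line region.
# cu, y = correct_overlap(measured, np.column_stack([cu_pure, y_pure]))
```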

Relevance:

70.00%

Publisher:

Abstract:

While it beneficially decreases the necessary incision size, arthroscopic hip surgery increases surgical complexity owing to the loss of direct joint visibility. To ease this difficulty, a computer-aided mechanical navigation system was developed to present the location of the surgical tool relative to the patient's hip joint. A preliminary study reduced the position error of the tracking linkage in a limited set of static testing trials. In this study, a correction method, comprising a rotational correction factor and a length correction function, was developed through more in-depth static testing. The correction method was then applied to additional static and dynamic testing trials to evaluate its effectiveness. For static testing, the position error decreased from an average of 0.384 inches to 0.153 inches, an error reduction of 60.5%. The three parameters used to quantify error reduction in dynamic testing did not show consistent results. The vertex coordinates achieved a 29.4% error reduction, yet with large variation in the upper vertex. The triangular area error was reduced by 5.37%, though inconsistently across the five dynamic trials. The vertex-angle error increased, indicating shape distortion introduced by the correction method. While the established correction method effectively and consistently reduced position error in static testing, it did not produce consistent results in dynamic trials. More dynamic parameters should be explored to quantify error reduction in dynamic testing, and a more in-depth dynamic testing methodology should be employed to further improve the accuracy of the computer-aided navigation system.
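From the abstract's description, the correction has two parts: a rotational correction factor and a length correction function. The sketch below is a speculative 2-D reconstruction of how such a correction could be applied to a measured tool-tip position; the functional form and the example coefficients are hypothetical, not the thesis's calibrated model.

```python
import numpy as np

def corrected_tip_position(p_measured, base, dtheta, length_fn):
    """Apply a two-part static correction to a tracked tip position:
    a rotational correction (dtheta, about the linkage base) followed
    by a length correction function acting on the radial distance.
    Hypothetical reconstruction of the abstract's correction method."""
    v = p_measured - base
    c, s = np.cos(dtheta), np.sin(dtheta)
    v = np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])
    r = np.linalg.norm(v)
    return base + v * (length_fn(r) / r)

# e.g. a linear length correction fitted from static trials (hypothetical):
# length_fn = lambda r: 0.98 * r + 0.05
```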

Relevance:

70.00%

Publisher:

Abstract:

This thesis discusses correction methods that compensate for variations in lighting conditions in colour image and video applications. Such variations often cause Computer Vision algorithms that use colour features to describe objects to fail. Three research questions define the framework of the thesis. The first addresses the similarities in photometric behaviour between images of adjacent surfaces. Based on an analysis of the image formation model in dynamic situations, the thesis proposes a model, called the Quotient Relational Model of Regions, that predicts the colour variations of a region of an image from the variations of the surrounding regions. The model is valid when the light sources illuminate all of the surfaces included in it; when these surfaces are close to each other and have similar orientations; and when they are primarily Lambertian. Under these circumstances, the photometric responses of the regions are related by a linear combination. No previous work proposing such a relational model was found in the scientific literature. The second question goes a step further and asks whether those similarities can be used to correct unknown photometric variations in an unknown region from known adjacent regions. A method called Linear Correction Mapping is proposed that gives an affirmative answer under the circumstances characterised above. A training stage is required to determine the parameters of the model. The method, initially formulated for a single camera, is extended to multi-camera architectures with non-overlapping fields of view; only a few image samples of the same object captured by all the cameras are needed. Furthermore, the correction mapping accounts both for light variations and for changes in the camera exposure settings. Every image correction method fails when the image of the object to be corrected is overexposed or its signal-to-noise ratio is very low. The third question therefore asks whether the acquisition process can be controlled to obtain an optimal exposure under uncontrolled lighting conditions. A Camera Exposure Control method is proposed that maintains a suitable exposure provided the light variations remain within the dynamic range of the camera. Each proposed method was evaluated individually. The experimental methodology consisted of first selecting scenarios that cover representative situations in which the methods are theoretically valid. Linear Correction Mapping was validated in three object re-identification applications (vehicles, faces and persons) that use the objects' colour distributions as features. Camera Exposure Control was tested in an outdoor parking scenario. In addition, several performance indicators were defined to compare the results objectively with other relevant state-of-the-art correction and auto-exposure methods. The evaluation showed that the proposed methods outperform the compared ones in most situations. Based on these results, the answers to the research questions are affirmative in limited circumstances: the hypotheses on prediction, correction based on it, and auto exposure are feasible in the situations identified in the thesis, but cannot be guaranteed in general. The work also raises new questions and scientific challenges, which are highlighted as future research.
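A minimal sketch of the Linear Correction Mapping idea, under the thesis's assumptions (adjacent, similarly oriented, mostly Lambertian regions whose photometric responses are linearly related): learn a linear map from the colour shifts of known regions, then apply it to the unknown region. Variable names and the plain least-squares training are illustrative.

```python
import numpy as np

def fit_linear_correction(known_before, known_after):
    """Training stage: learn the matrix M mapping the colour responses
    of known adjacent regions under the old illumination to their
    responses under the new one.

    known_before, known_after : (n_samples, 3) mean RGB of the known
    regions before/after the photometric change."""
    M, *_ = np.linalg.lstsq(known_before, known_after, rcond=None)
    return M

def apply_correction(colors, M):
    """Predict (and hence compensate) the unknown region's colours with
    the same linear map, exploiting the assumed linear photometric
    relation between neighbouring regions."""
    return colors @ M
```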

Relevance:

70.00%

Publisher:

Abstract:

Analysis of heart rate variability (HRV) uses time series of the intervals between successive heartbeats to assess autonomic regulation of the cardiovascular system. These series are obtained from the electrocardiogram (ECG), whose analysis can be affected by different types of artifacts, leading to incorrect interpretation of the HRV signals. The classic approach to dealing with these artifacts is to apply correction methods, some based on interpolation, substitution or statistical techniques. However, few studies show the accuracy and performance of these correction methods on real HRV signals. This study aims to determine the performance of several linear and non-linear correction methods on HRV signals with induced artifacts, by quantifying their linear and nonlinear HRV parameters. As part of the methodology, ECG signals of rats recorded by telemetry were used to generate real, error-free heart rate variability series. Missing points (beats) were then simulated in these series in different quantities, emulating a real experimental situation as closely as possible. To compare recovery efficiency, deletion (DEL), linear interpolation (LI), cubic spline interpolation (CI), moving average window (MAW) and nonlinear predictive interpolation (NPI) were used as correction methods for the series with induced artifacts. The accuracy of each correction method was assessed by measuring the mean value of the series (AVNN), standard deviation (SDNN), root mean square of the differences between successive heartbeats (RMSSD), Lomb's periodogram (LSP), detrended fluctuation analysis (DFA), multiscale entropy (MSE) and symbolic dynamics (SD) on each HRV signal with and without artifacts. The results show that at low levels of missing points all correction techniques perform very similarly, with very close values for each HRV parameter. At higher levels of loss, however, only the NPI method yields HRV parameters with low error values and few significant differences from the values calculated for the same signals without missing points.
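Of the five compared methods, cubic spline interpolation (CI) is the easiest to sketch: drop the flagged beats and re-estimate them from the surrounding valid intervals. A minimal version, with illustrative variable names:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def correct_rr_series(t, rr, missing_idx):
    """Cubic-spline correction of an RR series with corrupted beats.

    t           : beat times [s] (monotonically increasing)
    rr          : RR intervals [ms]
    missing_idx : indices of the missing/ectopic samples
    """
    valid = np.setdiff1d(np.arange(len(rr)), missing_idx)
    spline = CubicSpline(t[valid], rr[valid])
    rr_corr = np.array(rr, float)
    rr_corr[missing_idx] = spline(t[missing_idx])  # re-estimate gaps
    return rr_corr
```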