981 results for noise filtering
Abstract:
The seismic survey is the most effective geophysical prospecting method in oil and gas exploration and development. Because the structure and lithology of geological bodies are now increasingly complex, seismic sections must have high resolution if the targets are to be described accurately, and a high signal-to-noise ratio is the precondition of high resolution. To improve the signal-to-noise ratio, we propose four methods for eliminating random noise, based on a detailed analysis of noise elimination by prediction filtering in the f-x-y domain; each method addresses a different shortcoming of that technique. When the noise is weak and the filter is long, the noise contributes little to the filter, but when the noise is strong and the filter is short, its contribution is significant. This noise contribution makes the prediction operators inaccurate, and inaccurate operators produce incorrect results. We therefore propose prediction filtering by inversion in the f-x-y domain. The method assumes that the seismic signal consists of a predictable part and an unpredictable part, and a priori information about the prediction operator is introduced into the objective function. This removes the response of the noise from the filtering operator, ensures that the operators are accurate, and effectively improves the filtering results. When the dips of the strata are very complex, the data are usually divided into rectangular patches in order to obtain the prediction operators, and these patches need significant overlap to give a good result. The overlap means the data are used repeatedly, which effectively increases the data size; since the computational cost grows with the data size, efficiency suffers. Moreover, the operators obtained by conventional prediction filtering in the f-x-y domain cannot describe dip variations when the dips are very complex, so the filtered results contain spurious (aliased) events, and each patch is treated as an independent problem. To address these problems we propose space-varying prediction filtering in the f-x-y domain, in which the prediction operators change with spatial position, eliminating false events in the result. A priori information about the prediction operator is introduced into the objective function, so that estimating the operators of the patches is no longer a set of independent problems but a single coupled one; this avoids the repeated use of data and improves computational efficiency. The random noise removed by prediction filtering in the f-x-y domain is Gaussian noise, and the conventional method cannot effectively eliminate non-Gaussian noise. A prediction filtering method using the lp norm (especially p=1) in the f-x-y domain, described in this paper, can remove non-Gaussian noise effectively. Finally, when the dips of the strata can be obtained accurately, we propose dip-constrained prediction filtering in the f-x-y domain, which effectively increases computational efficiency and improves the result.
Tests on theoretical models and applications to field data show that the four methods effectively solve the corresponding problems of the conventional method; they are highly practical and their effect is evident.
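For readers unfamiliar with the baseline the thesis builds on, the following is a minimal sketch of conventional f-x prediction filtering on a 2D gather (the thesis works in the f-x-y domain and adds the inversion-based, space-varying, lp-norm, and dip-constrained variants it proposes). The function name, filter length, and damping constant are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def fx_predict_denoise(data, filt_len=5, eps=1e-3):
    """Minimal f-x prediction-filter denoising sketch for a 2D gather (time x traces).

    For each temporal frequency, a damped least-squares forward prediction filter
    of length `filt_len` is estimated along the spatial axis; the predicted part of
    each frequency slice is kept as signal and the residual is treated as random noise.
    """
    nt, nx = data.shape
    spec = np.fft.rfft(data, axis=0)              # one complex spatial series per frequency
    out = np.zeros_like(spec)
    for k in range(spec.shape[0]):
        s = spec[k]
        # Autoregression along space: s[n] ~ sum_j a[j] * s[n-1-j]
        A = np.array([s[n - 1 - np.arange(filt_len)] for n in range(filt_len, nx)])
        b = s[filt_len:nx]
        a = np.linalg.solve(A.conj().T @ A + eps * np.eye(filt_len), A.conj().T @ b)
        pred = s.copy()
        pred[filt_len:] = A @ a                   # predictable (signal) part
        out[k] = pred
    return np.fft.irfft(out, n=nt, axis=0)
```

Linear events map to sums of complex exponentials along space at each frequency, which is why a short spatial prediction filter captures the coherent signal while leaving most of the random noise in the residual.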
Abstract:
This paper expounds the theory and approach of broadband teleseismic body-waveform inversion and develops methods for determining crustal structure. Based on teleseismic P-wave data, the theoretical radial P-wave component is calculated by convolving the vertical P-wave component with a transfer function, and a P-waveform inversion method is built on this relation. Applications show that the approach is effective and stable and that its resolution is high. Accurate, reliable teleseismic P waveforms recorded by CDSN and IRIS are used to obtain lithospheric transfer functions for China and its neighboring regions, and the lithospheric structure of the region is derived by inverting these transfer functions. This yields new knowledge about the deep structure of China and its surroundings and provides reliable seismological evidence for revealing geodynamic evolution processes and for building a theory of continental collision. The major studies are as follows. Two important methods for studying crustal and upper-mantle structure, body-wave travel-time inversion and waveform modeling, are reviewed systematically. Travel-time inversion, based on ray theory, is simple: a preliminary crustal and upper-mantle velocity model can be obtained by 1-D travel-time inversion, which provides the reference model for studying earthquake locations, focal mechanisms, and the fine structure of the crust and upper mantle, while three-dimensional travel-time tomography resolves the large-scale lateral heterogeneity of the crust and upper mantle. Waveform modeling, based on elastic dynamics, fits theoretical seismograms to observed ones; it can explain the details of the waveform and thus uncover the one-dimensional fine structure and lateral variations of the crust and upper mantle, especially the properties of media in zones where ray theory breaks down. Both travel-time inversion and waveform modeling rest on approximations, each with its own advantages and disadvantages, and both provide convincing structural information for elucidating the physical and chemical features and geodynamic processes of the crust and upper mantle. Direct waves, surface waves, and refracted waves have low resolution for seismic velocity transition zones and are inadequate for studying seismic discontinuities. In contrast, converted and reflected waves, which sample the discontinuities directly, must be carefully picked from the seismograms to constrain the velocity transition zones; they can be used to study not only crustal structure but also upper-mantle discontinuities. A number of global and regional discontinuities in the crust and upper mantle play a significant role in understanding their physical and chemical properties and geodynamic processes. Broadband teleseismic P-waveform inversion is then studied in detail.
Teleseismic P waveforms contain information about the source time function, near-source structure, propagation through the mantle, receiver structure, and instrument response. The receiver function is isolated from the teleseismic P waveform by rotating the horizontal components into the ray direction and deconvolving the vertical component from the radial and tangential components of ground motion; the resulting time series is dominated by the local receiver structure and is nearly insensitive to source and deep-mantle effects. The receiver function is a horizontal response that suppresses multiple P reflections while retaining the direct wave and P-to-S converted waves, and it is sensitive to vertical variations of S-wave velocity. The velocity structure beneath a seismic station responds differently on the radial and vertical components of an incident teleseismic P wave. To avoid the limits imposed by a simplified assumption about the vertical response, the receiver-function method is modified: in the frequency domain the transfer function is expressed as the ratio of the radial to the vertical response of the medium to the P wave, and in the time domain the radial synthetic waveform is obtained by convolving the transfer function with the vertical component. To overcome numerical instability, the generalized reflection and transmission coefficient matrix method is used to compute the synthetic waveforms, so that all multiple reflections and converted phases are included. A new inversion scheme, the VFSA-LM method, which combines very fast simulated annealing (VFSA) with damped least-squares inversion (LM), is used in this study; synthetic waveform inversion tests confirm its effectiveness and efficiency. Broadband teleseismic P-waveform inversion is then applied to the lithospheric velocity structure of China and its neighboring regions. From high-quality CDSN and IRIS data we obtain an outline map of the distribution of Asian continental crustal thickness; based on these results, the features of the crustal thickness distribution and the outline of crustal structure beneath the Asian continent are analyzed, and the principal characteristics of the Asian continental crust are summarized: there exist four vast areas of relatively minor variation in crustal thickness, namely the northern, eastern, southern, and central areas of the Asian crust. As a byproduct, earthquake location, a basic issue in seismology, is discussed. Because of the strong trade-off between the assumed origin time and the focal depth and the nonlinearity of the inverse problem, the issue has not been fully settled. To address it, a new location method named SAMS is presented, in which the objective function is the absolute value of the travel-time residuals formed from the arrival times and fast simulated annealing is used for the inversion. Applied to the relocation of the Chi-Chi, Taiwan earthquake of September 21, 1999, the results show that the SAMS method not only reduces the effect of the trade-off between origin time and focal depth but also gives better stability and resolving power. Finally, the inverse Q filtering method for compensating attenuation and frequency dispersion in depth-domain seismic sections is discussed.
Forward and inverse tests on synthetic seismic records show that our depth-domain Q-filtering operator is consistent with wave propagation in absorbing media: it accounts not only for the absorption of the waves by the medium but also for the accompanying deformation law, namely the frequency dispersion of body waves. Two post-stack profiles, each about 60 km long, from a neritic area of China were processed; the results show that after depth-domain Q-filter compensation the wavelets of the middle and deep layers are compressed, the resolution and signal-to-noise ratio are enhanced, and the overall character and energy distribution of the profile are preserved.
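As a rough illustration of the frequency-domain relation used above (the transfer function as the ratio of the radial to the vertical response, and the radial synthetic as its convolution with the vertical component), here is a minimal water-level spectral-division sketch. The water level, Gaussian smoothing, and function names are illustrative assumptions; the thesis computes its synthetics with the generalized reflection/transmission coefficient matrix method, which is not reproduced here.

```python
import numpy as np

def transfer_function(radial, vertical, dt, water_level=0.01, gauss_a=2.5):
    """Estimate a receiver-side transfer function as the stabilized spectral
    ratio R(f)/Z(f), low-pass shaped with a Gaussian."""
    n = len(vertical)
    R = np.fft.rfft(radial, n)
    Z = np.fft.rfft(vertical, n)
    denom = (Z * np.conj(Z)).real
    denom = np.maximum(denom, water_level * denom.max())     # water-level stabilization
    f = np.fft.rfftfreq(n, dt)
    gauss = np.exp(-(2 * np.pi * f) ** 2 / (4 * gauss_a ** 2))
    return np.fft.irfft(R * np.conj(Z) / denom * gauss, n)

def radial_synthetic(transfer, vertical):
    """Radial synthetic = transfer function (circularly) convolved with the vertical component."""
    n = len(vertical)
    return np.fft.irfft(np.fft.rfft(transfer, n) * np.fft.rfft(vertical, n), n)
```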
Abstract:
With the development of oil and gas exploration, continental exploration in China is shifting from structural reservoirs to subtle reservoirs, and the reserves of the subtle reservoirs discovered so far account for more than 60 percent of the discovered oil and gas reserves. Exploration for subtle reservoirs is therefore becoming more and more important and can be regarded as the main direction for future reserve growth. The characteristics of continental sedimentary facies determine the complexity of lithological exploration. Most continental rift basins in East China have entered exploration stages of medium to high maturity; although the quality of the seismic data there is relatively good, these areas are characterized by thin sands, small faults, and limited stratigraphic extent, which demands seismic data of high resolution, so improving the signal-to-noise ratio of the high-frequency components of the data is an important task. In West China the landforms are complex, the exploration targets are deeply buried, the geological structures are complicated, faults are numerous, traps are small, rock properties are poor, high-pressure formations are common, and drilling is difficult; as a result the seismic records have a low signal-to-noise ratio and contain complex types of noise, which requires developing noise-attenuation methods and techniques in both data acquisition and processing. Oil and gas exploration therefore needs high-resolution geophysical techniques to implement the oil-resources strategy of keeping production and reserves stable in East China while increasing production and reserves in West China. A high signal-to-noise ratio is the foundation: high resolution and high fidelity are impossible without it. We focus on noise-attenuation research based on structural analysis for improving the signal-to-noise ratio in complex areas, and propose several methods that truly reflect the geological features: they preserve the geological structures, keep the edges of geological features, and improve the identification of oil and gas traps. The ideas of emphasizing fundamentals, stressing innovation, and valuing application run through the paper. The conventional dip-scanning method, which centers the scanning window on the analysis point, inevitably blurs the edges of geological features such as faults and fractures. To solve this problem we develop a new dip-scanning scheme that scans from an end point toward both sides, and on this basis we propose noise-attenuation methods using coherence-based signal estimation, coherence of the seismic-wave character, and most-homogeneous dip scanning. They preserve the geological character, suppress random noise, and improve the signal-to-noise ratio and resolution. Routine dip scanning operates in the time-space domain; a new dip-scanning method in the frequency-wavenumber domain is also proposed for noise attenuation. It exploits the ability of the f-k domain to distinguish reflection events of different dips, so it can both reduce the noise and recover dip information. We also describe a methodology for studying and developing filtering methods based on differential equations.
It transforms the filtering equations from the frequency or f-k domain into the time or time-space domain and solves them with a finite-difference algorithm. The method does not require the seismic data to be stationary, so the filter parameters can vary at every temporal and spatial point, which enhances the adaptability of the filter, and it is computationally efficient. We further propose a matching-pursuit method for noise suppression, which decomposes a signal into a linear expansion of waveforms selected from a redundant dictionary of functions so as to best match the signal structures; it can extract the effective signal from noisy data and reduce the noise (a minimal sketch follows this abstract). We also introduce a beamforming filtering method for noise elimination; processing of real seismic data shows that it is effective in attenuating multiples, including internal multiples, improving the signal-to-noise ratio and resolution while preserving the fidelity of the effective signals. Tests on theoretical models and applications to real seismic data demonstrate that the methods in this paper effectively suppress random noise, eliminate coherent noise, and improve the resolution of the seismic data; they are highly practical and their effect is evident.
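The following is a minimal, generic matching-pursuit sketch of the decomposition described above: the signal is greedily expanded over a redundant dictionary, the reconstruction from the selected atoms is kept as the effective signal, and the final residual is treated as noise. The dictionary is assumed to be an array of unit-norm atoms; all names, the iteration count, and the stopping rule are illustrative.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=50, tol=1e-6):
    """Greedy matching pursuit: expand `signal` over the columns of `dictionary`
    (unit-norm atoms); return the reconstruction (signal estimate) and residual (noise)."""
    residual = signal.astype(float).copy()
    recon = np.zeros_like(residual)
    for _ in range(n_iter):
        corr = dictionary.T @ residual          # correlation with every atom
        k = np.argmax(np.abs(corr))             # best-matching atom
        if np.abs(corr[k]) < tol:
            break
        recon += corr[k] * dictionary[:, k]     # add its contribution to the estimate
        residual -= corr[k] * dictionary[:, k]  # and remove it from the residual
    return recon, residual
```

For seismic denoising the atoms would typically be time-frequency waveforms (for example Gabor-type atoms), so that the selected terms capture coherent reflections while incoherent noise remains in the residual.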
Abstract:
Given n noisy observations g_i of the same quantity f, it is common practice to estimate f by minimizing the function $\sum_{i=1}^{n}(g_i - f)^2$. From a statistical point of view this corresponds to computing the maximum likelihood estimate under the assumption of Gaussian noise. However, it is well known that this choice leads to results that are very sensitive to the presence of outliers in the data. For this reason it has been proposed to minimize functions of the form $\sum_{i=1}^{n}V(g_i - f)$, where V is a function that increases less rapidly than the square. Several choices for V have been proposed and successfully used to obtain "robust" estimates. In this paper we show that, for a class of functions V, using these robust estimators corresponds to assuming that the data are corrupted by Gaussian noise whose variance fluctuates according to some given probability distribution, which uniquely determines the shape of V.
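As a small worked illustration of the contrast the abstract draws, the sketch below compares the least-squares estimate (the mean) with a robust estimate obtained by minimizing $\sum_{i} V(g_i - f)$ for a Huber-type V via iteratively reweighted least squares. The tuning constant, the IRLS solver, and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def robust_location(g, c=1.345, n_iter=50):
    """Estimate a single quantity f from noisy observations g by minimizing
    sum_i V(g_i - f) with a Huber-type V, via iteratively reweighted least squares."""
    f = np.median(g)                                             # robust starting point
    for _ in range(n_iter):
        r = g - f
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))    # Huber weights
        f_new = np.sum(w * g) / np.sum(w)
        if abs(f_new - f) < 1e-12:
            break
        f = f_new
    return f

g = np.array([1.0, 1.1, 0.9, 1.05, 10.0])   # toy data with one outlier
print(np.mean(g))          # least-squares estimate, pulled toward the outlier
print(robust_location(g))  # robust estimate, much less affected by the outlier
```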
Abstract:
We study the common problem of approximating a target matrix with a matrix of lower rank. We provide a simple and efficient (EM) algorithm for solving weighted low-rank approximation problems, which, unlike simple matrix factorization problems, do not admit a closed-form solution in general. We analyze, in addition, the nature of locally optimal solutions that arise in this context, demonstrate the utility of accommodating the weights in reconstructing the underlying low-rank representation, and extend the formulation to non-Gaussian noise models such as classification (collaborative filtering).
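A minimal sketch of the EM-style iteration commonly used for weighted low-rank approximation is shown below, assuming the weights lie in [0, 1]: each step fills the target with the current low-rank estimate where the weights are small and then takes an ordinary truncated SVD. The function and parameter names are illustrative, and this is an outline of the general approach rather than the authors' code.

```python
import numpy as np

def weighted_low_rank(A, W, rank, n_iter=100):
    """EM-style weighted low-rank approximation sketch: alternate between
    filling the target with the current estimate (weighted by 1 - W) and
    truncating to the requested rank with an SVD."""
    X = np.zeros_like(A, dtype=float)
    for _ in range(n_iter):
        filled = W * A + (1.0 - W) * X                 # E-step: expected complete target
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # M-step: rank-k truncation
    return X
```

With 0/1 weights this reduces to the familiar EM treatment of missing entries; unlike unweighted low-rank approximation, no closed-form solution exists in general, which is why an iteration of this kind is needed.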
Abstract:
The goal of this thesis is to apply the computational approach to motor learning, i.e., describe the constraints that enable performance improvement with experience and also the constraints that must be satisfied by a motor learning system, describe what is being computed in order to achieve learning, and why it is being computed. The particular tasks used to assess motor learning are loaded and unloaded free arm movement, and the thesis includes work on rigid body load estimation, arm model estimation, optimal filtering for model parameter estimation, and trajectory learning from practice. Learning algorithms have been developed and implemented in the context of robot arm control. The thesis demonstrates some of the roles of knowledge in learning. Powerful generalizations can be made on the basis of knowledge of system structure, as is demonstrated in the load and arm model estimation algorithms. Improving the performance of parameter estimation algorithms used in learning involves knowledge of the measurement noise characteristics, as is shown in the derivation of optimal filters. Using trajectory errors to correct commands requires knowledge of how command errors are transformed into performance errors, i.e., an accurate model of the dynamics of the controlled system, as is demonstrated in the trajectory learning work. The performance demonstrated by the algorithms developed in this thesis should be compared with algorithms that use less knowledge, such as table based schemes to learn arm dynamics, previous single trajectory learning algorithms, and much of traditional adaptive control.
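Where the abstract notes that better parameter estimation requires knowledge of the measurement-noise characteristics, the following is a minimal recursive least-squares (Kalman-style) update for a parameter vector in a model that is linear in its parameters, as rigid-body load and arm models are. The variable names, the constant-parameter assumption, and the scalar-measurement form are illustrative and are not taken from the thesis.

```python
import numpy as np

def rls_update(theta, P, phi, y, meas_var):
    """One recursive least-squares update for theta in the scalar model
    y = phi . theta + noise; meas_var is the measurement-noise variance,
    which controls how strongly each new observation is weighted."""
    phi = phi.reshape(-1, 1)
    S = (phi.T @ P @ phi).item() + meas_var        # innovation variance
    K = (P @ phi) / S                              # gain: smaller when meas_var is large
    innovation = y - (phi.T @ theta.reshape(-1, 1)).item()
    theta = theta + K.ravel() * innovation
    P = P - K @ phi.T @ P                          # parameter covariance update
    return theta, P

# Toy usage: recover two parameters of y = 2*x1 - 1*x2 from noisy samples.
rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])
theta, P = np.zeros(2), np.eye(2) * 100.0          # vague prior on the parameters
for _ in range(200):
    phi = rng.normal(size=2)
    y = phi @ true_theta + rng.normal(scale=0.1)
    theta, P = rls_update(theta, P, phi, y, meas_var=0.1 ** 2)
print(theta)   # approaches [2.0, -1.0]
```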
Abstract:
Automated assembly of mechanical devices is studied by researching methods of operating assembly equipment in a variable manner; that is, systems which may be configured to perform many different assembly operations are studied. The general parts-assembly operation involves the removal of alignment errors within some tolerance and without damaging the parts. Two methods for eliminating alignment errors are discussed: a priori suppression, and measurement and removal. Both methods are studied, with the more novel measurement-and-removal technique examined in greater detail. During the study of this technique, a fast and accurate six-degree-of-freedom position sensor based on a light-stripe vision technique was developed. Specifications for the sensor were derived from an assembly-system error analysis. Studies were performed on extracting accurate information from the sensor by optimally reducing redundant information, filtering quantization noise, and using careful calibration procedures. Prototype assembly systems for both error-elimination techniques were implemented and used to assemble several products. The assembly system based on the a priori suppression technique uses a number of mechanical assembly tools and software systems which extend the capabilities of industrial robots. The need for the tools was determined through an assembly task analysis of several consumer and automotive products. The assembly system based on the measurement-and-removal technique used the six-degree-of-freedom position sensor to measure part misalignments. Robot commands for aligning the parts were automatically calculated based on the sensor data and executed.
Abstract:
G. M. Coghill, S. M. Garrett and R. D. King (2002), Learning Qualitative Models in the Presence of Noise, QR'02 Workshop on Qualitative Reasoning
IDENTIFYING AND MONITORING THE ROLES OF CAVITATION IN HEATING FROM HIGH-INTENSITY FOCUSED ULTRASOUND
Abstract:
For high-intensity focused ultrasound (HIFU) to continue to gain acceptance for cancer treatment it is necessary to understand how the applied ultrasound interacts with gas trapped in the tissue. The presence of bubbles in the target location has been thought to be responsible for shielding the incoming pressure and increasing local heat deposition due to the bubble dynamics. We lack adequate tools for monitoring the cavitation process, due to both limited visualization methods and limited understanding of the underlying physics. The goal of this project was to elucidate the role of inertial cavitation in HIFU exposures in the hope of applying noise diagnostics to monitor cavitation activity and control HIFU-induced cavitation in a beneficial manner. A number of approaches were taken to understand the relationship between inertial cavitation signals, bubble heating, and bubble shielding in agar-graphite tissue phantoms. Passive cavitation detection (PCD) techniques were employed to detect inertial bubble collapses while the temperature was monitored with an embedded thermocouple. Results indicate that the broadband noise amplitude is correlated with bubble-enhanced heating. Monitoring inertial cavitation at multiple positions throughout the focal region demonstrated that bubble activity increased prefocally as it diminished near the focus. Lowering the HIFU duty cycle had the effect of maintaining a more or less constant cavitation signal, suggesting the shielding effect diminished when the bubbles had a chance to dissolve during the HIFU off-time. Modeling the effect of increasing the ambient temperature showed that bubbles do not collapse as violently at higher temperatures, due to increased vapor pressure inside the bubble. Our conclusion is that inertial cavitation heating is less effective at higher temperatures and that bubble shielding is involved in shifting energy deposition at the focus. The use of a diagnostic ultrasound imaging system as a PCD array was explored. Filtering out the scattered harmonics from the received RF signals resulted in a spatially resolved inertial cavitation signal, while the amplitude of the harmonics showed a correlation with temperatures approaching the onset of boiling. The result is a new tool for detecting a broader spectrum of bubble activity and thus enhancing HIFU treatment visualization and feedback.
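As a rough sketch of the post-processing step described at the end of the abstract, the code below notches the drive frequency and its harmonics out of a received RF spectrum and measures the power of the remaining broadband noise, the usual single-channel proxy for inertial cavitation in passive cavitation detection. The notch width, harmonic count, windowing, and function name are illustrative assumptions, not the project's actual processing chain, which additionally uses an imaging array for spatial resolution.

```python
import numpy as np

def broadband_noise_level(rf, fs, f0, n_harmonics=10, notch_hz=50e3):
    """Remove the HIFU drive frequency f0 and its harmonics from the spectrum of
    one RF record and return the RMS amplitude of the remaining broadband noise."""
    spec = np.fft.rfft(rf * np.hanning(len(rf)))
    f = np.fft.rfftfreq(len(rf), 1.0 / fs)
    mask = np.ones_like(f, dtype=bool)
    for k in range(1, n_harmonics + 1):
        mask &= np.abs(f - k * f0) > notch_hz        # notch out the k-th harmonic
    return np.sqrt(np.mean(np.abs(spec[mask]) ** 2))  # broadband (cavitation) level
```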
Abstract:
Cells are known to utilize biochemical noise to probabilistically switch between distinct gene expression states. We demonstrate that such noise-driven switching is dominated by tails of probability distributions and is therefore exponentially sensitive to changes in physiological parameters such as transcription and translation rates. However, provided mRNA lifetimes are short, switching can still be accurately simulated using protein-only models of gene expression. Exponential sensitivity limits the robustness of noise-driven switching, suggesting cells may use other mechanisms in order to switch reliably.
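As an illustration of the kind of protein-only model the abstract says suffices when mRNA lifetimes are short, here is a minimal Gillespie simulation of a self-activating gene whose protein copy number can hop stochastically between low and high expression states. The Hill-type feedback form and all rate values are illustrative assumptions, not the parameters studied in the paper.

```python
import numpy as np

def gillespie_switch(n0=5, t_max=5000.0, a0=4.0, a1=40.0, K=25.0, h=4, d=1.0, seed=0):
    """Gillespie simulation of a protein-only self-activating gene: production with
    positive (Hill) feedback and first-order degradation of the protein count n."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_max:
        birth = a0 + a1 * n**h / (K**h + n**h)   # production rate with positive feedback
        death = d * n                            # degradation rate
        total = birth + death
        t += rng.exponential(1.0 / total)        # time to the next reaction
        n += 1 if rng.random() < birth / total else -1
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)
```

Depending on the chosen rates, long trajectories of such a model show rare, noise-driven transitions between the low and high states, which is the tail-dominated switching behavior the abstract analyzes.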
Abstract:
Statistical properties of fast-slow Ellias-Grossberg oscillators are studied in response to deterministic and noisy inputs. Oscillatory responses remain stable in noise due to the slow inhibitory variable, which establishes an adaptation level that centers the oscillatory responses of the fast excitatory variable to deterministic and noisy inputs. Competitive interactions between oscillators improve the stability in noise. Although individual oscillation amplitudes decrease with input amplitude, the average total activity increases with input amplitude, thereby suggesting that oscillator output is evaluated by a slow process at downstream network sites.
Abstract:
A neural model of peripheral auditory processing is described and used to separate features of coarticulated vowels and consonants. After preprocessing of speech via a filterbank, the model splits into two parallel channels, a sustained channel and a transient channel. The sustained channel is sensitive to relatively stable parts of the speech waveform, notably synchronous properties of the vocalic portion of the stimulus; it extends the dynamic range of eighth-nerve filters using coincidence detectors that combine operations of raising to a power, rectification, delay, multiplication, time averaging, and preemphasis. The transient channel is sensitive to critical features at the onsets and offsets of speech segments. It is built up from fast excitatory neurons that are modulated by slow inhibitory interneurons. These units are combined over high-frequency and low-frequency ranges using operations of rectification, normalization, multiplicative gating, and opponent processing. Detectors sensitive to frication and to onset or offset of stop consonants and vowels are described. Model properties are characterized by mathematical analysis and computer simulations. Neural analogs of model cells in the cochlear nucleus and inferior colliculus are noted, as are psychophysical data about perception of CV syllables that may be explained by the sustained-transient channel hypothesis. The proposed sustained and transient processing seems to be an auditory analog of the sustained and transient processing that is known to occur in vision.
Abstract:
A neural model is presented of how cortical areas V1, V2, and V4 interact to convert a textured 2D image into a representation of curved 3D shape. Two basic problems are solved to achieve this: (1) Patterns of spatially discrete 2D texture elements are transformed into a spatially smooth surface representation of 3D shape. (2) Changes in the statistical properties of texture elements across space induce the perceived 3D shape of this surface representation. This is achieved in the model through multiple-scale filtering of a 2D image, followed by a cooperative-competitive grouping network that coherently binds texture elements into boundary webs at the appropriate depths using a scale-to-depth map and a subsequent depth competition stage. These boundary webs then gate filling-in of surface lightness signals in order to form a smooth 3D surface percept. The model quantitatively simulates challenging psychophysical data about perception of prolate ellipsoids (Todd and Akerstrom, 1987, J. Exp. Psych., 13, 242). In particular, the model represents a high degree of 3D curvature for a certain class of images, all of whose texture elements have the same degree of optical compression, in accordance with percepts of human observers. Simulations of 3D percepts of an elliptical cylinder, a slanted plane, and a photo of a golf ball are also presented.