921 results for least square minimization


Relevance:

80.00%

Publisher:

Abstract:

Ray tracing is a fast and effective method for approximate seismic numerical simulation, with important theoretical and practical value in seismic simulation, inversion, migration, and imaging. Simplified from seismic theory according to geometrical seismics, it assumes that, under the high-frequency asymptotic approximation, the main energy of the seismic wavefield propagates along ray paths. Computing ray paths and traveltimes is one of the key steps in seismic simulation, inversion, migration, and imaging. By integrating a triangular-grid layout on the wavefront with the wavefront-reconstruction ray tracing method, this thesis proposes a wavefront-reconstruction ray tracing method based on triangular grids on the wavefront, which achieves accurate and fast calculation of ray paths and traveltimes. The method yields a stable, reasonable ray distribution and overcomes the shadow-zone problems of conventional ray tracing. The triangular-grid layout keeps all grids stable and makes grid subdivision and the interpolation of new rays convenient; it reduces the number of grids and the memory required, improving computational efficiency, and it enhances accuracy through an accurate and effective description and subdivision of the wavefront. A ray-traced traveltime table, which has the character of 2-D or 3-D scattered data, contains a great number of data points in seismic simulation, inversion, migration, and imaging; the traveltime-table file must therefore be read frequently, and computational efficiency is very low. For these reasons, a reasonable compression of the traveltime table is necessary. This thesis proposes a surface-fitting and scattered-data compression method based on B-spline functions and applies it to 2-D and 3-D traveltime-table compression.
To compress a 2-D (3-D) traveltime table, we first construct the smallest rectangular (cuboidal) region with regular grids that covers all traveltime data points, using their coordinate range in the 2-D surface (3-D space). The values at the finite set of regular grid nodes, which are stored in memory, are then calculated by the least-squares method. The traveltime table can be decompressed when needed by linear interpolation of the 2-D (3-D) B-spline function. In this calculation, the coefficient matrix is stored in sparse form, and the linear system is solved by LU decomposition based on the multifrontal method, exploiting the sparsity of the least-squares matrix. The method has been applied successfully to several models; the cubic B-spline proves to be the best basis function for surface fitting, making the reconstructed surface smooth and giving stable, effective compression with high approximation accuracy on regular grids. By constructing reasonable regular grids to ensure the efficiency and accuracy of compression and surface fitting, we achieve the goal of traveltime-table compression, which greatly improves computational efficiency in seismic simulation, inversion, migration, and imaging.
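The grid-based least-squares fit described above can be sketched in a few lines. This is an illustrative reconstruction, not the thesis code: it uses bilinear "hat" basis functions on a small regular grid as a stand-in for the cubic B-splines, a synthetic traveltime surface sin(2πx) + y², and SciPy's sparse LSQR solver in place of the multifrontal LU factorization; all names and sizes are assumptions.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

# hypothetical scattered traveltime samples (x, y, t)
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = rng.uniform(0, 1, 500)
t = np.sin(2 * np.pi * x) + y ** 2   # stand-in for traveltimes

# regular grid of basis coefficients covering the data region
nx, ny = 12, 12

def basis(u):
    # bilinear "hat" basis, a simple stand-in for a cubic B-spline
    return np.maximum(0.0, 1.0 - np.abs(u))

# sparse design matrix: each sample touches only nearby grid nodes
A = lil_matrix((len(t), nx * ny))
for k, (xi, yi) in enumerate(zip(x, y)):
    for i in range(nx):
        for j in range(ny):
            w = basis(xi * (nx - 1) - i) * basis(yi * (ny - 1) - j)
            if w > 0:
                A[k, i * ny + j] = w

# sparse least-squares solve for the grid-node values
coef = lsqr(A.tocsr(), t, damp=1e-6)[0]

def evaluate(xq, yq):
    # "decompression": evaluate the fitted surface at any point
    s = 0.0
    for i in range(nx):
        for j in range(ny):
            s += coef[i * ny + j] * basis(xq * (nx - 1) - i) * basis(yq * (ny - 1) - j)
    return s
```

Storing the 12 x 12 = 144 grid coefficients in place of the 500 scattered samples is the compression; `evaluate` is the decompression step.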

Relevance:

80.00%

Publisher:

Abstract:

The role of seismic data in oil and gas prospecting and exploration has long since gone beyond delineating structural configuration. To determine advantageous target areas more exactly, we need accurate images of the subsurface media, so prestack migration imaging, especially prestack depth migration, is used ever more widely. Current seismic migration imaging methods are mainly based on primary energy, and most use the one-way wave equation. Multiples mask primaries and are sometimes treated as primaries, interfering with the imaging of primaries, so multiple elimination remains a very important research subject. At present there are three wavefield prediction-and-subtraction methods: wavefield extrapolation, the feedback loop, and the inverse-scattering series. This paper mainly studies the feedback-loop method, which comprises prediction and subtraction. The method currently has the following problems. First, it requires that the seismic data used to predict multiples be full-wavefield data, but the original seismic data usually do not meet this assumption, so the data must be regularized. Second, multiples predicted by the feedback loop usually do not match the real multiples in the seismic data, differing in amplitude, phase, and arrival time, so the predicted multiples must be matched to those in the data by estimating filter coefficients before subtraction. Selecting a correct matching-filter method is the key to multiple elimination. Among the many matching-filter methods, I emphasize least-squares adaptive matching filtering and L1-norm-minimizing adaptive matching filtering. The least-squares method is computationally very fast, but it rests on two assumptions: the signal has minimum energy, and the signal is orthogonal to the noise.
When seismic data do not satisfy these two assumptions, the method cannot produce good matches and thus cannot attenuate multiples correctly. L1-norm adaptive matching filtering avoids both assumptions and yields good matches, but it is computationally somewhat slower. The results of my research are as follows. 1. A seismic-trace interpolation method based on F-K migration and demigration is proposed; its main advantage is that it can interpolate traces at any offset, as a simple model demonstrates. 2. Different least-squares adaptive matching filtering methods are compared; on three model datasets and two field datasets, the equipoise multichannel adaptive matching filter eliminates multiples better than the other matching methods. 3. An equipoise multichannel L1-norm adaptive matching filtering method is proposed; because the L1 norm is robust to large amplitude differences and requires neither minimum signal energy nor orthogonality, it eliminates multiples better. 4. Multiple elimination in the inverse data space is investigated; this new method differs from those above, is simple in theory, needs no adaptive subtraction, and is computationally very fast, but its solution is not stable. Overall, on three model datasets and many field datasets, the equipoise multichannel and equipoise pseudo-multichannel least-squares matching filters and their L1-norm counterparts eliminate multiples better than the other matching methods.
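The least-squares adaptive matching filter discussed above can be sketched for a single channel as follows. This is a toy illustration under assumed names, not the thesis implementation (which uses equipoise multichannel variants): it finds the filter f minimizing ||d - Mf||2, where M is the convolution matrix of the predicted multiple, and then subtracts the matched multiple from the data.

```python
import numpy as np

def ls_matching_filter(data, predicted, nf=11):
    """Least-squares adaptive matching filter (single channel):
    solve min_f ||data - predicted * f||_2, then subtract the
    matched multiple from the data."""
    n = len(data)
    half = nf // 2
    # convolution matrix of the predicted multiple (symmetric lags)
    M = np.zeros((n, nf))
    for j in range(nf):
        shift = j - half
        if shift >= 0:
            M[shift:, j] = predicted[:n - shift]
        else:
            M[:n + shift, j] = predicted[-shift:]
    # small normal equations, lightly damped for stability
    f = np.linalg.solve(M.T @ M + 1e-8 * np.eye(nf), M.T @ data)
    matched = M @ f
    return data - matched, f
```

The subtraction succeeds here exactly because the primary is nearly orthogonal to the shifted copies of the predicted multiple; when that assumption fails, the L1-norm variant discussed in the text is preferable.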

Relevance:

80.00%

Publisher:

Abstract:

This paper presents the theory and practice of broadband teleseismic body-waveform inversion and develops methods for determining crustal structure. Based on teleseismic P-wave data, the theoretical radial-component P waveform is calculated by convolving the vertical component with the transfer function, and a P-waveform inversion method is thereby built. Applications show the approach to be effective and stable, with high resolution. Accurate and reliable teleseismic P waveforms recorded by CDSN and IRIS are used to obtain lithospheric transfer functions for China and its vicinity; the lithospheric structure of the region is derived by inverting these transfer functions, yielding new knowledge of the deep structure of China and its vicinity and providing reliable seismological evidence for reconstructing geodynamic evolution and developing the theory of continental collision. The major studies are as follows. Two important methods for studying crustal and upper-mantle structure, body-wave travel-time inversion and waveform modeling, are reviewed systematically. Based on ray theory, travel-time inversion is simple: 1-D travel-time inversion yields preliminary crustal and upper-mantle velocity models, which serve as reference models for studying focal location, focal mechanism, and fine structure of the crust and upper mantle. Large-scale lateral inhomogeneity of the crust and upper mantle can be obtained by 3-D travel-time seismic tomography. Based on elastic dynamics, waveform modeling fits theoretical seismograms to observed seismograms, interpreting the detailed waveform and thus revealing 1-D fine structure and lateral variation of the crust and upper mantle, especially the media properties in zones where ray theory is singular.
Both travel-time inversion and waveform modeling rest on approximations, each with its own advantages and disadvantages, and both provide convincing structural information for elucidating the physical and chemical features and geodynamic processes of the crust and upper mantle. Direct waves, surface waves, and refracted waves resolve seismic velocity transition zones poorly and are inadequate for studying seismic discontinuities. By contrast, converted and reflected waves, which sample the discontinuities directly, must be carefully picked from seismograms to constrain the velocity transition zones. Converted and reflected waves can be used to study not only crustal structure but also upper-mantle discontinuities. A number of global and regional seismic discontinuities exist in the crust and upper mantle and play a significant role in understanding their physical and chemical properties and geodynamic processes. Broadband teleseismic P-waveform inversion is studied in particular. Teleseismic P waveforms contain much information related to the source time function, near-source structure, propagation through the mantle, receiver structure, and instrument response. The receiver function is isolated from the teleseismic P waveform by rotating the horizontal components into the ray direction and deconvolving the vertical component from the radial and tangential components of ground motion; the resulting time series is dominated by local receiver-structure effects and is nearly independent of source and deep-mantle effects. The receiver function is the horizontal response: it eliminates multiple P reflections, retains the direct wave and P-S converted waves, and is sensitive to the vertical variation of S-wave velocity.
The velocity structure beneath a seismic station responds differently to the radial and vertical components of an incident teleseismic P wave. To avoid the limits of a simplified assumption about the vertical response, the receiver-function method is amended: in the frequency domain, the transfer function is expressed as the ratio of the radial to the vertical response of the medium to the P wave, and in the time domain the radial synthetic waveform is obtained by convolving the transfer function with the vertical wave. To overcome numerical instability, the generalized reflection and transmission coefficient matrix method is applied to calculate the synthetic waveforms so that all multiple reflections and phase conversions are included. A new inversion scheme, the VFSA-LM method, is used in this study, successfully combining very fast simulated annealing (VFSA) with damped least-squares inversion (LM); synthetic waveform inversion tests confirm its effectiveness and efficiency. Broadband teleseismic P-waveform inversion is then applied to the lithospheric velocity structure of China and its vicinity. From high-quality CDSN and IRIS data we obtained an outline map of the distribution of Asian continental crustal thickness. Based on these results, the distribution of crustal thickness and the outline of crustal structure under the Asian continent are analyzed, and the principal characteristics of the Asian continental crust are advanced: there exist four vast areas of relatively minor variation in crustal thickness, namely the northern, eastern, southern, and central areas of the Asian crust. As a byproduct, earthquake location, a basic issue in seismology, is discussed; because of the strong trade-off between the assumed origin time and focal depth and the nonlinearity of the inversion, this issue is not yet settled.
Aimed at this problem, a new earthquake location method named SAMS is presented, in which the objective function is the absolute value of the travel-time residuals together with the arrival times, and fast simulated annealing is used for the inversion. Applied to relocating the Chi-Chi, Taiwan earthquake of September 21, 1999, the results show that SAMS not only reduces the trade-off between origin time and focal depth but also achieves better stability and resolving power. Finally, an inverse Q-filtering method for compensating attenuation and frequency dispersion on depth-domain seismic sections is discussed. Forward and inverse tests on synthetic seismic records show that our depth-domain Q-filtering operator is consistent with wave behavior in absorbing media: it accounts not only for the absorption of the waves by the media but also for the deformation law, namely the frequency dispersion of body waves. Processing two post-stack profiles of about 60 km from a neritic area of China shows that after inverse Q filtering in the depth domain, the wavelets of the middle and deep layers are compressed, resolution and signal-to-noise ratio are enhanced, and the primary shape and energy distribution of the profile are retained.
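The VFSA half of the VFSA-LM scheme can be illustrated on a toy misfit. This is a generic Ingber-style very fast simulated annealing sketch under assumed parameter names, not the thesis implementation, and it omits the damped least-squares (LM) refinement stage:

```python
import numpy as np

def vfsa(misfit, lo, hi, n_iter=2000, t0=1.0, c=1.0, seed=0):
    """Very fast simulated annealing sketch: temperature decays as
    T_k = t0 * exp(-c * k**(1/d)), and the move generator produces
    steps whose scale shrinks with T (Cauchy-like, multi-scale)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = len(lo)
    x = rng.uniform(lo, hi)
    fx = misfit(x)
    best, fbest = x.copy(), fx
    for k in range(1, n_iter + 1):
        T = t0 * np.exp(-c * k ** (1.0 / d))
        # VFSA move generator: spans many scales, narrows as T falls
        u = rng.uniform(size=d)
        step = np.sign(u - 0.5) * T * ((1 + 1 / T) ** np.abs(2 * u - 1) - 1)
        y = np.clip(x + step * (hi - lo), lo, hi)
        fy = misfit(y)
        # Metropolis acceptance: always downhill, sometimes uphill
        if fy < fx or rng.uniform() < np.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest
```

In the VFSA-LM combination described above, a damped least-squares step would then polish the annealed solution.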

Relevance:

80.00%

Publisher:

Abstract:

Cross-well seismic is a new geophysical technique that observes the seismic waves of a geologic body with both source and receiver placed in wells. It avoids the absorption of the high-frequency components of the seismic signal by low-velocity weathered layers, so extremely high-resolution seismic signals can be acquired, and extremely fine images of cross-well formations, structure, and reservoirs can be achieved. An integrated study of the high-frequency S-wave and P-wave data together with other data serves to determine small faults and small structures and to resolve issues of thin beds, reservoir connectivity, fluid distribution, steam injection, and fractures. The method links high-resolution surface seismic, logging, and reservoir engineering. In this paper, based on the E&P situation of the oilfield and the theory of geophysical exploration, cross-well seismic technology is studied both in general and in its key particulars. A technological series of integrated field acquisition, data processing, and interpretation, with integrated application research, was developed, and the new method can be applied to oilfield development and to optimizing development schemes. The contents and results are as follows. An overview is given of the status and development of the cross-well seismic method, the problems it faces, and the differences between Chinese and international levels. Foreign-made field acquisition systems for cross-well seismic are analyzed and compared, and the pros and cons of the systems from the two foreign manufacturers are pointed out, which is highly valuable for importing such systems into China.
After analyzing the acquisition geometry and field data of the cross-well seismic method, a common wavefield time-depth curve equation was derived, three types of tube waves were identified for the first time, and their generation mechanisms were studied. Based on wavefield-separation theory for cross-well seismic, we hold that different wavefield types have different attribute characteristics in different gather domains; multiple methods (for instance, F-K filtering and median filtering) were applied to eliminate and suppress cross-well noise, and the upgoing and downgoing waves were successfully separated with satisfactory results. For wavefield numerical simulation of the cross-well seismic method, conventional ray tracing and its shortcomings are analyzed, and a minimum-travel-time ray tracing method based on Fermat's principle is proposed. The method not only computes quickly but also leaves no rays trapped in "dead ends" or "blind spots" after numerous iterations, making it well suited to complex velocity models. Travel-time interpolation is brought into consideration for the first time: a dynamic shortest-path ray tracing method is developed for the first arrivals of any complex medium (transmission, diffraction, refraction, etc.), removing the limitation of traveling only from node to node and increasing the calculation accuracy of the minimum travel time and the ray path; the solution and corresponding boundary conditions of the fourth-order differential acoustic wave equation are also derived. The final step is to calculate cross-well seismic synthetics for given sources and receivers over multiple geologic bodies.
Thus the real cross-well seismic wavefield can be recognized by scientific means, providing an important foundation for cross-well acquisition geometry design. A least-squares conjugate-gradient velocity tomographic inversion was developed for cross-well seismic, the objective function of the old high-frequency ray tracing method was modified, and a thin-bed-oriented finite-frequency velocity tomographic inversion method was put forward. Theoretical models and results demonstrate that the method is simple and effective and is very important for seismic ray tomographic imaging of complex geologic bodies. Based on the characteristics of the cross-well seismic algorithm, a processing flow for cross-well seismic data was built, optimized, and applied to production, yielding good velocity tomographic inversion and cross-well reflection imaging sections. Cross-well seismic data are acquired in the depth domain, and how to interpret depth-domain data and retrieve attributes is a brand-new subject. After research on depth-domain synthetics and trace integration for cross-well seismic interpretation, logging-constrained wave-impedance inversion of cross-well seismic data was studied, and an interpretation flow was initially established. Applying it to cross-well seismic data produced good geological results in velocity tomographic inversion and reflection depth imaging and resolved many difficult oilfield-development problems. This powerful new method supports oilfield development scheme optimization and enhanced oil recovery.
Based on conventional reservoir geological model building from logging data, a new method is also discussed for constraining the accuracy of the reservoir geological model with high-resolution cross-well seismic data; applied to the Fan 124 project, it achieved good results and presents a bright future for cross-well seismic technology.
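The minimum-travel-time (shortest-path) ray tracing idea can be sketched as a Dijkstra search on a grid graph. This is an illustrative reconstruction under assumed conventions (an 8-neighbor stencil, edge cost from the mean slowness of its endpoints), not the thesis algorithm with travel-time interpolation between nodes:

```python
import heapq

def min_traveltime(slowness, src, rec):
    """Shortest-path ray tracing sketch: Dijkstra on a grid graph.
    slowness: 2-D array of node slownesses (seconds per unit length);
    src, rec: (row, col) tuples. Returns (traveltime, ray path)."""
    ny, nx = len(slowness), len(slowness[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        t, node = heapq.heappop(pq)
        if node == rec:
            break
        if t > dist.get(node, float("inf")):
            continue  # stale queue entry
        i, j = node
        for di, dj in moves:
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                length = (di * di + dj * dj) ** 0.5
                cost = 0.5 * (slowness[i][j] + slowness[ni][nj]) * length
                nt = t + cost
                if nt < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nt
                    prev[(ni, nj)] = node
                    heapq.heappush(pq, (nt, (ni, nj)))
    # backtrack the ray path from receiver to source
    path = [rec]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return dist[rec], path[::-1]
```

Because Dijkstra settles nodes in order of increasing travel time, no ray can be trapped in a "dead end" or "blind spot", which is the property claimed above.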

Relevance:

80.00%

Publisher:

Abstract:

This dissertation addresses signal reconstruction and data restoration in seismic data processing, taking signal-representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For representation on natural bases, I present ICA fundamentals and algorithms and their original applications to the separation of natural earthquake signals and exploration seismic signals. For representation on deterministic bases, the dissertation proposes least-squares inversion regularization methods for seismic data reconstruction, sparseness constraints, preconditioned conjugate-gradient methods, and their applications to seismic deconvolution, the Radon transform, and related problems. The core content is a de-aliased reconstruction algorithm for unevenly sampled seismic data and its application to seismic interpolation. Although the dissertation discusses two cases of signal representation, they fit into one framework: both deal with signal or information restoration, the former reconstructing original signals from mixtures, the latter reconstructing complete data from sparse or irregular data. Both aim to provide pre-processing and post-processing methods for seismic prestack depth migration. ICA can separate the original signals from mixtures or abstract basic structure from the analyzed data. I survey the fundamentals, algorithms, and applications of ICA; compared with the KL transform, I propose the concept of the independent-components transform (ICT). Based on the negentropy measure of independence, I implemented FastICA and improved it using the covariance matrix. After analyzing the characteristics of seismic signals, I introduced ICA into seismic signal processing, a first in the geophysical community, and implemented the separation of noise from seismic signal.
Synthetic and real data examples show that ICA is usable for seismic signal processing, and initial results are achieved. ICA is applied to separating earthquake converted waves from multiples in a sedimentary area, with good results, giving a more reasonable interpretation of subsurface discontinuities; the results show the promise of ICA for geophysical signal processing. By virtue of the relationship between ICA and blind deconvolution, I survey seismic blind deconvolution and discuss the prospects of applying ICA to it, with two possible solutions. The relationship between PCA, ICA, and the wavelet transform is described, and it is proved that the reconstruction of wavelet prototype functions is a Lie-group representation. In passing, an over-sampled wavelet transform is proposed to enhance seismic data resolution and is validated by numerical examples. The key to prestack depth migration is the regularization of prestack seismic data, for which seismic interpolation and missing-data reconstruction are necessary procedures. I first review seismic imaging methods to argue the critical effect of regularization; reviewing seismic interpolation algorithms, I note that de-aliased reconstruction of uneven data is still a challenge. The fundamentals of seismic reconstruction are discussed first; then sparseness-constrained least-squares inversion and a preconditioned conjugate-gradient (PCG) solver are studied and implemented. Choosing a constraint term with a Cauchy distribution, I programmed the PCG algorithm and implemented sparse seismic deconvolution and high-resolution Radon transforms by PCG, in preparation for seismic data reconstruction. For seismic interpolation, de-aliased interpolation of even data and reconstruction of uneven data each work well separately, but the two could not previously be combined.
In this paper, a novel Fourier-transform-based method and algorithm are proposed that can reconstruct seismic data that are both uneven and aliased. Band-limited data reconstruction is formulated as a minimum-norm least-squares inversion problem with an adaptive DFT-weighted norm regularization term. The inverse problem is solved by the preconditioned conjugate-gradient method, which makes the solution stable and rapidly convergent. On the assumption that seismic data consist of finitely many linear events, and following the sampling theorem, aliased events can be attenuated via least-squares weights predicted linearly from the low frequencies. Three application issues are discussed: interpolation across even gaps, filling of uneven gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Synthetic and real data examples show that the proposed method is valid, efficient, and applicable. The research is valuable for seismic data regularization and cross-well seismic work. To meet the data requirements of 3-D shot-profile depth migration, schemes must be adopted to make the data even and consistent with the velocity dataset; the methods of this paper are used to interpolate and extrapolate the shot gathers instead of simply embedding zero traces, so the migration aperture is enlarged and the migration result improved. The results show the method's effectiveness and practicability.
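The band-limited minimum-norm reconstruction can be illustrated in miniature. This sketch replaces the thesis's adaptive DFT-weighted regularization and preconditioned conjugate-gradient solver with plain Tikhonov damping and a direct solve; the function and parameter names are assumptions:

```python
import numpy as np

def fourier_reconstruct(t_obs, d_obs, n, nfreq, eps=1e-4):
    """Band-limited minimum-norm least-squares reconstruction sketch:
    model the signal by nfreq low Fourier coefficients, fit the
    irregular samples (t_obs in [0, 1)) by damped least squares,
    then evaluate on a regular n-point grid."""
    k = np.arange(-(nfreq // 2), nfreq // 2 + 1)
    # "irregular DFT" matrix mapping coefficients to observed samples
    A = np.exp(2j * np.pi * np.outer(t_obs, k))
    # damped normal equations:  (A^H A + eps I) c = A^H d
    c = np.linalg.solve(A.conj().T @ A + eps * np.eye(len(k)),
                        A.conj().T @ d_obs)
    t_reg = np.arange(n) / n
    return (np.exp(2j * np.pi * np.outer(t_reg, k)) @ c).real
```

Restricting the model to low wavenumbers is what suppresses aliased events; the thesis goes further by predicting least-squares weights for the high frequencies from the unaliased low frequencies.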

Relevance:

80.00%

Publisher:

Abstract:

Formation resistivity is one of the most important parameters in reservoir evaluation. To acquire the true resistivity of the virgin formation, various types of resistivity logging tools have been developed. However, as proved reserves grow, the pay zones of interest become thinner and thinner, especially in terrestrial deposit oilfields, so electrical logging tools, limited by the conflicting requirements of resolution and depth of investigation, cannot provide the true formation resistivity. Resistivity inversion techniques have therefore become popular for determining true formation resistivity from the improved logging data of new tools. In geophysical inverse problems, non-unique solutions are inevitable because of noisy data and insufficient measurement information. This dissertation addresses the problem from three aspects: data acquisition, data processing and inversion, and application of the results with uncertainty evaluation of the non-unique solution; it also treats other problems of traditional inversion methods, such as slow convergence and dependence on the initial model. First, I deal with uncertainties in the data to be processed. The combination of the micro-spherically focused log (MSFL) and the dual laterolog (DLL) is the standard program for determining formation resistivity. During inversion, the corrected MSFL readings are taken as the resistivity of the invaded zone, yet the errors can be as large as 30 percent because of mudcake influence, even when rugose-borehole effects on the MSFL readings can be ignored. Furthermore, there are still arguments over whether the two logs can be used quantitatively together, given their different measurement principles. Thus, a new type of laterolog tool is designed theoretically.
The new tool provides three curves with different depths of investigation and nearly the same resolution, about 0.4 m. Second, because the popular iterative inversion based on least-squares estimation cannot solve for more than two parameters simultaneously and the new laterolog tool has not yet entered practice, the work focuses on the two-parameter inversion (invasion radius and virgin-formation resistivity) of traditional dual laterolog data. An unequally weighted damping-factor revision method is developed to replace the parameter-revision technique of the traditional inversion: each parameter update depends not only on the damping itself but also on the difference between the measured and fitted data in different layers. At least two iterations fewer than the older method are needed, reducing the computational cost of inversion. Damped least-squares inversion realizes Tikhonov's trade-off between the smoothness of the solution and the stability of the inversion process. It is realized by linearizing the nonlinear inverse problem, which necessarily makes the solution depend on the initial parameter values; thus, debate over the efficiency of such methods has grown with the development of nonlinear methods. An artificial-neural-network method is therefore proposed in this dissertation. A database of tool responses to formation parameters is built by modeling the laterolog tool and is then used to train the networks. A unit model is put forward to simplify the data space, and an additional physical constraint is applied to optimize the network after cross-validation.
Results show that the neural-network inversion could replace the traditional method in a single formation and can serve to supply the initial model for the traditional method. Whatever method is used, non-uniqueness and uncertainty of the solution are inevitable, so it is wise to evaluate them when applying inversion results. Bayes' theorem provides a way to solve such problems; the approach is illustrated for a single formation and achieves plausible results. Finally, the traditional least-squares inversion is used to process raw logging data: checked against core analysis, the calculated oil saturation is 20 percent higher than that obtained without this processing.
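The damped least-squares iteration underlying the two-parameter inversion can be sketched generically. The forward model used in the test is a hypothetical stand-in (an exponential decay, not a laterolog response), and the simple accept/reject damping schedule is an assumption, not the unequally weighted revision method of the dissertation:

```python
import numpy as np

def damped_lsq(forward, x0, d_obs, lam=1.0, niter=50, h=1e-6):
    """Damped least-squares (Levenberg-Marquardt style) sketch for a
    small nonlinear inverse problem: minimize ||forward(x) - d_obs||."""
    x = np.asarray(x0, float)
    for _ in range(niter):
        r = forward(x) - d_obs
        # finite-difference Jacobian, one column per parameter
        J = np.column_stack([(forward(x + h * e) - forward(x)) / h
                             for e in np.eye(len(x))])
        # damped normal equations: (J^T J + lam I) dx = -J^T r
        dx = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
        x_new = x + dx
        if np.linalg.norm(forward(x_new) - d_obs) < np.linalg.norm(r):
            x = x_new
            lam *= 0.7   # step accepted: relax damping
        else:
            lam *= 2.0   # step rejected: increase damping
    return x
```

The damping term is the Tikhonov trade-off mentioned above: large lam gives small, stable steps, small lam approaches the undamped Gauss-Newton update.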

Relevance:

80.00%

Publisher:

Abstract:

At first, the article has an introduction of the basic theory of magnetotelluric and the essential methods of data acquisition and preprocessing. After that, the article introduces the methods together with their predominance of computing transfering function such as the Least-square method, the Remote-Reference method and the Robust method. The article also describe the cause and influence of static shift, and has a summarize of how to correct the static shift efficiently, then emphasizes on the theories of the popular impedance tensor decomposition methods as Phase-sensitivity method, Groom and Bailey method, General tensor-analyzed method and Mohr circle-analyzed method. The kernal step of magnetotelluric data-processing is inversion, which is also an important content of the article. Firstly, the article introduces the basic theories of both the popular one-dimensional inversion methods as Automod, Occam, Rhoplus, Bostick and Ipi2win and the two-dimensional inversion methods as Occam, Rebocc, Abie and Nlcg. Then, the article is focused on parallel-analysis of the applying advantage of each inversion method with practical models, and obtains meaningful conclusion. Visual program design of magnetotelluric data-processing is another kernal part of the article. The bypast visual program design of magnetotelluric data-processing is not satisfied and systemic, for example, the data-processing method is single, the data-management is not systemic, the data format is not uniform. The article bases the visual program design of magnetotelluric data-processing upon practicability, structurality, variety and extensibility, and adopts database technology and mixed language program design method; finally, a magnetotelluric data management and processing system that integrates database saving and fetching system, data-processing system and graphical displaying system. 
Finally, the article turns to magnetotelluric application. Taking the Tulargen Cu-Ni mining area in Xinjiang as a practical example, and using the data-processing methods introduced above, the article gives a detailed account of the magnetotelluric data interpretation procedure.
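The least-squares transfer-function estimate at the heart of these methods can be sketched with synthetic spectra. This is a minimal illustration, not the thesis's implementation: Hx, Hy, Ex and Z_true are made-up stand-ins for the field spectra and the impedance row (Zxx, Zxy) at a single frequency.

```python
import numpy as np

# Synthetic field spectra at one frequency; Z_true relates Ex to (Hx, Hy).
rng = np.random.default_rng(1)
n = 500                                          # number of data segments
Hx = rng.normal(size=n) + 1j * rng.normal(size=n)
Hy = rng.normal(size=n) + 1j * rng.normal(size=n)
Z_true = np.array([2.0 + 1.0j, -0.5 + 3.0j])     # (Zxx, Zxy)
noise = 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
Ex = Z_true[0] * Hx + Z_true[1] * Hy + noise

# Least-squares estimate of the transfer function from Ex = H @ Z:
H = np.column_stack([Hx, Hy])
Z_est, *_ = np.linalg.lstsq(H, Ex, rcond=None)
```

Remote-reference and Robust estimation refine exactly this step, suppressing correlated noise and outliers in the spectra.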

Relevância:

80.00% 80.00%

Publicador:

Resumo:

For applications involving the control of moving vehicles, the recovery of relative motion between a camera and its environment is of high utility. This thesis describes the design and testing of a real-time analog VLSI chip which estimates the focus of expansion (FOE) from measured time-varying images. Our approach assumes a camera moving through a fixed world with translational velocity; the FOE is the projection of the translation vector onto the image plane. This location is the point towards which the camera is moving and the point from which all other image points appear to expand. By way of the camera imaging parameters, the location of the FOE gives the direction of 3-D translation. The algorithm we use for estimating the FOE minimizes the sum of squares of the differences at every pixel between the observed time variation of brightness and the predicted variation given the assumed position of the FOE. This minimization is not straightforward, because the relationship between the brightness derivatives depends on the unknown distance to the surface being imaged. However, image points where brightness is instantaneously constant play a critical role. Ideally, the FOE would be at the intersection of the tangents to the iso-brightness contours at these "stationary" points. In practice, brightness derivatives are hard to estimate accurately given that the image is quite noisy. Reliable results can nevertheless be obtained if the image contains many stationary points and the point is found that minimizes the sum of squares of the perpendicular distances from the tangents at the stationary points. The FOE chip calculates the gradient of this least-squares minimization sum, and the estimation is performed by closing a feedback loop around it. The chip has been implemented using an embedded CCD imager for image acquisition and a row-parallel processing scheme.
A 64 x 64 version was fabricated in a 2 μm CCD/BiCMOS process through MOSIS with a design goal of 200 mW of on-chip power, a top frame rate of 1000 frames/second, and a basic accuracy of 5%. A complete experimental system which estimates the FOE in real time using real motion and image scenes is demonstrated.
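The tangent-intersection idea can be sketched numerically. Assuming hypothetical unit normals n_i and offsets c_i for the iso-brightness tangent lines at the stationary points, minimizing the sum of squared perpendicular distances Σ(n_i·x − c_i)² over the candidate FOE position x is an ordinary least-squares problem:

```python
import numpy as np

# Hypothetical tangent lines at stationary points: each has unit normal
# n_i and offset c_i, chosen so all lines pass (noisily) through the
# true FOE. Everything here is synthetic.
rng = np.random.default_rng(2)
foe_true = np.array([12.0, -7.0])
angles = rng.uniform(0.0, np.pi, size=40)
N = np.column_stack([np.cos(angles), np.sin(angles)])   # line normals
c = N @ foe_true + 0.1 * rng.normal(size=40)            # noisy offsets

# The perpendicular distance from x to line i is n_i . x - c_i, so
# minimizing the sum of squared distances is ordinary least squares:
foe_est, *_ = np.linalg.lstsq(N, c, rcond=None)
```

The chip computes the gradient of this same sum in analog hardware and drives it to zero through a feedback loop instead of solving the normal equations directly.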

Relevância:

80.00% 80.00%

Publicador:

Resumo:

Shock wave lithotripsy is the preferred treatment modality for kidney stones in the United States. Despite clinical use for over twenty-five years, the mechanisms of stone fragmentation are still under debate. A piezoelectric array was employed to examine the effect of waveform shape and pressure distribution on stone fragmentation in lithotripsy. The array consisted of 170 elements placed on the inner surface of a 15 cm-radius spherical cap. Each element was driven independently by one of 170 individual pulsers, each capable of generating 1.2 kV. The acoustic field was characterized using a fiber optic probe hydrophone with a bandwidth of 30 MHz and a spatial resolution of 100 μm. When all elements were driven simultaneously, the focal waveform was a shock wave with peak pressures p+ = 65±3 MPa and p− = −16±2 MPa, and the −6 dB focal region was 13 mm long and 2 mm wide. The delay for each element was the only control parameter for customizing the acoustic field and waveform shape, which was done with the aim of investigating the hypothesized mechanisms of stone fragmentation such as spallation, shear, squeezing, and cavitation. The acoustic field customization was achieved by employing the angular spectrum approach for modeling the forward wave propagation and least-squares regression to determine the optimal set of delays. Results from the acoustic field customization routine and its implications for stone fragmentation will be discussed.
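The role of per-element delays can be illustrated with a toy focusing calculation. This is a sketch only: the element layout and focus location are invented, and the study itself optimizes delays through an angular spectrum propagation model rather than pure geometry. The geometric idea is that each element fires early by its extra travel time to the target, so all arrivals coincide.

```python
import numpy as np

# Toy geometric focusing for elements on a spherical cap (synthetic layout).
c = 1500.0                         # m/s, speed of sound in water
R = 0.15                           # 15 cm spherical-cap radius
rng = np.random.default_rng(3)

theta = rng.uniform(0.0, np.pi / 4, size=170)          # polar angles on cap
phi = rng.uniform(0.0, 2 * np.pi, size=170)
elems = np.column_stack([R * np.sin(theta) * np.cos(phi),
                         R * np.sin(theta) * np.sin(phi),
                         -R * np.cos(theta)])

focus = np.array([0.0, 0.0, 0.005])                    # steer 5 mm off centre
dist = np.linalg.norm(elems - focus, axis=1)
delays = (dist.max() - dist) / c                       # seconds per element
```

With these delays, every element's arrival time dist/c + delay equals the same constant, so the contributions add coherently at the focus.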

Relevância:

80.00% 80.00%

Publicador:

Resumo:

An improved technique for 3D head tracking under varying illumination conditions is proposed. The head is modeled as a texture mapped cylinder. Tracking is formulated as an image registration problem in the cylinder's texture map image. To solve the registration problem in the presence of lighting variation and head motion, the residual error of registration is modeled as a linear combination of texture warping templates and orthogonal illumination templates. Fast and stable on-line tracking is then achieved via regularized, weighted least squares minimization of the registration error. The regularization term tends to limit potential ambiguities that arise in the warping and illumination templates. It enables stable tracking over extended sequences. Tracking does not require a precise initial fit of the model; the system is initialized automatically using a simple 2-D face detector. The only assumption is that the target is facing the camera in the first frame of the sequence. The warping templates are computed at the first frame of the sequence. Illumination templates are precomputed off-line over a training set of face images collected under varying lighting conditions. Experiments in tracking are reported.
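The regularized, weighted least-squares step can be sketched generically. Here A stands in for the stacked warping and illumination templates, b for the registration residual, w for per-pixel weights, and lam for the regularization strength; all values are synthetic.

```python
import numpy as np

# Synthetic stand-ins for the template basis, residual and weights.
rng = np.random.default_rng(4)
A = rng.normal(size=(100, 6))          # template columns
x_true = rng.normal(size=6)
b = A @ x_true + 0.01 * rng.normal(size=100)
w = rng.uniform(0.5, 1.0, size=100)    # per-pixel confidence weights
lam = 1e-3                             # regularization strength

# Solve min_x ||W^(1/2)(A x - b)||^2 + lam ||x||^2 via the normal equations.
AtW = A.T * w                          # A^T W without forming diag(w)
x = np.linalg.solve(AtW @ A + lam * np.eye(6), AtW @ b)
```

The lam * I term is what damps ambiguous directions shared by the warping and illumination templates, which is how the regularization stabilizes tracking over extended sequences.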

Relevância:

80.00% 80.00%

Publicador:

Resumo:

An improved technique for 3D head tracking under varying illumination conditions is proposed. The head is modeled as a texture mapped cylinder. Tracking is formulated as an image registration problem in the cylinder's texture map image. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expression analysis, lip reading, and eye tracking. To solve the registration problem in the presence of lighting variation and head motion, the residual error of registration is modeled as a linear combination of texture warping templates and orthogonal illumination templates. Fast and stable on-line tracking is achieved via regularized, weighted least squares minimization of the registration error. The regularization term tends to limit potential ambiguities that arise in the warping and illumination templates. It enables stable tracking over extended sequences. Tracking does not require a precise initial fit of the model; the system is initialized automatically using a simple 2D face detector. The only assumption is that the target is facing the camera in the first frame of the sequence. The formulation is tailored to take advantage of texture mapping hardware available in many workstations, PCs, and game consoles. The non-optimized implementation runs at about 15 frames per second on an SGI O2 graphics workstation. Extensive experiments evaluating the effectiveness of the formulation are reported. The sensitivity of the technique to illumination, regularization parameters, errors in the initial positioning, and internal camera parameters is analyzed. Examples and applications of tracking are reported.

Relevância:

80.00% 80.00%

Publicador:

Resumo:

The standard early markers for identifying and grading HIE severity are not sufficient to ensure that all children who would benefit from treatment are identified in a timely fashion. The aim of this thesis was to explore potential early biomarkers of HIE. Methods: To achieve this, a cohort of infants with perinatal depression was prospectively recruited. All infants had cord blood samples drawn and biobanked, and were assessed with a standardised neurological examination and early continuous multi-channel EEG. Cord samples from a control cohort of healthy infants were used for comparison. Biomarkers studied included: multiple inflammatory proteins using a multiplex assay; the metabolomic profile using LC/MS; and the miRNA profile using microarray. Results: Eighty-five infants with perinatal depression were recruited. Analysis of inflammatory proteins consisted of exploratory analysis of 37 analytes conducted in a sub-population, followed by validation of all significantly altered analytes in the remaining population. IL-6 and IL-6 differed significantly in infants with a moderate/severely abnormal vs. a normal/mildly abnormal EEG in both cohorts (exploratory: p=0.016, p=0.005; validation: p=0.024, p=0.039, respectively). Metabolomic analysis demonstrated a perturbation in 29 metabolites. A cross-validated Partial Least Squares Discriminant Analysis model was developed, which accurately predicted HIE with an AUC of 0.92 (95% CI: 0.84-0.97). Analysis of the miRNA profile found 70 miRNA significantly altered between moderate/severely encephalopathic infants and controls. miRNA target prediction databases identified potential targets for the altered miRNA in pathways involved in cellular metabolism, cell cycle and apoptosis, cell signaling, and the inflammatory cascade.
Conclusion: This thesis has demonstrated that the recruitment of a large cohort of asphyxiated infants, with cord blood carefully biobanked and detailed early neurophysiological and clinical assessment recorded, is feasible. Additionally, the results described provide potential alternative and novel blood-based biomarkers for the identification and assessment of HIE.
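The AUC reported for the PLS-DA model has a useful rank interpretation: it is the probability that a randomly chosen case scores above a randomly chosen control. A minimal sketch of that computation, with made-up labels and scores:

```python
import numpy as np

# ROC AUC as the Mann-Whitney statistic: the probability that a random
# positive case scores above a random negative one (ties count half).
def roc_auc(labels, scores):
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

For made-up scores, roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.1]) evaluates to 0.75, since three of the four case-control pairs are correctly ordered.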

Relevância:

80.00% 80.00%

Publicador:

Resumo:

PURPOSE: The role of PM10 in the development of allergic diseases remains controversial among epidemiological studies, partly due to the inability to control for spatial variations in large-scale risk factors. This study aims to investigate spatial correspondence between the level of PM10 and allergic diseases at the sub-district level in Seoul, Korea, in order to evaluate whether the impact of PM10 is observable and spatially varies across the sub-districts. METHODS: PM10 measurements at 25 monitoring stations in the city were interpolated to 424 sub-districts where annual inpatient and outpatient count data for 3 types of allergic diseases (atopic dermatitis, asthma, and allergic rhinitis) were collected. We estimated multiple ordinary least squares (OLS) regression models to examine the association of the PM10 level with each of the allergic diseases, controlling for various sub-district level covariates. Geographically weighted regression (GWR) models were conducted to evaluate how the impact of PM10 varies across the sub-districts. RESULTS: PM10 was found to be a significant predictor of atopic dermatitis patient count (P<0.01), with greater association when spatially interpolated at the sub-district level. No significant effect of PM10 was observed on allergic rhinitis and asthma when socioeconomic factors were controlled for. GWR models revealed spatial variation of PM10 effects on atopic dermatitis across the sub-districts in Seoul. The relationship of PM10 levels to atopic dermatitis patient counts is found to be significant only in the Gangbuk region (P<0.01), along with other covariates including average land value, poverty rate, level of education, and apartment rate (P<0.01). CONCLUSIONS: Our findings imply that PM10 effects on allergic diseases might not be consistent throughout Seoul.
GIS-based spatial modeling techniques could play a role in evaluating spatial variation of air pollution impacts on allergic diseases at the sub-district level, which could provide valuable guidelines for environmental and public health policymakers.
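The GWR idea of locally varying coefficients can be sketched with synthetic data (locations, PM10 levels, and the spatially varying effect below are all invented): each target location gets its own weighted OLS fit, with Gaussian kernel weights that decay with distance.

```python
import numpy as np

# Synthetic sub-districts: the PM10 effect on the outcome grows from
# west to east, so a single global OLS slope would miss the pattern.
rng = np.random.default_rng(5)
n = 300
locs = rng.uniform(0.0, 10.0, size=(n, 2))     # invented centroids
pm10 = rng.uniform(30.0, 80.0, size=n)
beta = 0.5 + 0.1 * locs[:, 0]                  # spatially varying effect
y = 5.0 + beta * pm10 + rng.normal(0.0, 1.0, size=n)
X = np.column_stack([np.ones(n), pm10])

def gwr_coef(target, bandwidth=2.0):
    """Weighted OLS at one location with Gaussian kernel weights."""
    d2 = ((locs - target) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)   # [local intercept, local slope]

west = gwr_coef(np.array([1.0, 5.0]))
east = gwr_coef(np.array([9.0, 5.0]))
```

The recovered local slopes differ between the two ends of the region, which is exactly the kind of spatial variation in PM10 effects the study's GWR models detect across sub-districts.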

Relevância:

80.00% 80.00%

Publicador:

Resumo:

Artificial neural network (ANN) models for water loss (WL) and solid gain (SG) were evaluated as potential alternatives to multiple linear regression (MLR) for osmotic dehydration of apple, banana and potato. A radial basis function (RBF) network with a Gaussian function was used in this study, trained with the orthogonal least squares learning method. When predictions of experimental data from MLR and ANN were compared, the ANN models agreed better than the MLR models, particularly for SG: the coefficient of determination (R2) for SG was 0.31 for MLR and 0.91 for ANN, while R2 for WL was 0.89 for MLR and 0.84 for ANN. The osmotic dehydration experiments found that WL and SG occurred in the following descending order: Golden Delicious apple > Cox apple > potato > banana. The effects of temperature and concentration of the osmotic solution on WL and SG of the plant materials followed the descending orders 55 > 40 > 32.2C and 70 > 60 > 50 > 40%, respectively.
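The RBF regression underlying such models can be sketched with a toy one-dimensional target in place of WL/SG, and with plain linear least squares standing in for the orthogonal least squares training used in the study:

```python
import numpy as np

# Toy RBF network: Gaussian basis functions at fixed centres, output
# weights fitted by linear least squares. The sine target is synthetic.
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x)

centres = np.linspace(0.0, 1.0, 10)
width = 0.1
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_hat = Phi @ w
r2 = 1.0 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

Orthogonal least squares differs from this sketch mainly in how it selects the centres: it adds basis functions one at a time in order of the error variance they explain, rather than fixing them on a grid.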

Relevância:

80.00% 80.00%

Publicador:

Resumo:

Satellite altimetry has revolutionized our understanding of ocean dynamics thanks to frequent sampling and global coverage. Nevertheless, coastal data have been flagged as unreliable due to land and calm water interference in the altimeter and radiometer footprint and uncertainty in the modelling of high-frequency tidal and atmospheric forcing. Our study addresses the first issue, i.e. altimeter footprint contamination, via retracking, presenting ALES, the Adaptive Leading Edge Subwaveform retracker. ALES is potentially applicable to all the pulse-limited altimetry missions and its aim is to retrack both open ocean and coastal data with the same accuracy using just one algorithm. ALES selects part of each returned echo and models it with a classic "open ocean" Brown functional form, by means of least-squares estimation whose convergence is found through the Nelder-Mead nonlinear optimization technique. By avoiding echoes from bright targets along the trailing edge, it is capable of retrieving more coastal waveforms than the standard processing. By adapting the width of the estimation window according to the significant wave height, it aims at maintaining the accuracy of the standard processing in both the open ocean and the coastal strip. This innovative retracker is validated against tide gauges in the Adriatic Sea and in the Greater Agulhas System for three different missions: Envisat, Jason-1 and Jason-2. Considerations of noise and biases provide a further verification of the strategy. The results show that ALES is able to provide more reliable 20-Hz data for all three missions in areas where even 1-Hz averages are flagged as unreliable in standard products.
Application of the ALES retracker led to roughly half of the analysed tracks showing a marked improvement in correlation with the tide gauge records, with the rms difference reduced by a factor of 1.5 for Jason-1 and Jason-2 and more than 4 for Envisat in the Adriatic Sea (at the point closest to the tide gauge).
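The Nelder-Mead least-squares fit at the core of this kind of retracking can be sketched with a simplified leading-edge model, assuming SciPy is available. The erf-shaped edge below is a stand-in for the full Brown functional form, and all waveform parameters are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

# Simplified erf-shaped leading edge standing in for the Brown model;
# amplitude A, epoch t0 and rise scale s are invented parameters.
t = np.linspace(0.0, 10.0, 200)

def model(params, t):
    A, t0, s = params
    return 0.5 * A * (1.0 + erf((t - t0) / (s * np.sqrt(2.0))))

rng = np.random.default_rng(7)
true_params = (2.0, 5.0, 0.8)
wave = model(true_params, t) + 0.02 * rng.normal(size=t.size)

# Nelder-Mead search for the least-squares fit of the waveform parameters.
cost = lambda p: ((model(p, t) - wave) ** 2).sum()
res = minimize(cost, x0=[1.0, 4.0, 1.0], method="Nelder-Mead")
```

Nelder-Mead needs no gradients of the waveform model, which is one reason derivative-free simplex search is a practical choice for fitting retracking functionals.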