406 results for Subtraction


Relevance:

10.00%

Publisher:

Abstract:

The presence of liquid fuel inside the engine cylinder is believed to be a strong contributor to the high levels of hydrocarbon emissions from spark ignition (SI) engines during the warm-up period. Quantifying and determining the fate of the liquid fuel that enters the cylinder is the first step in understanding the process of emissions formation. This work uses planar laser induced fluorescence (PLIF) to visualize the liquid fuel present in the cylinder. The fluorescing compounds in indolene, and mixtures of iso-octane with dopants of different boiling points (acetone and 3-pentanone), were used to trace the behavior of different volatility components. Images were taken of three different planes through the engine intersecting the intake valve region. A closed-valve fuel injection strategy was used, as this is the strategy most commonly used in practice. Background subtraction and masking were both performed to reduce the effect of any spurious fluorescence. The images were analyzed on both a time and crank angle (CA) basis, showing the time of maximum liquid fuel present in the cylinder and the effect of engine events on the inflow of liquid fuel. The results show details of the liquid fuel distribution as it enters the engine as a function of crank angle degree, volatility and location in the cylinder. A semi-quantitative analysis based on the integration of the image intensities provides additional information on the temporal distribution of the liquid fuel flow. © 1998 Society of Automotive Engineers, Inc.
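
A minimal sketch of the background subtraction, masking and intensity-integration steps described above (hypothetical array names and synthetic data, not the authors' processing code):

```python
import numpy as np

def correct_and_integrate(frame, background, mask):
    """Remove spurious fluorescence and integrate the remaining PLIF signal.

    frame, background : 2-D arrays of raw and background image intensities
    mask              : boolean array, True inside the region of interest
    """
    corrected = frame.astype(float) - background.astype(float)  # background subtraction
    corrected = np.clip(corrected, 0.0, None)                   # negative values are noise
    corrected[~mask] = 0.0                                      # mask out spurious regions
    return corrected, corrected.sum()                           # integrated intensity

# Synthetic example: one image per crank-angle sample
rng = np.random.default_rng(0)
frames = rng.random((36, 128, 128))          # hypothetical image stack
background = frames[:5].mean(axis=0)         # e.g. average of fuel-free frames
mask = np.ones((128, 128), dtype=bool)       # region around the intake valve
totals = [correct_and_integrate(f, background, mask)[1] for f in frames]
print("frame index with maximum integrated liquid-fuel signal:", int(np.argmax(totals)))
```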

Relevance:

10.00%

Publisher:

Abstract:

Localization of chess-board vertices is a common task in computer vision, underpinning many applications, but relatively little work focuses on designing a specific feature detector that is fast, accurate and robust. In this paper the 'Chess-board Extraction by Subtraction and Summation' (ChESS) feature detector, designed to respond exclusively to chess-board vertices, is presented. The method proposed is robust against noise, poor lighting and poor contrast, requires no prior knowledge of the extent of the chess-board pattern, is computationally very efficient, and provides a strength measure of detected features. Such a detector has significant application both in the key field of camera calibration and in structured light 3D reconstruction. Evidence is presented showing its superior robustness, accuracy, and efficiency in comparison to other commonly used detectors, including Harris & Stephens and SUSAN, both under simulation and in experimental 3D reconstruction of flat plate and cylindrical objects. © 2013 Elsevier Inc. All rights reserved.
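
The ChESS response combines sums and differences of pixel samples on a ring around each candidate point. The following is a simplified, hypothetical illustration of that idea (not the exact response function published in the paper): at an ideal chess-board vertex, samples a quarter turn apart lie on opposite colours, while samples half a turn apart lie on the same colour.

```python
import numpy as np

def chess_like_response(img, x, y, radius=5, n=16):
    """Simplified chess-board-vertex response at candidate pixel (x = column, y = row)."""
    angles = 2 * np.pi * np.arange(n) / n
    xs = np.round(x + radius * np.cos(angles)).astype(int)
    ys = np.round(y + radius * np.sin(angles)).astype(int)
    s = img[ys, xs].astype(float)
    quarter = np.abs(s - np.roll(s, n // 4)).sum()   # should be large at a vertex
    half = np.abs(s - np.roll(s, n // 2)).sum()      # should be small at a vertex
    return quarter - half

# Synthetic chess-board corner at the image centre
img = np.zeros((64, 64))
img[:32, :32] = 1.0
img[32:, 32:] = 1.0
print(chess_like_response(img, 32, 32))   # strong response at the vertex
print(chess_like_response(img, 16, 16))   # zero response inside a uniform square
```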

Relevance:

10.00%

Publisher:

Abstract:

A new gene with WD domains is cloned and characterized according to its differential transcription and expression between previtellogenic oocytes (phase I oocytes) and fully-grown oocytes (phase V oocytes) from natural gynogenetic silver crucian carp (Carassius auratus gibelio), using the combined methods of suppression subtractive hybridization, SMART cDNA synthesis and RACE-PCR. The full-length cDNA is 1870 bp. Its 5' untranslated region is 210 bp, followed by an open reading frame of 990 bp, which has the typical vertebrate initiator codon of ANNATG. The open reading frame encodes a protein with 329 amino acids. It has 670 bp of 3' untranslated region and an AATAAA polyadenylation signal. Because it has 92% homology to STRAP (serine-threonine kinase receptor-associated protein), a recently reported gene, we named it FSTRAP (fish STRAP). Virtual Northern blotting indicated that FSTRAP was transcribed in fully-grown oocytes (phase V oocytes), but not in previtellogenic oocytes (phase I oocytes). RT-PCR analysis showed that FSTRAP was transcribed in brain, heart, kidney, muscle, ovary, spleen and testis, but not in liver, and its mRNA could be detected in oocytes from phase II to phase V. Western blotting likewise showed that FSTRAP protein could be detected in brain, heart, kidney, muscle, ovary, spleen and testis, but not in liver. Results of Western blotting on the various oocyte stages were also similar to the RT-PCR data: FSTRAP protein was not expressed in previtellogenic oocytes; its expression was initiated in phase II oocytes with the onset of vitellogenesis and was consistent with the mRNA transcription.

Relevance:

10.00%

Publisher:

Abstract:

We describe a new method for extracting the intrinsic response of a laser diode from S-parameters measured using a calibrated vector network analyzer. The experimental results obtained using the new method are compared with those obtained using the optical modulation method and the frequency response subtraction method. Good agreement has been obtained, confirming the validity and accuracy of the new method. The new method has the advantage of obtaining the intrinsic characteristics of a laser diode from conventional measurements with a network analyzer.
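
For context, the frequency response subtraction technique mentioned above as a comparison method exploits the fact that parasitic and photodetector contributions to the measured response are, to first order, independent of bias, so subtracting two measured responses (in dB) taken at different bias currents cancels them and leaves only the difference of the intrinsic responses. A minimal sketch under that assumption, using the standard two-pole intrinsic model (illustrative parameter values, not data from the paper):

```python
import numpy as np

def intrinsic_response_db(f, fr, gamma):
    """Standard two-pole intrinsic laser modulation response |H(f)| in dB.

    fr    : relaxation oscillation frequency (Hz)
    gamma : damping factor (1/s)
    """
    h = fr**2 / (fr**2 - f**2 + 1j * f * gamma / (2 * np.pi))
    return 20 * np.log10(np.abs(h))

# Hypothetical measured responses at two bias currents: intrinsic part plus the same
# bias-independent parasitic/photodiode roll-off (modelled here as a simple RC low-pass).
f = np.linspace(0.1e9, 20e9, 400)
parasitic_db = 20 * np.log10(np.abs(1 / (1 + 1j * f / 12e9)))
meas_low = intrinsic_response_db(f, fr=4e9, gamma=2e10) + parasitic_db
meas_high = intrinsic_response_db(f, fr=8e9, gamma=5e10) + parasitic_db

# Subtracting the two measured responses (in dB) cancels the parasitic term,
# leaving only the difference of the two intrinsic responses.
diff = meas_high - meas_low
ideal = intrinsic_response_db(f, 8e9, 5e10) - intrinsic_response_db(f, 4e9, 2e10)
print("max residual after subtraction (dB):", np.max(np.abs(diff - ideal)))
```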

Relevance:

10.00%

Publisher:

Abstract:

A synthesized photochromic compound, pyrrylfulgide, is prepared as a thin film doped in a polymethylmethacrylate (PMMA) matrix. Under irradiation by UV light, the film converts from the bleached state into a colored state that has a maximum absorption at 635 nm and is thermally stable at room temperature. When the colored state is irradiated by a linearly polarized 650 nm laser, the film returns to the bleached state; photoinduced anisotropy is produced during this process. Application of optical image processing methods using the photoinduced anisotropy of the pyrrylfulgide/PMMA film is described. Examples in non-Fourier optical image processing, such as contrast reversal and image subtraction and summation, as well as in Fourier optical image processing, such as low-pass filtering and edge enhancement, are presented. (c) 2006 Optical Society of America.

Relevance:

10.00%

Publisher:

Abstract:

In exploration geophysics, velocity analysis and migration methods other than reverse time migration are based on ray theory or the one-way wave equation, so multiples are treated as noise and must be attenuated. Multiple attenuation is essential for structural imaging and amplitude-preserving migration, which makes the effective prediction and attenuation of internal multiples an interesting problem in both theory and application. There are two wave-equation-based methods for predicting internal multiples in pre-stack data: the common focus point method and the inverse scattering series method. A comparison of the two shows that the common focus point method has four problems: (1) it depends on a velocity model; (2) only the internal multiples related to a single layer can be predicted at a time; (3) the computing procedure is complex; and (4) it is difficult to apply in complex media. To overcome these problems, we adopt the inverse scattering series method. This method, however, has its own drawbacks: (1) the computing cost is high; (2) it is difficult to predict internal multiples at far offsets; and (3) it cannot predict internal multiples in complex media. Of these, the high computing cost is the biggest barrier in field seismic processing, so I present improved 1D and 1.5D algorithms that reduce the computing time. In addition, I propose a new algorithm to address the problem that arises in subtraction, especially for surface-related multiples. The main contributions of this research are as follows: (1) an improved 1D inverse scattering series prediction algorithm with very high computing efficiency, about twelve times faster than the old algorithm in theory and about eighty times faster in practice owing to its lower spatial complexity; (2) an improved 1.5D inverse scattering series prediction algorithm that moves the computation from the pseudo-depth wavenumber domain to the T-X domain and offers higher computing efficiency, applicability to many kinds of geometries, lower prediction noise, and independence from the wavelet; and (3) a new subtraction algorithm that does not try to overcome non-orthogonality, but instead uses the distribution of the non-orthogonality in the T-X domain to estimate the true wavelet by filtering, with excellent effectiveness in model testing. The improved 1D and 1.5D inverse scattering series algorithms can predict internal multiples, and after filtering and subtracting among seismic traces within a time window, the internal multiples can be attenuated to some degree. The proposed 1D and 1.5D algorithms are shown to be effective on both numerical and field data, and the new subtraction algorithm is effective on complex theoretical models.
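
The subtraction step referred to here (and in the following abstracts) is commonly implemented as a least-squares matching filter that shapes the predicted multiples to the recorded data before subtracting them. A minimal single-channel sketch of that generic step, with illustrative trace lengths and values (not the thesis's new algorithm):

```python
import numpy as np

def matching_filter(data, model, nf=21):
    """Least-squares filter that shapes the predicted multiple to the data."""
    npad = nf // 2
    padded = np.pad(model, (npad, npad))
    # One column per filter lag: shifted copies of the multiple model
    A = np.stack([padded[i:i + len(data)] for i in range(nf)], axis=1)
    f, *_ = np.linalg.lstsq(A, data, rcond=None)
    return A @ f                      # matched multiple estimate

def adaptive_subtract(data, model, nf=21):
    return data - matching_filter(data, model, nf)

# Synthetic example: a primary plus a multiple, and a multiple model with the
# wrong amplitude and a small time shift.
rng = np.random.default_rng(1)
n = 500
primary = np.zeros(n); primary[100] = 1.0
true_mult = np.zeros(n); true_mult[300] = -0.6
model = np.zeros(n); model[302] = -1.0     # mispredicted amplitude and timing
data = primary + true_mult + 0.01 * rng.standard_normal(n)
out = adaptive_subtract(data, model)
print("multiple energy before/after:", np.sum(data[290:310]**2), np.sum(out[290:310]**2))
```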

Relevance:

10.00%

Publisher:

Abstract:

In practical seismic profiles, multiple reflections impede even the experienced interpreter's ability to deduce information from the reflection data. Surface multiples are usually much stronger, more broadband, and more of a problem than internal multiples, because the reflection coefficient at the water surface is much larger than the reflection coefficients found in the subsurface. For this reason most attempts to remove multiples from marine data focus on surface multiples, as does this work. A surface-related multiple attenuation method can be formulated as an iterative procedure. In this thesis a fully data-driven approach called MPI, multiple prediction through inversion (Wang, 2003), is applied to a real marine seismic data example. This is a promising scheme for predicting a relatively accurate multiple model by updating the multiple model iteratively, as is usually done in a linearized inverse problem. The prominent characteristic of the MPI method is that it eliminates the need for an explicit surface operator, which means it can model the multiple wavefield without any knowledge of the surface or subsurface structure, or even of the source signature. Another key feature of the scheme is that it predicts multiples not only in time but also in phase and amplitude. The real-data experiments show that this multiple-prediction scheme can be made very efficient if a good initial estimate of the multiple-free data set is provided in the first iteration. In the other core step, multiple subtraction, an expanded multi-channel matching (EMCM) filter is used. Compared with a normal multichannel matching filter, in which an original seismic trace is matched by a group of multiple-model traces, the EMCM filter matches a seismic trace with not only the ordinary multiple-model traces but also their mathematically generated adjoints: the first derivative, the Hilbert transform, and the derivative of the Hilbert transform of each multiple-model trace. The third chapter of the thesis applies these methods to the real data, demonstrating their effectiveness and practical value. For this specific case, three groups of experiments were carried out: testing the effectiveness of the MPI method, comparing subtraction results for a fixed filter length but different window lengths, and investigating the influence of the initial subtraction result on the MPI method. In the real-data application, the initial demultiple estimate is indeed found to have a strong influence on the MPI method, so two approaches are introduced to refine it: a first-arrival approach and a masking filter. In the last part, conclusions are drawn from the results obtained.
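
A minimal sketch of how the EMCM adjoint traces described above can be generated and used in a single-channel least-squares match (the trace and wavelet are hypothetical; the thesis's multi-channel implementation is not reproduced):

```python
import numpy as np
from scipy.signal import hilbert

def emcm_basis(multiple_model, dt):
    """Return the four EMCM basis traces for one multiple-model trace:
    the trace itself, its first derivative, its Hilbert transform,
    and the derivative of the Hilbert transform."""
    m = np.asarray(multiple_model, dtype=float)
    h = np.imag(hilbert(m))                 # Hilbert transform (90-degree phase rotation)
    dm = np.gradient(m, dt)                 # first time derivative
    dh = np.gradient(h, dt)                 # derivative of the Hilbert transform
    return np.stack([m, dm, h, dh], axis=1)

def emcm_match(data, multiple_model, dt):
    """Least-squares combination of the four basis traces, then subtraction."""
    B = emcm_basis(multiple_model, dt)
    coeff, *_ = np.linalg.lstsq(B, data, rcond=None)
    return data - B @ coeff

# Hypothetical example: the predicted multiple has the wrong phase and amplitude
dt = 0.004
t = np.arange(0, 2, dt)
wavelet = lambda t0: np.exp(-((t - t0) / 0.03) ** 2)
data = 0.7 * np.imag(hilbert(wavelet(1.0)))      # "true" multiple, 90 degrees rotated
model = wavelet(1.0)                             # predicted multiple, zero phase
residual = emcm_match(data, model, dt)
print("energy before/after subtraction:", np.sum(data**2), np.sum(residual**2))
```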

Relevance:

10.00%

Publisher:

Abstract:

Scale matching means adjusting information of different scales to a common level. This thesis focuses on the unification of information with different frequency bandwidths. Joint well-seismic inversion is an important component of reservoir geophysics, and multiple prediction & subtraction is a recent development in multiple attenuation; what these two methods have in common is that both involve unifying information of different frequency bandwidths. Well logs, cross-hole seismic, VSP, 3D seismic and geological information have different spatial resolutions; by integrating this information we can reduce the non-uniqueness of reservoir inversion and enhance the vertical and lateral resolution of the geological target. Comparing the multiples predicted by SRME with the real multiples, we find that the predicted multiples are convolved with at least one extra wavelet, which introduces a frequency bandwidth difference between them, so the subtraction step also involves unifying multi-scale information. This thesis presents a method of well-constrained high-resolution seismic processing based on automatic gain control modulation. It uses a base function method, which takes the original well-seismic match as the initial condition and the processed seismic trace as the initial model, to extrapolate the high-frequency information of the well logs onto the seismic profiles. In this way the seismic bandwidth is broadened and the added high frequencies acquire geological meaning. The revised base function method is also introduced into adaptive subtraction, and the validity of the method is verified using models. Key words: high frequency reconstruction, scale matching, base function, multiple, SRME prediction & subtraction

Relevance:

10.00%

Publisher:

Abstract:

The role of seismic data in oil and gas exploration has long gone beyond simply ascertaining structural configuration. To determine favourable target areas more exactly, the subsurface media must be imaged accurately, so prestack migration imaging, and especially prestack depth migration, is used increasingly widely. Current seismic migration imaging methods are mainly based on primary energy, and most use the one-way wave equation. Multiples mask primaries, are sometimes treated as primaries, and interfere with the imaging of primaries, so multiple elimination remains a very important research subject. At present there are three different wavefield prediction and subtraction methods: wavefield extrapolation, the feedback loop, and the inverse scattering series. This work focuses on the feedback loop method, which consists of a prediction step and a subtraction step and currently has the following problems. First, the feedback loop method requires the seismic data used to predict multiples to be full-wavefield data; the original seismic data usually do not meet this assumption, so the data must be regularized. Second, the multiples predicted by the feedback loop usually do not match the real multiples in the seismic data, differing in amplitude, phase and arrival time, so the predicted multiples must be matched to those in the data by estimating filter factors before they are subtracted; selecting a correct matching filter method is the key to multiple elimination. Among the many matching filter methods, the emphasis here is on least-squares adaptive matching filtering and L1-norm minimizing adaptive matching filtering. The least-squares adaptive matching filter is computationally very fast, but it rests on two assumptions: that the signal has minimum energy and that it is orthogonal to the noise. When seismic data violate these assumptions, the method cannot produce good matching results and therefore cannot attenuate multiples correctly. L1-norm adaptive matching filtering avoids these two assumptions and gives good matching results, but it is computationally somewhat slower. The results of this research are as follows. (1) A method is proposed that interpolates seismic traces based on F-K migration and demigration; its main advantage is that it can interpolate seismic traces at any offset, and its validity is demonstrated on a simple model. (2) Different least-squares adaptive matching filter methods are compared; on three model data sets and two field data sets, equipose multi-channel adaptive matching filtering gives better multiple elimination than the other matching methods. (3) An equipose multi-channel L1-norm adaptive matching filtering method is proposed; because the L1 norm is robust to large amplitude differences and requires neither the minimum-energy nor the orthogonality assumption, it achieves better multiple elimination. (4) Multiple elimination in the inverse data space is investigated; this is a new multiple elimination method, different from those mentioned above, whose advantages are that it is simple in theory, needs no adaptive subtraction and is computationally very fast, while its disadvantage is that its solution is not stable.
Overall, the results show that equipose multi-channel and equipose pseudo-multi-channel least-squares matching filtering, as well as equipose multi-channel and equipose pseudo-multi-channel L1-norm matching filtering, achieve better multiple elimination than other matching methods on three model data sets and many field data sets.
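
One common way to obtain an L1-norm matching filter is iteratively reweighted least squares (IRLS). The sketch below illustrates why such a filter is more robust than the plain least-squares filter when a strong primary sits on top of the multiple (the traces and filter length are hypothetical, and the equipose multi-channel weighting of the thesis is not reproduced):

```python
import numpy as np

def conv_matrix(model, nf):
    """Convolution matrix of a multiple-model trace, one column per filter lag."""
    npad = nf // 2
    padded = np.pad(model, (npad, npad))
    return np.stack([padded[i:i + len(model)] for i in range(nf)], axis=1)

def l1_matching_filter(data, model, nf=5, n_iter=20, eps=1e-4):
    """L1-norm matching filter computed by iteratively reweighted least squares."""
    A = conv_matrix(model, nf)
    f, *_ = np.linalg.lstsq(A, data, rcond=None)       # least-squares filter as a start
    f_l2 = f.copy()
    for _ in range(n_iter):
        r = data - A @ f
        w = 1.0 / np.sqrt(np.abs(r) + eps)             # reweighting approximates an L1 misfit
        f, *_ = np.linalg.lstsq(A * w[:, None], data * w, rcond=None)
    return f_l2, f, A

# Hypothetical test: a strong primary sits right on top of the multiple, which
# violates the minimum-energy assumption behind the plain least-squares filter.
t = np.arange(400)
model = np.exp(-((t - 205) / 3.0) ** 2)        # predicted multiple wavelet
data = 0.6 * model.copy()                      # true multiple
data[205] += 5.0                               # strong interfering primary
f_l2, f_l1, A = l1_matching_filter(data, model)
print("primary left by L2 filter:", (data - A @ f_l2)[205])   # over-subtracted
print("primary left by L1 filter:", (data - A @ f_l1)[205])   # largely preserved
```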

Relevance:

10.00%

Publisher:

Abstract:

The cognitive psychological and cerebral physiological mechanisms of mental arithmetic with increasing age were studied using behavioral methods and functional magnetic resonance imaging (fMRI).

I. Studies on the cognitive psychological mechanism of mental arithmetic with increasing age. These studies were carried out on 172 normal subjects ranging from 20 to 79 years of age with more than 12 years of education (Mean = 1.51, SD = 1.5). Five mental arithmetic tasks, "1000-1", "1000-3", "1000-7", "1000-13", "1000-17", were designed as serial calculations in which subjects repeatedly subtracted the same number (1, 3, 7, 13, 17) starting from 1000. The variables studied were mental arithmetic, age, working memory, and sensory-motor speed, and four studies were conducted: (1) the aging process of mental arithmetic at different difficulties; (2) the mechanism of the aging of mental arithmetic processing; (3) the effects of working memory and sensory-motor speed on the aging of mental arithmetic; and (4) a model of the cognitive aging of mental arithmetic, using statistical methods such as MANOVA, hierarchical multiple regression, stepwise regression analysis, and structural equation modelling (SEM). The results were as follows. Study 1: there was a clear interaction between age and mental arithmetic, in which reaction time (RT) increased with advancing age and with more difficult arithmetic, and mental arithmetic efficiency (the ratio of accuracy to RT) decreased with advancing age and with more difficult arithmetic; efficiency at the different difficulties decreased as a power function. Study 2: there were two mediators (latent variables) in the aging of mental arithmetic, through which age affected mental arithmetic at the different difficulties. Study 3: there were clear interactions between age and working memory, and between working memory and mental arithmetic; working memory and sensory-motor speed both contributed to the aging of mental arithmetic, the effect of working memory being about 30-50% and that of sensory-motor speed above 35%. Study 4: age, working memory, and sensory-motor speed affected two latent variables (factor 1, related to a memory component, and factor 2, related to a speed component, with factor 2 also having a significant effect on factor 1), and through these factors affected mental arithmetic at the different difficulties.

II. Functional magnetic resonance imaging study of mental arithmetic with increasing age. This study was carried out on 14 normal right-handed subjects aged 20 to 29 (7 subjects) and 60 to 69 (7 subjects) years, using a superconducting Signa Horizon 1.5T MRI system. Two mental arithmetic tasks, "1000-3" and "1000-17", were designed as serial calculations in which subjects silently and repeatedly subtracted the same number (3 or 17) starting from 1000; a control task, "1000-0", in which subjects silently rehearsed the number 1000, was taken as the baseline in the standard baseline-task OFF-ON subtraction design. The original data collected by the fMRI apparatus were analyzed off-line on a SUN SPARC workstation using the STIMULATE software.
The analysis consisted of a within-subject step, in which brain activation images for the two arithmetic difficulties were obtained using t-tests, and a between-subject step, in which the features of brain activation for the two difficulties, the relationship between the left and right hemispheres during mental arithmetic, and the age differences in activation between young and elderly adults were examined using the non-parametric Wilcoxon test. The results were as follows:
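
The serial subtraction tasks and the efficiency measure defined above are easy to make concrete; a small sketch with hypothetical response values:

```python
def serial_subtraction(start=1000, step=7, n=10):
    """Correct answer sequence for a serial subtraction task such as '1000-7'."""
    answers = []
    value = start
    for _ in range(n):
        value -= step
        answers.append(value)
    return answers

def efficiency(accuracy, rt_seconds):
    """Mental arithmetic efficiency as defined above: accuracy divided by RT."""
    return accuracy / rt_seconds

print(serial_subtraction(1000, 7, 5))                # [993, 986, 979, 972, 965]
print(efficiency(accuracy=0.95, rt_seconds=2.4))     # hypothetical subject data
```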

Relevance:

10.00%

Publisher:

Abstract:

This memo describes the initial results of a project to create a self-supervised algorithm for learning object segmentation from video data. Developmental psychology and computational experience have demonstrated that the motion segmentation of objects is a simpler, more primitive process than the detection of object boundaries by static image cues. Therefore, motion information provides a plausible supervision signal for learning the static boundary detection task and for evaluating performance on a test set. A video camera and previously developed background subtraction algorithms can automatically produce a large database of motion-segmented images for minimal cost. The purpose of this work is to use the information in such a database to learn how to detect the object boundaries in novel images using static information, such as color, texture, and shape. This work was funded in part by the Office of Naval Research contract #N00014-00-1-0298, in part by the Singapore-MIT Alliance agreement of 11/6/98, and in part by a National Science Foundation Graduate Student Fellowship.
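
A minimal sketch of the kind of background subtraction that can automatically generate motion masks for such a database (a simple running-average background model, assumed here for illustration rather than the specific algorithms used in the memo):

```python
import numpy as np

def motion_masks(frames, alpha=0.05, threshold=0.1):
    """Label moving pixels by comparing each frame to a running-average background.

    frames    : iterable of 2-D grayscale frames scaled to [0, 1]
    alpha     : background update rate
    threshold : absolute-difference level above which a pixel is labelled 'moving'
    """
    background = None
    masks = []
    for frame in frames:
        frame = frame.astype(float)
        if background is None:
            background = frame.copy()
        diff = np.abs(frame - background)
        masks.append(diff > threshold)                          # candidate object pixels
        background = (1 - alpha) * background + alpha * frame   # slow background update
    return masks

# Synthetic example: a bright square moving across a static scene
rng = np.random.default_rng(0)
frames = []
for i in range(20):
    f = 0.02 * rng.random((64, 64))
    f[20:30, 2 + 3 * i: 12 + 3 * i] = 1.0
    frames.append(f)
masks = motion_masks(frames)
print("moving pixels in last frame:", int(masks[-1].sum()))
```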

Relevance:

10.00%

Publisher:

Abstract:

A procedure that uses fuzzy ARTMAP and K-Nearest Neighbor (K-NN) categorizers to evaluate intrinsic and extrinsic speaker normalization methods is described. Each classifier is trained on preprocessed, or normalized, vowel tokens from about 30% of the speakers of the Peterson-Barney database, then tested on data from the remaining speakers. Intrinsic normalization methods included one nonscaled, four psychophysical scales (bark, bark with end-correction, mel, ERB), and three log scales, each tested on four different combinations of the fundamental (F0) and the formants (F1, F2, F3). For each scale and frequency combination, four extrinsic speaker adaptation schemes were tested: centroid subtraction across all frequencies (CS), centroid subtraction for each frequency (CSi), linear scale (LS), and linear transformation (LT). A total of 32 intrinsic and 128 extrinsic methods were thus compared. Fuzzy ARTMAP and K-NN showed similar trends, with K-NN performing somewhat better and fuzzy ARTMAP requiring about 1/10 as much memory. The optimal intrinsic normalization method was bark scale, or bark with end-correction, using the differences between all frequencies (Diff All). The order of performance for the extrinsic methods was LT, CSi, LS, and CS, with fuzzy ARTMAP performing best using bark scale with Diff All, and K-NN choosing psychophysical measures for all except CSi.
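
A minimal sketch of the two centroid subtraction schemes (CS and CSi) described above, applied to already-scaled frequency values; the speaker data here are hypothetical:

```python
import numpy as np

def centroid_subtract_all(tokens):
    """CS: subtract one centroid, computed across all frequencies, per speaker.

    tokens : array of shape (n_tokens, n_freqs), e.g. scaled (F0, F1, F2, F3)
             values for all vowel tokens of one speaker.
    """
    return tokens - tokens.mean()                 # single scalar centroid

def centroid_subtract_each(tokens):
    """CSi: subtract a separate centroid for each frequency."""
    return tokens - tokens.mean(axis=0, keepdims=True)

# Hypothetical speaker: four scaled frequencies per vowel token
rng = np.random.default_rng(0)
speaker_tokens = rng.normal(loc=[5.0, 7.0, 12.0, 14.0], scale=0.5, size=(50, 4))
print(centroid_subtract_all(speaker_tokens).mean())         # ~0 overall
print(centroid_subtract_each(speaker_tokens).mean(axis=0))  # ~0 per frequency
```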

Relevance:

10.00%

Publisher:

Abstract:

Intrinsic and extrinsic speaker normalization methods are systematically compared using a neural network (fuzzy ARTMAP) and L1 and L2 K-Nearest Neighbor (K-NN) categorizers trained and tested on disjoint sets of speakers of the Peterson-Barney vowel database. Intrinsic methods include one nonscaled, four psychophysical scales (bark, bark with end-correction, mel, ERB), and three log scales, each tested on four combinations of F0, F1, F2, F3. Extrinsic methods include four speaker adaptation schemes, each combined with the 32 intrinsic methods: centroid subtraction across all frequencies (CS), centroid subtraction for each frequency (CSi), linear scale (LS), and linear transformation (LT). ARTMAP and K-NN show similar trends, with K-NN performing better, but requiring about ten times as much memory. The optimal intrinsic normalization method is bark scale, or bark with end-correction, using the differences between all frequencies (Diff All). The order of performance for the extrinsic methods is LT, CSi, LS, and CS, with fuzzy ARTMAP performing best using bark scale with Diff All; and K-NN choosing psychophysical measures for all except CSi.

Relevance:

10.00%

Publisher:

Abstract:

Integrated nanowire electrodes that permit direct, sensitive and rapid electrochemical based detection of chemical and biological species are a powerful emerging class of sensor devices. As critical dimensions of the electrodes enter the nanoscale, radial analyte diffusion profiles to the electrode dominate, with a corresponding enhancement in mass transport, steady-state sigmoidal voltammograms, low depletion of target molecules and faster analysis. To optimise these sensors it is necessary to fully understand the factors that influence performance limits, including: electrode geometry, electrode dimensions, electrode separation distances (within nanowire arrays) and diffusional mass transport. Therefore, in this thesis, theoretical simulations of analyte diffusion occurring at a variety of electrode designs were undertaken using Comsol Multiphysics®. Sensor devices were fabricated and corresponding experiments were performed to challenge the simulation results. Two approaches for the fabrication and integration of metal nanowire electrodes are presented: Template Electrodeposition and Electron-Beam Lithography. These approaches allow for the fabrication of nanowires which may be subsequently integrated at silicon chip substrates to form fully functional electrochemical devices. Simulated and experimental results were found to be in excellent agreement, validating the simulation model. The electrochemical characteristics exhibited by nanowire electrodes fabricated by electron-beam lithography were directly compared against the electrochemical performance of a commercial ultra-microdisc electrode. Steady-state cyclic voltammograms in ferrocenemonocarboxylic acid at single ultra-microdisc electrodes were observed at low to medium scan rates (≤ 500 mV s-1). At nanowires, steady-state responses were observed at ultra-high scan rates (up to 50,000 mV s-1), thus allowing for much faster analysis (20 ms). Approaches for elucidating the faradaic signal without the requirement for background subtraction were also developed. Furthermore, the diffusional processes occurring at arrays with increasing inter-electrode distance and increasing numbers of nanowires were explored. Diffusion profiles existing at nanowire arrays were simulated with Comsol Multiphysics®. A range of scan rates was modelled, and experiments were undertaken at 5,000 mV s-1, since this allows the rapid data capture required for, e.g., biomedical, environmental and pharmaceutical diagnostic applications.
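
For context on the steady-state behaviour described above, the diffusion-limited steady-state current at an ultra-microdisc electrode follows the standard expression i_ss = 4nFDCa; a quick calculation with illustrative parameter values (assumptions for illustration, not the thesis's measurements):

```python
# Diffusion-limited steady-state current at an ultra-microdisc electrode:
# i_ss = 4 * n * F * D * C * a

F = 96485.0          # Faraday constant, C/mol
n = 1                # electrons transferred per molecule
D = 7.8e-10          # diffusion coefficient, m^2/s (typical small redox molecule, assumed)
C = 1.0              # bulk concentration, mol/m^3 (i.e. 1 mM, assumed)
a = 5.0e-6           # disc radius, m (5 micrometre radius, assumed)

i_ss = 4 * n * F * D * C * a
print(f"steady-state current: {i_ss * 1e9:.2f} nA")   # about 1.5 nA for these values
```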