891 results for Fourier coefficients
Abstract:
Seismic exploration is the principal method for finding oil and gas. As exploration targets become increasingly complex, higher accuracy and efficiency are demanded of seismic imaging methods. The Fourier finite-difference (FFD) method is one of the most valuable methods for imaging complex structures and has produced good results. However, in complex media with wide propagation angles, the accuracy of the FFD method is unsatisfactory. Starting from the FFD operator, we extend the two coefficients to be optimized to four and optimize them globally with a simulated annealing algorithm. The optimization takes the solution of the one-way wave equation as its objective function. In addition to the velocity contrast, we consider the effects of both frequency and depth interval. The proposed method improves the accurate propagation angle of the FFD method without additional computation time, reaching 75° in complex media with large lateral velocity contrasts and wide propagation angles. In this thesis, by combining the FFD method with the alternating-direction-implicit plus interpolation (ADIPI) method, we obtain a 3D FFD method with higher accuracy. While preserving the efficiency of the FFD method, this approach both removes the azimuthal anisotropy and optimizes the FFD operator, which benefits 3D seismic exploration. We also use the multi-parameter global optimization method to optimize the high-order term of the FFD method. Using a lower-order equation to achieve the approximation quality of a higher-order equation not only reduces the computational cost arising from the higher-order term but also markedly improves the accuracy of the FFD method. We compare the FFD, SAFFD (multi-parameter simulated-annealing globally optimized FFD), PFFD, phase-shift (PS), globally optimized FFD (GOFFD), and higher-order-term optimized FFD methods. Theoretical analyses and impulse responses demonstrate that the higher-order-term optimized FFD method significantly extends the accurate propagation angle of the FFD method, which is useful for complex media with wide propagation angles.
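The abstract does not give the operator or objective in closed form, so the following is only a minimal sketch of the multi-parameter simulated-annealing idea: four free coefficients of a generic two-term rational (Padé-style) approximation to the exact one-way dispersion relation sqrt(1 - X^2), with X = v*kx/omega, are tuned to minimize the worst-case phase error out to 75°. The approximation form, initial values, and cooling schedule are illustrative stand-ins, not the paper's FFD operator.

```python
import numpy as np

rng = np.random.default_rng(0)

def kz_exact(X):
    """Exact one-way dispersion: sqrt(1 - X^2), X = v*kx/omega."""
    return np.sqrt(1.0 - X**2)

def kz_approx(X, c):
    """Two-term rational (Pade-style) approximation with four free coefficients."""
    a1, b1, a2, b2 = c
    return 1.0 - a1 * X**2 / (1.0 - b1 * X**2) - a2 * X**2 / (1.0 - b2 * X**2)

X = np.linspace(0.0, np.sin(np.radians(75.0)), 200)  # angles up to 75 degrees

def objective(c):
    # Worst-case phase error over the target angle range.
    return np.max(np.abs(kz_approx(X, c) - kz_exact(X)))

# Simulated annealing: random perturbations with Metropolis acceptance.
c = np.array([0.25, 0.25, 0.25, 0.25])   # illustrative initial guess
f = objective(c)
T = 1e-2                                  # initial temperature
for it in range(20000):
    c_new = c + rng.normal(scale=0.01, size=4)
    f_new = objective(c_new)
    if f_new < f or rng.random() < np.exp((f - f_new) / T):
        c, f = c_new, f_new
    T *= 0.9997                           # geometric cooling schedule

print("coefficients:", c, "max phase error:", f)
```

Accepting occasional uphill moves at early (hot) iterations is what makes the search global rather than a local descent on the two-coefficient starting point.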
Abstract:
A new "Ritz" program has been used for revising and expanding the assignment of the Fourier transform infrared and far-infrared spectrum of CH3OH. This program evaluates the energy levels involved in the assigned transitions by the Rydberg-Ritz combination principle and can handle perturbations such as Fermi-type resonances and Coriolis interactions. So far the program has evaluated the energies of 2768 levels of A-type symmetry and 4133 levels of E-type symmetry of CH3OH. Here we present the assignment of almost 9600 lines between 350 and 950 cm⁻¹. The Taylor expansion coefficients for evaluating the energies of the levels involved in the transitions are also given. All of the lines presented in this paper correspond to transitions involving torsionally excited levels within the ground vibrational state. © 1995 Academic Press, Inc.
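As a minimal sketch of the combination principle such a "Ritz" program rests on: every assigned transition fixes the difference between two level energies, so the energies (up to an arbitrary zero) follow from a linear least-squares fit. The level labels and wavenumbers below are invented for illustration, and the treatment of Fermi and Coriolis perturbations is omitted.

```python
import numpy as np

# Hypothetical assigned transitions: (upper_level, lower_level, wavenumber / cm^-1)
transitions = [(1, 0, 350.2), (2, 0, 701.1), (2, 1, 350.8), (3, 1, 700.5)]
n_levels = 4

# Design matrix: each row encodes E_upper - E_lower = observed wavenumber.
A = np.zeros((len(transitions) + 1, n_levels))
b = np.zeros(len(transitions) + 1)
for row, (up, lo, nu) in enumerate(transitions):
    A[row, up], A[row, lo], b[row] = 1.0, -1.0, nu
A[-1, 0] = 1.0  # pin the ground level to zero to fix the overall offset

E, *_ = np.linalg.lstsq(A, b, rcond=None)
print("level energies (cm^-1):", E)
```

Because levels are shared among many transitions, the system is overdetermined and the fit also flags misassigned lines through large residuals.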
Abstract:
Graduate Program in Materials Science and Technology - FC
Abstract:
The calibration results (the transfer function) of an anemometer equipped with several cup rotors were analyzed and correlated with the aerodynamic forces measured on the isolated cups in a wind tunnel. The correlation was based on a Fourier analysis of the normal-to-the-cup aerodynamic force. Three different cup shapes were studied: typical conical cups, elliptical cups, and porous cups (conical-truncated shape). Results indicated a good correlation between the anemometer factor, K, and the ratio between the first two coefficients in the Fourier series decomposition of the normal-to-the-cup aerodynamic force.
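A minimal sketch of the analysis step described above, assuming the normal-to-the-cup force has been sampled over one rotor revolution (the samples below are synthetic stand-ins for wind-tunnel data): the discrete Fourier transform gives the Fourier series coefficients, and the leading ratio is the quantity correlated with the anemometer factor K.

```python
import numpy as np

# Hypothetical normal-to-the-cup force sampled over one rotor revolution.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
force = 1.0 + 0.6 * np.cos(theta) + 0.2 * np.cos(2.0 * theta)  # stand-in data

# Fourier series coefficients via the discrete Fourier transform.
F = np.fft.rfft(force) / len(force)
c0 = F[0].real             # mean (zeroth) term
c1 = 2.0 * np.abs(F[1])    # first harmonic amplitude
c2 = 2.0 * np.abs(F[2])    # second harmonic amplitude

print("first two coefficients:", c0, c1, "ratio:", c1 / c0)
```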
Abstract:
MSC 2010: 42A32; 42A20
Abstract:
The Finite-Difference Time-Domain (FDTD) method and associated software are applied to obtain the waves diffracted by right-angle wedges under modulated Gaussian plane-wave illumination, and the Fast Fourier Transform (FFT) is used to extract wideband diffraction coefficients in the illuminated (lit) region. Theta and phi polarizations in the 3-dimensional case and TM and TE polarizations in the 2-dimensional case are considered for the soft and hard diffraction coefficients, respectively. FDTD results for a perfect electric conductor (PEC) wedge are compared with asymptotic expressions from the Uniform Theory of Diffraction (UTD). The PEC wedge analysis is then extended to homogeneous conducting and dielectric building materials, whose diffraction coefficients are not available analytically under practical conditions.
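A minimal sketch of the wideband post-processing step, with placeholder arrays standing in for the FDTD recordings: dividing the spectrum of the recorded diffracted field by that of the incident modulated Gaussian pulse yields diffraction coefficients across the whole excited band from a single run. The pulse parameters, the delayed-and-scaled stand-in for the diffracted field, and the 5-15 GHz band are illustrative assumptions.

```python
import numpy as np

dt, n = 1e-12, 4096                      # time step and record length (illustrative)
t = np.arange(n) * dt

# Modulated Gaussian incident pulse (carrier f0, width tau) -- illustrative values.
f0, tau, t0 = 10e9, 0.2e-9, 1e-9
incident = np.exp(-((t - t0) / tau) ** 2) * np.cos(2 * np.pi * f0 * (t - t0))

# Stand-in for the diffracted field recorded at an observation point; in
# practice this time series comes from the FDTD simulation, not a formula.
diffracted = 0.1 * np.roll(incident, 200)

freqs = np.fft.rfftfreq(n, dt)
D = np.fft.rfft(diffracted) / np.fft.rfft(incident)  # wideband coefficient

band = (freqs > 5e9) & (freqs < 15e9)    # keep only the well-excited band
print(freqs[band][:3], np.abs(D[band][:3]))
```

Restricting to the band where the incident pulse carries energy avoids dividing by a near-zero spectrum outside it.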
Abstract:
The effectiveness of higher-order spectral (HOS) phase features in speaker recognition is investigated by comparison with Mel-Cepstral features on the same speech data. Unlike Mel-frequency Cepstral coefficients (MFCC), HOS phase features retain phase information from the Fourier spectrum. Gaussian mixture models are constructed from Mel-Cepstral features and HOS features, respectively, for the same data from various speakers in the Switchboard telephone speech corpus. Feature clusters, model parameters, and classification performance are analyzed. HOS phase features on their own provide a correct identification rate of about 97% on the chosen subset of the corpus, the same level of accuracy as provided by MFCCs. Cluster plots and model parameters are compared to show that HOS phase features can provide complementary information to better discriminate between speakers.
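A minimal sketch of the recognition scheme described above, using synthetic feature vectors in place of real MFCC or HOS phase features: one Gaussian mixture model is trained per speaker, and a test utterance is assigned to the model with the highest average log-likelihood.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in feature vectors (e.g., MFCC or HOS phase features) for two speakers.
train = {
    "spk_a": rng.normal(0.0, 1.0, size=(500, 12)),
    "spk_b": rng.normal(0.5, 1.2, size=(500, 12)),
}

# One GMM per speaker, trained on that speaker's features.
models = {spk: GaussianMixture(n_components=8, covariance_type="diag",
                               random_state=0).fit(X)
          for spk, X in train.items()}

# Identify a test utterance by the model with the highest average log-likelihood.
test = rng.normal(0.5, 1.2, size=(200, 12))
scores = {spk: gmm.score(test) for spk, gmm in models.items()}
print(max(scores, key=scores.get))  # expected: spk_b
```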
Abstract:
Aims – To develop local contemporary coefficients for the Trauma Injury Severity Score in New Zealand, TRISS(NZ), and to evaluate their performance at predicting survival against the original TRISS coefficients. Methods – Retrospective cohort study of adults who sustained a serious traumatic injury and survived until presentation at Auckland City, Middlemore, Waikato, or North Shore Hospitals between 2002 and 2006. Coefficients were estimated using ordinary and multilevel mixed-effects logistic regression models. Results – 1735 eligible patients were identified, 1672 (96%) injured by a blunt mechanism and 63 (4%) by a penetrating mechanism. For blunt mechanism trauma, 1250 (75%) were male and the average age was 38 years (range: 15-94 years). TRISS information was available for 1565 patients, of whom 204 (13%) died. The areas under the receiver operating characteristic (ROC) curve were 0.901 (95% CI: 0.879-0.923) for the TRISS(NZ) model and 0.890 (95% CI: 0.866-0.913) for TRISS (P<0.001). Insufficient data were available to determine coefficients for penetrating mechanism TRISS(NZ) models. Conclusions – Both TRISS models accurately predicted survival for blunt mechanism trauma. However, the TRISS(NZ) coefficients were statistically superior to the TRISS coefficients. A strong case exists for replacing TRISS coefficients in the New Zealand benchmarking software with these updated TRISS(NZ) estimates.
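For context, the standard TRISS model is a logistic function of the Revised Trauma Score (RTS), the Injury Severity Score (ISS), and an age index, with mechanism-specific coefficients. A sketch follows; the example coefficient values are illustrative placeholders, not the MTOS or TRISS(NZ) estimates.

```python
import math

def triss_ps(rts, iss, age_index, b):
    """Standard TRISS probability of survival: Ps = 1 / (1 + e^(-x)).

    age_index is 0 for age < 55 years and 1 otherwise; b holds the four
    mechanism-specific coefficients (intercept, RTS, ISS, age).
    """
    b0, b1, b2, b3 = b
    x = b0 + b1 * rts + b2 * iss + b3 * age_index
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative coefficient values only -- substitute the published MTOS or
# TRISS(NZ) estimates when computing real probabilities.
print(triss_ps(rts=7.84, iss=16, age_index=0, b=(-0.45, 0.81, -0.08, -1.74)))
```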
Abstract:
Engineering assets are often complex systems. In a complex system, components often have failure interactions, which lead to interactive failures; these in turn can increase the system's failure probability. Hence, interactive failures may have to be taken into account when designing and maintaining complex engineering systems. To address this issue, Sun et al. developed an analytical model of interactive failures in which the degree of interaction between two components is represented by interactive coefficients. To use this model for failure analysis, the related interactive coefficients must be estimated; however, methods for estimating them have not been reported. To fill this gap, this paper presents five methods to estimate the interactive coefficients: a probabilistic method; a failure-data-based analysis method; a laboratory experimental method; a failure-interaction-mechanism-based method; and an expert estimation method. Examples are given to demonstrate the applications of the proposed methods, and comparisons among the methods are also presented.
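The abstract does not give the model's equations, so the following is only a loose illustration of the failure-data-based idea: measuring how much one component's failure raises another's observed failure frequency. The quantity computed is a hypothetical stand-in, not the definition used by Sun et al.

```python
import numpy as np

# Hypothetical failure records: each row is one observation window, and the
# two columns flag whether components i and j failed in that window.
records = np.array([[1, 1], [0, 0], [1, 1], [0, 1], [1, 0],
                    [0, 0], [1, 1], [0, 0], [0, 1], [1, 1]])
fail_i, fail_j = records[:, 0].astype(bool), records[:, 1].astype(bool)

p_i = fail_i.mean()                  # marginal failure frequency of i
p_i_given_j = fail_i[fail_j].mean()  # frequency of i given j failed

# Stand-in "interactive coefficient": relative increase in i's failure
# frequency when j has failed (zero would mean no interaction).
theta_ij = (p_i_given_j - p_i) / p_i
print(round(theta_ij, 3))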
Abstract:
Background: Currently used Trauma and Injury Severity Score (TRISS) coefficients, which measure probability of survival (Ps), were derived from the Major Trauma Outcome Study (MTOS) in 1995 and are now unlikely to be optimal. This study aims to estimate new TRISS coefficients using a contemporary database of injured patients presenting to emergency departments in the United States, and to compare these against the MTOS coefficients.
Methods: Data were obtained from the National Trauma Data Bank (NTDB) and the NTDB National Sample Project (NSP). TRISS coefficients were estimated using logistic regression. Separate coefficients were derived from complete case and multistage multiple imputation analyses for each of the NTDB and NSP datasets. Associated Ps over Injury Severity Score values were graphed and compared by age (adult ≥ 15 years; pediatric < 15 years) and injury mechanism (blunt; penetrating) groups. The area under the receiver operating characteristic (ROC) curve was used to assess the coefficients' predictive performance.
Results: Overall, 1,072,033 NTDB and 1,278,563 weighted NSP injury events were included, compared with 23,177 used in the original MTOS analyses. Large differences were seen between results from complete case and imputed analyses. For blunt mechanism and adult penetrating mechanism injuries, there were similarities between coefficients estimated on imputed samples, and marked divergences between the associated Ps estimates and those from the MTOS. However, negligible differences existed between the area under the ROC curve estimates because the overwhelming majority of patients had minor trauma and survived. For pediatric penetrating mechanism injuries, variability in coefficients was large and Ps estimates were unreliable.
Conclusions: Imputed NTDB coefficients are recommended as the 2009 revision of the TRISS coefficients for blunt mechanism and adult penetrating mechanism injuries. Coefficients for pediatric penetrating mechanism injuries could not be reliably estimated.
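A minimal sketch of the estimation step, on synthetic data: TRISS coefficients are the maximum-likelihood estimates from a logistic regression of survival on RTS, ISS, and an age index. The "true" coefficients used to simulate the data are illustrative, and the multistage multiple imputation used in the study is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic registry: RTS, ISS, age index (>= 55 years), and survival outcome.
n = 5000
rts = rng.uniform(0.0, 7.84, n)
iss = rng.integers(1, 75, n).astype(float)
age = rng.integers(0, 2, n).astype(float)
logit = -0.45 + 0.81 * rts - 0.08 * iss - 1.74 * age   # illustrative "truth"
survived = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([rts, iss, age])
# Unpenalized fit = maximum likelihood (sklearn >= 1.2; older versions use
# penalty="none").
model = LogisticRegression(penalty=None).fit(X, survived)
print("b0:", model.intercept_[0], "b1..b3:", model.coef_[0])
```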
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics; the decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision.
To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed, based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used; in this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed: no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.
Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while accounting for the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78].
For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced, and lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented.
The quality of the reconstructed images is important for reliable identification, so enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
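A minimal sketch of the coefficient-modelling step, assuming PyWavelets for the decomposition and a synthetic image in place of a real fingerprint: the thesis fits the generalized Gaussian shape parameter by least squares on a nonlinear function of that parameter, while the simpler moment-matching estimator below illustrates the same modelling idea.

```python
import numpy as np
import pywt
from scipy.optimize import brentq
from scipy.special import gamma

# A wavelet subband from a stand-in "image" (a real fingerprint would be loaded).
rng = np.random.default_rng(0)
image = rng.standard_normal((256, 256))
_, (lh, hl, hh) = pywt.wavedec2(image, "bior4.4", level=1)
coeffs = hh.ravel()

# Moment-matching estimator for the generalized Gaussian shape parameter nu:
# for a GGD, E[x^2] / (E|x|)^2 = Gamma(1/nu) * Gamma(3/nu) / Gamma(2/nu)^2.
def ggd_ratio(nu):
    return gamma(1.0 / nu) * gamma(3.0 / nu) / gamma(2.0 / nu) ** 2

r = np.mean(coeffs**2) / np.mean(np.abs(coeffs)) ** 2
nu_hat = brentq(lambda nu: ggd_ratio(nu) - r, 0.1, 10.0)
print("estimated GGD shape parameter:", nu_hat)
```

For the Gaussian stand-in the estimate lands near nu = 2; real fingerprint subbands are sharper-peaked, which is what drives the subband-specific quantizer design.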