899 results for Transformada de Wavelet
Abstract:
The theory and methodology of broadband teleseismic body-waveform inversion are presented, and methods for determining crustal structure are developed. Based on teleseismic P-wave data, a theoretical P-wave radial component is computed as the convolution of the teleseismic P-wave vertical component with the transfer function, and a P-waveform inversion method is built on this relation. Applications show that the approach is effective, stable, and of high resolution. Accurate and reliable teleseismic P waveforms recorded by CDSN and IRIS are used to obtain lithospheric transfer functions for China and its vicinity; the lithospheric structure of the region is then inverted from these transfer functions, yielding new knowledge of the deep structure of China and adjacent areas and providing seismological evidence for reconstructing geodynamic evolution and developing the theory of continental collision. The main studies are as follows. Two important methods for studying crustal and upper-mantle structure, body-wave travel-time inversion and waveform modeling, are reviewed systematically. Travel-time inversion, based on ray theory, is simple; 1-D travel-time inversion yields a preliminary crustal and upper-mantle velocity model that serves as a reference for studies of earthquake location, focal mechanisms, and fine structure, while three-dimensional travel-time tomography resolves the large-scale lateral heterogeneity of the crust and upper mantle. Waveform modeling, based on elastodynamics, fits theoretical seismograms to observed ones and can therefore explain waveform details and reveal one-dimensional fine structure and lateral variation, especially the properties of media in zones where ray theory breaks down. Both travel-time inversion and waveform modeling rest on approximations, have complementary advantages and disadvantages, and provide convincing structural constraints for elucidating the physical and chemical state and geodynamic processes of the crust and upper mantle. Direct, surface, and refracted waves have low resolution in velocity transition zones and are therefore inadequate for studying seismic discontinuities. Converted and reflected waves, which sample the discontinuities directly, must instead be carefully extracted from seismograms to constrain the velocity transition zones; they can be used to study both crustal structure and upper-mantle discontinuities. A number of global and regional seismic discontinuities exist in the crust and upper mantle, and they play a significant role in understanding its physical and chemical properties and geodynamic processes. Broadband teleseismic P-waveform inversion is studied in particular detail.
Teleseismic P waveforms carry information about the source time function, near-source structure, propagation through the mantle, receiver structure, and instrument response. A receiver function is isolated from the teleseismic P waveform by rotating the horizontal components into the ray direction and deconvolving the vertical component from the radial and tangential components of ground motion; the resulting time series is dominated by the local receiver structure and is nearly insensitive to source and deep-mantle effects. The receiver function is a radial response in which multiple P reflections are suppressed while the direct wave and P-to-S conversions are retained, and it is sensitive to the vertical variation of S-wave velocity. The velocity structure beneath a station responds differently to the radial and vertical components of an incident teleseismic P wave. To avoid the limitations of a simplified assumption about the vertical response, the receiver-function method is modified: in the frequency domain the transfer function is expressed as the ratio of the radial to the vertical response of the medium to the P wave, and in the time domain the radial synthetic waveform is obtained by convolving this transfer function with the vertical waveform. To overcome numerical instability, the generalized reflection and transmission coefficient matrix method is used to compute the synthetic waveforms so that all multiple reflections and converted phases are included. A new inversion scheme, the VFSA-LM method, is used in this study; it combines very fast simulated annealing (VFSA) with damped least-squares (LM) inversion, and synthetic waveform inversion tests confirm its effectiveness and efficiency. Broadband teleseismic P-waveform inversion is then applied to the lithospheric velocity structure of China and its vicinity. From high-quality CDSN and IRIS data we obtained an outline map of the distribution of Asian continental crustal thickness. Based on these results, the distribution of crustal thickness and the outline of crustal structure beneath the Asian continent are analyzed, and the principal characteristics of the Asian continental crust are proposed: there exist four large areas of relatively minor variation in crustal thickness, namely the northern, eastern, southern, and central areas of the Asian crust. As a byproduct, earthquake location, a basic issue in seismology, is discussed. Because of the strong trade-off between the assumed origin time and focal depth and the nonlinearity of the inverse problem, this issue remains unsettled. To address it, a new location method, the SAMS method, is presented, in which the objective function is the absolute value of the travel-time residuals and the inversion uses fast simulated annealing. Applied to the relocation of the Chi-Chi, Taiwan earthquake of September 21, 1999, the results show that the SAMS method not only reduces the effect of the trade-off between origin time and focal depth but also achieves better stability and resolving power. Finally, an inverse Q filtering method for compensating attenuation and frequency dispersion in depth-domain seismic sections is discussed.
Forward and inverse tests on synthetic seismic records show that our depth-domain Q filtering operator is consistent with wave behavior in absorbing media: it accounts not only for the absorption of the waves by the medium but also for the accompanying deformation, namely the frequency dispersion of the body wave. Two post-stack profiles of about 60 km from a neritic area of China were processed; the results show that after forward Q filtering in the depth domain the wavelet width in the middle and deep layers is compressed, the resolution and signal-to-noise ratio are enhanced, and the primary shape and energy distribution of the profile are preserved.
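As a minimal sketch (not the thesis code) of the relation the abstract describes, the transfer function can be estimated in the frequency domain as the ratio of the radial to the vertical P-wave response, and a radial synthetic recovered by convolving that transfer function with the vertical trace. The water-level stabilization and its value here are illustrative assumptions.

```python
import numpy as np

def transfer_function(radial, vertical, water_level=0.01):
    """Estimate T(f) = R(f) / Z(f) with water-level regularization."""
    n = len(radial)
    R = np.fft.rfft(radial, n)
    Z = np.fft.rfft(vertical, n)
    denom = (Z * np.conj(Z)).real
    denom = np.maximum(denom, water_level * denom.max())   # avoid spectral holes
    return R * np.conj(Z) / denom

def radial_synthetic(vertical, T):
    """Time-domain radial synthetic: convolution = multiplication in frequency."""
    Z = np.fft.rfft(vertical, len(vertical))
    return np.fft.irfft(Z * T, len(vertical))
```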
Abstract:
This dissertation addresses signal reconstruction and data restoration in seismic data processing, taking signal representation methods as the main thread and seismic information reconstruction (signal separation and trace interpolation) as the core. For representation on natural bases, the fundamentals and algorithms of ICA are presented, together with original applications to the separation of natural earthquake signals and of exploration seismic signals. For representation on deterministic bases, the thesis develops least-squares inversion regularization for seismic data reconstruction, sparseness constraints, preconditioned conjugate gradient (PCG) methods, and their applications to seismic deconvolution, Radon transformation, and related problems. The core content is a de-aliasing reconstruction algorithm for unevenly sampled seismic data and its application to seismic interpolation. Although two cases of signal representation are discussed, they can be placed in one framework, because both deal with signal or information restoration: the former reconstructs original signals from mixed signals, the latter reconstructs complete data from sparse or irregular data. Their common goal is to provide pre- and post-processing methods for seismic pre-stack depth migration. ICA can separate original signals from their mixtures or extract the basic structure of the analyzed data. The fundamentals, algorithms, and applications of ICA are surveyed, and, by comparison with the KL transform, the concept of an independent components transformation (ICT) is proposed. Based on the negentropy measure of independence, FastICA is implemented and improved using the covariance matrix. After analyzing the characteristics of seismic signals, ICA is introduced into seismic signal processing for the first time in the geophysical community and used to separate noise from seismic signals; synthetic and real data examples show that ICA is applicable to seismic signal processing and that initial results are encouraging. ICA is also applied to separating converted waves from multiples in a sedimentary area, with good results, leading to a more reasonable interpretation of subsurface discontinuities. These results indicate the promise of ICA for geophysical signal processing. Exploiting the relationship between ICA and blind deconvolution, seismic blind deconvolution is surveyed and two possible ways of applying ICA to it are discussed. The relationship among PCA, ICA, and the wavelet transform is described, and it is shown that the reconstruction of wavelet prototype functions is a Lie group representation. In addition, an over-sampled wavelet transform is proposed to enhance seismic data resolution and is validated by numerical examples. The key to pre-stack depth migration is the regularization of pre-stack seismic data, for which seismic interpolation and missing-data reconstruction are necessary steps. The seismic imaging methods are first reviewed to argue the critical role of regularization; a review of seismic interpolation algorithms then shows that de-aliased reconstruction of unevenly sampled data remains a challenge. The fundamentals of seismic reconstruction are discussed first, and then sparseness-constrained least-squares inversion and a preconditioned conjugate gradient solver are studied and implemented.
Choosing a Cauchy-distributed constraint term, a PCG algorithm is programmed and used to implement sparse seismic deconvolution and high-resolution Radon transformation, in preparation for seismic data reconstruction. In seismic interpolation, de-aliased interpolation of regularly sampled data and reconstruction of irregularly sampled data each work well on their own, but previously they could not be combined. In this thesis a new Fourier-transform-based method and algorithm are proposed that can reconstruct seismic data that are both irregularly sampled and aliased. Band-limited data reconstruction is formulated as a minimum-norm least-squares inversion problem with an adaptive DFT-weighted norm regularization term. The inverse problem is solved with a preconditioned conjugate gradient method, which makes the solution stable and rapidly convergent. Under the assumption that seismic data consist of a finite number of linear events, and following the sampling theorem, aliased events can be attenuated by least-squares weights predicted linearly from the low frequencies. Three applications are discussed: interpolation of regular gaps, filling of irregular gaps, and reconstruction of high-frequency traces from low-frequency data constrained by a few high-frequency traces. Synthetic and real data examples show that the proposed method is valid, efficient, and practical, and the research is valuable for seismic data regularization and crosswell seismics. To meet the data requirements of 3-D shot-profile depth migration, the data must be regularized to match the velocity dataset; the methods of this thesis are used to interpolate and extrapolate shot gathers instead of simply inserting zero traces, so the migration aperture is enlarged and the migration result is improved. The results demonstrate the method's effectiveness and practicality.
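The following is a simplified, illustrative sketch of band-limited minimum-norm reconstruction of irregularly sampled data, in the spirit of the regularized least-squares inversion described above. It is not the thesis algorithm: the regularization here is a fixed damping term and a fixed low-frequency basis rather than an adaptive DFT weighting solved with PCG, and sample times are assumed normalized to [0, 1).

```python
import numpy as np

def reconstruct(t_irr, d_irr, t_reg, n_freq=16, eps=1e-3):
    """Fit low-frequency Fourier coefficients to irregular samples (t_irr, d_irr)
    by damped least squares, then evaluate the model on the regular grid t_reg."""
    k = np.arange(1, n_freq + 1)
    def basis(t):
        t = np.asarray(t, dtype=float)[:, None]          # times in [0, 1)
        return np.hstack([np.ones((len(t), 1)),
                          np.cos(2 * np.pi * t * k),
                          np.sin(2 * np.pi * t * k)])
    A = basis(t_irr)                                      # forward (sampling) operator
    # damped normal equations: (A^T A + eps I) m = A^T d
    m = np.linalg.solve(A.T @ A + eps * np.eye(A.shape[1]), A.T @ d_irr)
    return basis(t_reg) @ m
```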
Abstract:
This study concerns the propagation of elastic waves in anisotropic and nonlinear media, modeled with numerical methods of high accuracy and stability. The main results are as follows. First, starting from the third-order elastic energy formula, the principle of energy conservation, and the circumvolved matrix method, we derive, for the first time, the equations of nonlinear elastic waves with two dimensions and three components in VTI media. Second, several conclusions about the numerical methods are obtained: the spatial sampling step should be about 1/8 to 1/12 of the dominant wavelength to distinctly reduce numerical dispersion, while higher-order conventional finite-difference (CFD) schemes contribute little to preventing the accumulation of numerical error with time; to reach an accuracy comparable to the fourth-order centered finite-difference method, the half truncation length of the SFFT should be no less than 7; the FDFCT method yields solutions without obvious dispersion when the FCT parameters are suitable (in our experience, between 0.0001 and 0.07); and the NADM method not only provides higher-order accuracy (above the fourth-order and below the sixth-order finite-difference method) but also distinctly reduces numerical dispersion. Third, based on numerical and theoretical analysis, we describe nonlinear responses that accumulate with time, such as waveform distortion, harmonic generation, and resonant peak shift, in the propagation of one- and two-dimensional nonlinear elastic waves, and we conclude that these responses are controlled by the product of the nonlinear strength (SN) and the source amplitude. Finally, the modified FDFCT method presented here is used to model two-dimensional nonlinear elastic waves propagating in VTI media, and wavelet analysis and polarization analysis are adopted to interpret the numerical results. Under the conditions studied (weak nonlinear strength, a thin nonlinear layer of 200 m, weak source energy, and weak anisotropy), we find the following: the nonlinear response of elastic waves in VTI media is itself anisotropic; the instantaneous dominant-frequency sections of seismic records from a model containing a nonlinear layer differ by about 1/4 to 1/2 of the initial dominant source frequency from those of the model without the nonlinear layer; and the anisotropic and nonlinear responses of the elastic waves interact markedly, that is, the nonlinear response is stronger in certain directions because of the anisotropy, and the apparent anisotropic strength is stronger when the medium is nonlinear.
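A minimal 1-D illustration (not the thesis's nonlinear VTI code) of the sampling rule quoted above: keeping the grid spacing at roughly 1/8 to 1/12 of the dominant wavelength limits numerical dispersion in low-order finite-difference schemes. The wave speed, frequency, and grid sizes are illustrative values.

```python
import numpy as np

c = 2000.0                 # wave speed, m/s (illustrative)
f_dom = 25.0               # dominant source frequency, Hz (illustrative)
wavelength = c / f_dom
dx = wavelength / 10.0     # ~10 grid points per wavelength, inside the 8-12 range
dt = 0.5 * dx / c          # CFL-stable time step for the explicit scheme

nx, nt = 400, 600
u_prev = np.zeros(nx)
u_curr = np.zeros(nx)
u_curr[nx // 2] = 1.0      # crude impulsive source
r2 = (c * dt / dx) ** 2
for _ in range(nt):        # second-order centered finite differences in x and t
    u_next = np.zeros(nx)
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + r2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_prev, u_curr = u_curr, u_next
```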
Abstract:
This thesis mainly discusses the wavelet transform and the frequency-division method. It describes frequency-division processing of pre-stack and post-stack seismic data and its application to inversion-based noise attenuation, frequency-division residual static correction, and the use of high-resolution data in reservoir inversion. The thesis not only develops frequency division and inversion in theory but also verifies them with model calculations; all the methods are integrated, and processing of field data demonstrates the results. The differences and limitations of t-x and f-x prediction-filter noise attenuation are analyzed from the standpoint of wavelet-transform theory. We argue that noise and signal can be separated by wavelet frequency-division processing according to their differences in phase, amplitude, and frequency. Comparison with f-x coherent-noise removal confirms the effectiveness and practicability of frequency division for suppressing coherent and random noise. To avoid side effects in noise-free areas, an area-constrained strategy is adopted in which frequency-division processing is applied only within the noise area, which solves the problem of low-frequency loss in noise-free areas. Residual moveout in seismic data processing strongly affects stacked images and resolution, and different frequency components have different residual moveout. Frequency-division residual static correction performs the frequency division and computes the residual statics for each band; it thereby accommodates different residual statics at different frequencies and protects the high-frequency information in the data. Processing of field data gives good results in removing residual moveout from pre-stack data, improving stack image quality, and increasing data resolution. The thesis also analyzes the character of random noise and its description in the time and frequency domains, presents an inversion-based prediction solution, and implements frequency-division inversion attenuation of random noise; analysis of field-data results shows that noise removal by inversion has its own advantages. By analyzing the parameters related to resolution and the technology of high-resolution data processing, the thesis describes the relation between the frequency domain and resolution, the parameters that control resolution, and methods to increase it; it also gives processing flows for high-resolution data and examines the effect of high-resolution data on reservoir inversion, finally demonstrating the accuracy and precision of the reservoir inversion results. The research shows that frequency-division noise attenuation, frequency-division residual static correction, and inversion-based noise attenuation are effective methods for increasing the signal-to-noise ratio and resolution of seismic data.
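A minimal sketch of the frequency-division idea described above: split a trace into wavelet sub-bands, attenuate selected bands where noise is expected, and reconstruct. The wavelet, level, and attenuation factor are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
import pywt

def frequency_division_denoise(trace, wavelet="db4", level=4, keep=0.3):
    """Attenuate the finest-scale detail band, where random noise typically
    dominates, and leave the other sub-bands untouched."""
    coeffs = pywt.wavedec(np.asarray(trace, dtype=float), wavelet, level=level)
    coeffs[-1] = coeffs[-1] * keep                 # scale down highest-frequency band
    return pywt.waverec(coeffs, wavelet)[: len(trace)]
```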
Abstract:
The Ordos Basin is a typical cratonic petroliferous basin with 40 oil- and gas-bearing bed sets. It is characterized by stable multicycle sedimentation, gentle formations, and few structures. The reservoir beds in the Upper Paleozoic and Mesozoic are mainly of low density and low permeability, with strong lateral change and strong vertical heterogeneity. The well-known Loess Plateau in the south and the Maowusu Desert, Kubuqi Desert, and Ordos grasslands in the north cover the basin, so seismic acquisition is very difficult and the data often suffer from inadequate precision, strong interference, low signal-to-noise ratio, and low resolution. Because of the complicated surface and subsurface conditions, it is very difficult to distinguish thin beds and to study continental high-resolution lithologic sequence stratigraphy from routine seismic profiles. A method with clear physical meaning, based on advanced mathematical-physical theory and algorithms, is therefore needed to improve the precision with which thin interbedded continental sands can be detected.

Generalized S Transform (GST) processing provides a new phase-space analysis method for seismic data. Like the wavelet transform, it has very good localization properties; but because it is directly related to the Fourier spectrum, the GST has a clearer physical meaning. Moreover, the GST adopts a technique for best approximating seismic wavelets and transforms the seismic data into the time-scale domain, breaking through the fixed-wavelet limitation of the S transform, so it is widely adaptable. Tracing the development of ideas and theory from the wavelet transform through the S transform to the GST, we studied how to improve the precision of thin-bed detection with the GST.

Noise strongly affects sequence detection in the GST domain, especially for low signal-to-noise data. We studied the distribution of colored noise in the GST domain and proposed a technique for discriminating signal from noise there. Two types of noise were considered, white noise and red noise, the latter satisfying a statistical autoregressive model; for both models the GST-domain signal-noise detection technique gives good results. This demonstrates that the technique can be applied to real seismic data and can effectively suppress the influence of noise on seismic sequence detection.

On seismic profiles after GST processing, zones of concentrated high-amplitude energy and platy, stripe-shaped, or lenticular dead and disordered zones may carry specific geological meaning within a given geological setting. Using seismic sequence detection profiles together with other seismic interpretation techniques, we can depict palaeo-geomorphology in detail, estimate sand-body extent, distinguish sedimentary facies, determine target areas, and directly guide oil and gas exploration.

In lateral reservoir prediction in the XF oilfield of the Ordos Basin, the study of the Triassic palaeo-geomorphology and the subdivision of the internal sequence of the formation played a very important role in estimating sand-body extent. From the high-resolution seismic profiles after GST processing, we concluded that the C8 Member of the Yanchang Formation in the DZ area and the C8 Member in the BM area belong to the same deposit.
This provided the basis for booking 430 million tons of predicted reserves and for jointly building 3 million tons of production capacity. In the key research for the SLG gas field, the high-resolution seismic sequence profiles indicate that the depositional direction of the H8 member is approximately N-S or NNE-SSW. Using the seismic sequence profiles together with layer-level profiles, the shape of the entrenched streams can be interpreted: sunken lenticular bodies indicate high-energy channels with stronger hydrodynamics. In this way the outlines of three high-energy channels were mapped and target areas for exploitation were determined; finding high-energy braided rivers by high-resolution sequence processing is the key technology in the SLG area. In the ZZ area, GST processing was used to study the distribution of the main reservoir bed, S23, a thin shallow-delta sand bed. The seismic sequence profiles show that the thick platy sand bodies are only locally distributed and that most of the sands are distributary-channel and distributary-bar deposits; the depositional direction of the S23 sand is NW-SE in the west, N-S in the centre, and NE-SW in the east. The high-resolution seismic sequence interpretation profiles were tested against 14 wells; 2 wells mismatched, a coincidence rate of 85.7%. Based on the profiles, 3 prediction wells were proposed; one well (Yu54) has been completed and the other two are still drilling, and the completed well is consistent with the forecast. The work demonstrates that the GST is an effective technique for obtaining high-resolution seismic sequence profiles, subdividing depositional microfacies, confirming sandstone strike directions, and delimiting the distribution of oil- and gas-bearing sandstone, and that it is the key technique for exploring lithologic oil and gas pools in complicated areas.
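For orientation, the sketch below implements the standard Stockwell (S) transform, the starting point that the abstract generalizes; the thesis's GST replaces the fixed Gaussian window with an adaptive wavelet-approximating one, which is not reproduced here.

```python
import numpy as np

def s_transform(x):
    """Standard S transform of a real trace x.
    Returns S[f, t] for integer frequencies f = 1..n//2 (DC row omitted)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    X = np.fft.fft(x)
    m = np.fft.fftfreq(n) * n                      # signed frequency indices, FFT order
    rows = []
    for f in range(1, n // 2 + 1):
        window = np.exp(-2 * np.pi**2 * m**2 / f**2)   # Gaussian voice window
        rows.append(np.fft.ifft(np.roll(X, -f) * window))
    return np.array(rows)
```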
Abstract:
Cassava (Manihot esculenta Crantz) is cultivated in every region of Brazil and plays an important role in human and animal nutrition, as a raw material for various industrial products, and in generating employment and income. Brazil holds second place in world cassava production (12.7% of the total), with a cultivated area of about 1.7 million hectares, production of around 22.6 million tons of roots, and an average yield of 13.3 t/ha. The main producing states are Pará (18.0%), Bahia (16.3%), Paraná (12.5%), Rio Grande do Sul (5.0%), and Amazonas (4.3%), which together account for 56.1% of national production. It is estimated that primary production and the processing of flour and starch generate around one million direct jobs, that the cassava industry yields annual gross revenue equivalent to 2.5 billion dollars and a tax contribution of 150 million dollars, and that the production transformed into flour and starch generates revenues equivalent to 600 million and 150 million dollars, respectively. The Northeast region stands out with 36.8% of national production; the other regions contribute 28.7% (North), 19.7% (South), 8.8% (Southeast), and 6.0% (Center-West).
Abstract:
This work aims to develop near-infrared reflectance spectroscopy (NIRS) calibration curves for dry matter, protein, and phosphorus contents in samples of processed maize. Fourier-transform infrared spectroscopy with the diffuse reflectance technique was used, and the spectral data were correlated with the nutritional values of the maize through partial least squares (PLS) regression with different mathematical pre-treatments of the spectra. To build the calibration model, reference data from chemical analyses of dry matter, crude protein, and phosphorus (P) content were used for 191 samples of maize grain of different origins and varieties; of these, 114 samples were used for the calibration model and 48 for validation. Near-infrared reflectance spectroscopy, combined with multivariate calibration (PLS), is a viable alternative technique for determining total protein and dry matter contents in ground maize samples. The curves fitted for crude protein, dry matter, and phosphorus performed adequately for use with samples from screening trials or where there are many sample replicates per treatment; for use in routine laboratory analytical determinations, the calibration models still need to be improved.
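An illustrative sketch of the calibration workflow described above: spectra are preprocessed (a Savitzky-Golay first derivative here, one of several common pretreatments) and regressed against reference values with PLS, with a cross-validated error as a quality check. Array names, the number of components, and the fold count are assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def calibrate(spectra, y, n_components=8):
    """spectra: (n_samples, n_wavelengths); y: reference protein/dry-matter/P values."""
    X = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)
    pls = PLSRegression(n_components=n_components)
    y_cv = cross_val_predict(pls, X, y, cv=10)        # cross-validated predictions
    rmsecv = float(np.sqrt(np.mean((y_cv - y) ** 2)))  # root-mean-square error of CV
    pls.fit(X, y)                                      # final model on all samples
    return pls, rmsecv
```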
Abstract:
The main challenges in enzyme classification in protein structure databases are: 1) noise in the data; 2) the large number of variables; and 3) the unbalanced number of members per class. To address these challenges, a feature selection methodology is presented that combines mathematical tools (e.g., the Discrete Cosine Transform) and statistical tools (e.g., variable correlation and resampling with replacement). The methodology was validated with the three main classification methods in the literature, namely decision trees, Bayesian classification, and neural networks. The experiments show that the methodology is simple and efficient and achieves results similar to those obtained by the main feature selection techniques in the literature. Index terms: enzyme classification, protein function prediction, protein structures, protein databases, feature selection, data classification methods.
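A minimal sketch of the kind of feature-selection pipeline the abstract describes: a Discrete Cosine Transform compacts each attribute vector, and a simple correlation filter drops redundant variables before classification. The thresholds and the decision-tree classifier are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct
from sklearn.tree import DecisionTreeClassifier

def select_features(X, n_keep=32, corr_max=0.95):
    """X: (n_samples, n_variables). Returns a reduced feature matrix."""
    Xd = dct(X, axis=1, norm="ortho")[:, :n_keep]      # energy-compacting transform
    corr = np.corrcoef(Xd, rowvar=False)
    keep = []
    for j in range(Xd.shape[1]):                       # greedy correlation filter
        if all(abs(corr[j, k]) < corr_max for k in keep):
            keep.append(j)
    return Xd[:, keep]

# usage sketch: clf = DecisionTreeClassifier().fit(select_features(X_train), y_train)
```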
Abstract:
Wydział Fizyki: Instytut Obserwatorium Astronomiczne
Abstract:
A number of problems in network operations and engineering call for new methods of traffic analysis. While most existing traffic analysis methods are fundamentally temporal, there is a clear need for the analysis of traffic across multiple network links — that is, for spatial traffic analysis. In this paper we give examples of problems that can be addressed via spatial traffic analysis. We then propose a formal approach to spatial traffic analysis based on the wavelet transform. Our approach (graph wavelets) generalizes the traditional wavelet transform so that it can be applied to data elements connected via an arbitrary graph topology. We explore the necessary and desirable properties of this approach and consider some of its possible realizations. We then apply graph wavelets to measurements from an operating network. Our results show that graph wavelets are very useful for our motivating problems; for example, they can be used to form highly summarized views of an entire network's traffic load, to gain insight into a network's global traffic response to a link failure, and to localize the extent of a failure event within the network.
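Below is a simplified realization in the spirit of graph wavelets: the coefficient at a node compares the average measurement within h hops against the average over the surrounding ring out to 2h hops, giving a zero-mean, scale-h difference. This is an illustrative sketch under assumed weights, not the paper's exact construction.

```python
import networkx as nx
import numpy as np

def graph_wavelet_coeff(G, values, center, h):
    """values: dict node -> measurement (e.g., link or node traffic); h: scale in hops."""
    dist = nx.single_source_shortest_path_length(G, center, cutoff=2 * h)
    inner = [values[n] for n, d in dist.items() if d <= h]
    outer = [values[n] for n, d in dist.items() if h < d <= 2 * h]
    if not outer:
        return 0.0                      # scale exceeds the neighborhood; no contrast
    return float(np.mean(inner) - np.mean(outer))
```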
Abstract:
This paper investigates the use of the acoustic emission (AE) monitoring technique for identifying the damage mechanisms present in paper that are associated with its production process. The microscopic structure of paper consists of a random mesh of paper fibres connected by hydrogen bonds. This implies the existence of two damage mechanisms: the failure of a fibre-fibre bond and the failure of a fibre. This paper describes a hybrid mathematical model which couples the mechanics of a mass-spring model to an acoustic wave propagation model in order to generate the acoustic signal emitted by complex structures of paper fibres under strain. The derivation of the mass-spring model can be found in [1,2], with details of the acoustic wave equation in [3,4]. The numerical implementation of the vibro-acoustic model is discussed in detail, with particular emphasis on the damping present in the numerical model. The hybrid model uses an implicit solver which intrinsically introduces artificial damping into the solution. The artificial damping is shown to affect the frequency response of the mass-spring model, so certain restrictions on the simulation time step must be enforced for the model to produce physically accurate results. The hybrid mathematical model is used to simulate small fibre networks and so provide information on the acoustic response of each damage mechanism. The simulated AEs are then analysed using a continuous wavelet transform (CWT), described in [5], which provides a two-dimensional time-frequency representation of the signal. The AEs from the two damage mechanisms show different characteristics in the CWT, so that a fibre-fibre bond failure can be defined by the following criteria: the dominant frequency components of the AE must be at approximately 250 kHz or 750 kHz; the strongest frequency component may be at either approximately 250 kHz or 750 kHz; and the duration of the frequency component at approximately 250 kHz is longer than that of the frequency component at approximately 750 kHz. Similarly, the criteria for identifying a fibre failure are: the dominant frequency component of the AE must be greater than 800 kHz; the duration of the dominant frequency component must be less than 5.0e-6 seconds; and the dominant frequency component must be present at the front of the AE. Essentially, the failure of a fibre-fibre bond produces a low-frequency wave and the failure of a fibre produces a high-frequency pulse. Using these theoretical criteria, it is now possible to train an intelligent classifier such as the Self-Organising Map (SOM) [6] on the experimental data. First, certain features must be extracted from the CWTs of the AEs for use in training the SOM: in this work each CWT is divided into 200 windows, each 5.0e-6 s in duration and covering a 100 kHz frequency range, and the power ratio in each window is calculated and used as a feature. Having extracted the features from the AEs, the SOM can be trained, but care is required so that both damage mechanisms are adequately represented in the training set; this is an issue with paper, as failure of the fibre-fibre bonds is the prevalent damage mechanism. Once a suitable training set is found, the SOM can be trained and its performance analysed. For the SOM described in this work, there is a good chance that it will correctly classify the experimental AEs.
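A sketch of the windowed CWT feature extraction described above: the scalogram is divided into time-frequency tiles and the fraction of signal power in each tile is used as a feature vector for the SOM. Tile sizes follow the text (5e-6 s by 100 kHz); the wavelet choice, scale range, and sampling rate are assumptions.

```python
import numpy as np
import pywt

def cwt_power_ratio_features(signal, fs, t_win=5e-6, f_win=1e5,
                             fmax=1e6, wavelet="morl"):
    """Return per-tile power ratios from the CWT of an AE signal sampled at fs Hz."""
    dt = 1.0 / fs
    freqs_target = np.arange(f_win / 2, fmax, f_win)       # tile centre frequencies
    scales = pywt.scale2frequency(wavelet, 1) / (freqs_target * dt)
    coeffs, _ = pywt.cwt(signal, scales, wavelet, sampling_period=dt)
    power = np.abs(coeffs) ** 2                             # scalogram power
    samples_per_win = max(1, int(round(t_win * fs)))
    n_win = power.shape[1] // samples_per_win
    tiles = power[:, : n_win * samples_per_win]
    tiles = tiles.reshape(power.shape[0], n_win, samples_per_win).sum(axis=2)
    return (tiles / tiles.sum()).ravel()                    # power ratios as features
```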
Abstract:
This paper analyses two of the likely damage mechanisms present in a paper fibre matrix placed under controlled stress conditions: fibre-fibre bond failure and fibre failure. The failure process associated with each damage mechanism is presented in detail, focusing on the change in the mechanical and acoustic properties of the surrounding fibre structure before and after failure. To present this complex process mathematically, geometrically simple fibre arrangements are chosen, based on certain assumptions regarding the structure and strength of paper, to model the damage mechanisms. The fibre structures are then formulated in terms of a hybrid vibro-acoustic model based on a coupled mass-spring system and the pressure wave equation; the model is presented in detail in the paper. The simulation of the simple fibre structures serves two purposes: it highlights the physical and acoustic differences of each damage mechanism before and after failure, and it shows the differences between the two damage mechanisms when compared with one another. The results of the simulations are given in the form of pressure wave contours, time-frequency graphs, and Continuous Wavelet Transform (CWT) diagrams. The analysis of the results leads to criteria by which the two damage mechanisms can be identified, and using these criteria it was possible to verify the results of the simulations against experimental acoustic data. The models developed in this study are of specific practical interest in the paper-making industry, where acoustic sensors may be used to monitor continuous paper production. The same techniques may be adopted more generally to correlate acoustic signals with damage mechanisms in other fibre-based structures.
Abstract:
A Concise Intro to Image Processing using C++ presents state-of-the-art image processing methodology, including current industrial practices for image compression, image de-noising methods based on partial differential equations, and new image compression methods such as fractal image compression and wavelet compression. It includes elementary concepts of image processing and related fundamental tools with coding examples as well as exercises. With a particular emphasis on illustrating fractal and wavelet compression algorithms, the text covers image segmentation, object recognition, and morphology. An accompanying CD-ROM contains code for all algorithms.
Abstract:
The grading of crushed aggregate is carried out usually by sieving. We describe a new image-based approach to the automatic grading of such materials. The operational problem addressed is where the camera is located directly over a conveyor belt. Our approach characterizes the information content of each image, taking into account relative variation in the pixel data, and resolution scale. In feature space, we find very good class separation using a multidimensional linear classifier. The innovation in this work includes (i) introducing an effective image-based approach into this application area, and (ii) our supervised classification using wavelet entropy-based features.
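A sketch of wavelet entropy-based features for supervised classification, in the spirit of the aggregate-grading approach above: relative sub-band energies from a 2-D wavelet decomposition of an image patch define a Shannon entropy feature. The wavelet, decomposition level, and linear classifier below are illustrative choices, not those of the paper.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_entropy_features(patch, wavelet="db2", level=3):
    """patch: 2-D grayscale array. Returns relative sub-band energies plus entropy."""
    coeffs = pywt.wavedec2(np.asarray(patch, dtype=float), wavelet, level=level)
    energies = [np.sum(coeffs[0] ** 2)]                   # approximation energy
    for cH, cV, cD in coeffs[1:]:                         # detail energies per level
        energies += [np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)]
    p = np.array(energies) / np.sum(energies)             # relative sub-band energy
    entropy = -np.sum(p * np.log(p + 1e-12))              # wavelet (Shannon) entropy
    return np.append(p, entropy)

# usage sketch: clf = LinearDiscriminantAnalysis().fit(feature_matrix, labels)
```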