979 results for Chemometrics, Data pretreatment, variate calibration, variate curve resolution


Relevance:

100.00%

Publisher:

Abstract:

The inversion of canopy reflectance models is widely used for the retrieval of vegetation properties from remote sensing. This study evaluates the retrieval of the soybean biophysical variables leaf area index, leaf chlorophyll content, canopy chlorophyll content, and equivalent leaf water thickness from proximal reflectance data integrated to broadbands corresponding to the Moderate Resolution Imaging Spectroradiometer (MODIS), Thematic Mapper (TM), and Linear Imaging Self-Scanning (LISS) sensors, through inversion of the canopy radiative transfer model PROSAIL. Three different inversion approaches, namely the look-up table, the genetic algorithm, and the artificial neural network, were used and their performances evaluated. Application of the genetic algorithm to crop parameter retrieval, among the variety of optimization problems in remote sensing, is a new attempt that is successfully demonstrated in the present study. Its performance was as good as that of the look-up table approach, while the artificial neural network was a poor performer. The general order of estimation accuracy for parameters, irrespective of inversion approach, was leaf area index > canopy chlorophyll content > leaf chlorophyll content > equivalent leaf water thickness. Performance of inversion was comparable for broadband reflectances of all three sensors in the optical region, with insignificant differences in estimation accuracy among them.
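The look-up-table inversion strategy can be sketched as follows; PROSAIL itself is not reproduced here, so the toy forward model, parameter ranges, and variable names below are illustrative assumptions only.

```python
import numpy as np

# Hypothetical two-parameter stand-in for a canopy radiative transfer model,
# mapping (LAI, leaf chlorophyll) to two broadband reflectances (red, NIR).
def forward_model(lai, cab):
    red = 0.30 * np.exp(-0.4 * lai) + 0.002 * cab / (cab + 20.0)
    nir = 0.50 * (1.0 - np.exp(-0.6 * lai))
    return np.array([red, nir])

# 1) Build the LUT: sample the parameter space and run the model forward.
lai_grid = np.linspace(0.1, 8.0, 80)
cab_grid = np.linspace(10.0, 80.0, 71)
params = np.array([(l, c) for l in lai_grid for c in cab_grid])
lut = np.array([forward_model(l, c) for l, c in params])

# 2) Invert: pick the LUT entry whose simulated reflectance best matches
#    (lowest RMSE) the observed reflectance.
def invert(observed):
    cost = np.sqrt(np.mean((lut - observed) ** 2, axis=1))
    return params[np.argmin(cost)]

obs = forward_model(3.0, 45.0)      # noise-free synthetic observation
lai_hat, cab_hat = invert(obs)
print(lai_hat, cab_hat)             # recovers the true (3.0, 45.0)
```

With real data, noise and model error mean the minimum-cost entry is only approximate, which is why LUT studies usually report accuracy per retrieved variable as the abstract does.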

Relevance:

100.00%

Publisher:

Abstract:

With the advances in technology, seismological theory, and data acquisition, a number of high-resolution seismic tomography models have been published. However, discrepancies between tomography models often arise from different theoretical treatments of seismic wave propagation, different inversion strategies, and different data sets. Using a fixed velocity-to-density scaling and a fixed radial viscosity profile, we compute global mantle flow models associated with the different tomography models and test their impact on explaining surface geophysical observations (geoid, dynamic topography, stress, and strain rates). We use the joint modeling of lithosphere and mantle dynamics approach of Ghosh and Holt (2012) to compute the full lithosphere stresses, except that we use HC for the mantle circulation model, which accounts for the primary flow-coupling features associated with density-driven mantle flow. Our results show that the seismic tomography models S40RTS and SAW642AN provide a better match with surface observables on a global scale than the other models tested. Both of these tomography models share important similarities, including upwellings located in the Pacific, Eastern Africa, Iceland, and along the mid-ocean ridges in the Atlantic and Indian Oceans, and downwelling flows mainly located beneath the Andes, the Middle East, and central and Southeast Asia.

Relevance:

100.00%

Publisher:

Abstract:

This report contains the first observations made for the MODIS Optical Characterization Experiment (MOCE). Data presented here were obtained on the R/V DeSteiguer between 28 August and 8 October along the central California coast and in Monterey Bay. Three types of data are reported here: high spectral resolution radiometry at three depths for seven stations; salinity, temperature, fluorescence, and beam attenuation profiles at the same stations; and total suspended matter and suspended organic carbon and nitrogen. [PDF contains 164 pages]

Relevance:

100.00%

Publisher:

Abstract:

The Alliance for Coastal Technologies (ACT) Workshop "Applications of in situ Fluorometers in Nearshore Waters" was held in Cape Elizabeth, Maine, February 2-4, 2005, with sponsorship by the Gulf of Maine Ocean Observing System (GoMOOS), one of the ACT partner organizations. The purpose of the workshop was to explore recent trends in fluorometry as it relates to resource management applications in nearshore environments. Participants included representatives from state and federal environmental management agencies as well as research institutions, many of whom are currently using this technology in their research and management applications. Manufacturers and developers of fluorometric measuring systems also attended the meeting. The workshop attendees discussed the historical and present uses of fluorometry technology and identified the great potential for its use by coastal managers to fulfill their regulatory and management objectives. Participants also identified some of the challenges associated with the correct use of fluorometers to estimate biomass and the rate of primary productivity. The workshop concluded that, in order to expand the existing use of fluorometers in both academic and resource management disciplines, several issues concerning data collection, instrument calibration, and data interpretation needed to be addressed. Participants identified twelve recommendations, the top five of which are listed below: Recommendations 1) Develop a "Guide" that describes the most important aspects of fluorescence measurements. This guide should be written by an expert party, with both research and industry input, and should be distributed by all manufacturers with their instrumentation. The guide should also be made available on the ACT website as well as those of other relevant organizations.
The guide should include discussions on the following topics: the benefits of using fluorometers in research and resource management applications; what fluorometers can and cannot provide in terms of measurements; the necessary assumptions required before applying fluorometry; and characterization and calibration of fluorometers. (PDF contains 32 pages)

Relevance:

100.00%

Publisher:

Abstract:

In this paper, a new method for designing three-zone optical pupil filters is presented. Both a phase-only and an amplitude-only optical pupil filter were designed. The first kind of pupil filter, intended for optical data storage, can increase the transverse resolution. The second kind can increase the axial and transverse resolution at the same time, which is applicable to three-dimensional imaging in confocal microscopy. (C) 2007 Elsevier GmbH. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

A series of eight related analogs of distamycin A has been synthesized. Footprinting and affinity cleaving reveal that only two of the analogs, pyridine-2-carboxamide-netropsin (2-PyN) and 1-methylimidazole-2-carboxamide-netropsin (2-ImN), bind to DNA with a specificity different from that of the parent compound. A new class of sites, represented by a TGACT sequence, is a strong site for 2-PyN binding, and the major recognition site for 2-ImN on DNA. Both compounds recognize the G•C bp specifically, although A's and T's in the site may be interchanged without penalty. Additional A•T bp outside the binding site increase the binding affinity. The compounds bind in the minor groove of the DNA sequence, but protect both grooves from dimethylsulfate. The binding evidence suggests that 2-PyN or 2-ImN binding induces a DNA conformational change.

In order to understand this sequence-specific complexation better, the Ackers quantitative footprinting method for measuring individual-site affinity constants has been extended to small molecules. MPE•Fe(II) cleavage reactions over a 10^5 range of free ligand concentrations are analyzed by gel electrophoresis. The decrease in cleavage is calculated by densitometry of a gel autoradiogram. The apparent fraction of DNA bound is then calculated from the amount of cleavage protection. The data are fitted to a theoretical curve using non-linear least squares techniques. Affinity constants at four individual sites are determined simultaneously. The distamycin A analog binds solely at A•T-rich sites. Affinities range from 10^(6) to 10^(7) M^(-1). The data for the parent compound D fit closely to a monomeric binding curve. 2-PyN binds both A•T sites and the TGTCA site with an apparent affinity constant of 10^(5) M^(-1). 2-ImN binds A•T sites with affinities less than 5 x 10^(4) M^(-1). The affinity of 2-ImN for the TGTCA site does not change significantly from the 2-PyN value. At the TGTCA site, the experimental data fit a dimeric binding curve better than a monomeric curve. Both 2-PyN and 2-ImN have substantially lower DNA affinities than closely related compounds.
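The non-linear least-squares fit of fraction bound versus free ligand concentration can be sketched as below; the monomeric binding function, the Ka value, and the synthetic data are illustrative assumptions, not the measured footprinting results.

```python
import numpy as np
from scipy.optimize import curve_fit

# Monomeric (single-ligand) and dimeric (two-ligand) binding isotherms:
# theoretical fraction of DNA bound at free ligand concentration L.
def monomer(L, Ka):
    return Ka * L / (1.0 + Ka * L)

def dimer(L, Ka):
    return (Ka * L) ** 2 / (1.0 + (Ka * L) ** 2)

# Synthetic "footprinting" data: fraction bound over several decades of
# free ligand concentration, with small measurement noise.
L = np.logspace(-8, -3, 25)                 # free ligand concentration, M
rng = np.random.default_rng(0)
theta = monomer(L, 2.0e6) + rng.normal(0.0, 0.01, L.size)

# Non-linear least-squares fit of the monomeric curve to the data.
(Ka_fit,), _ = curve_fit(monomer, L, theta, p0=[1.0e5])
print(f"fitted Ka = {Ka_fit:.3g} M^-1")
```

Comparing the residuals of the monomeric and dimeric fits at a given site is the kind of test that distinguishes the two binding modes discussed above.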

In order to probe the requirements of this new binding site, fourteen other derivatives have been synthesized and tested. All compounds that recognize the TGTCA site have a heterocyclic aromatic nitrogen ortho to the N or C-terminal amide of the netropsin subunit. Specificity is strongly affected by the overall length of the small molecule. Only compounds that consist of at least three aromatic rings linked by amides exhibit TGTCA site binding. Specificity is only weakly altered by substitution on the pyridine ring, which correlates best with steric factors. A model is proposed for TGTCA site binding that has as its key feature hydrogen bonding to both G's by the small molecule. The specificity is determined by the sequence dependence of the distance between G's.

One derivative of 2-PyN exhibits pH-dependent sequence specificity. At low pH, 4-dimethylaminopyridine-2-carboxamide-netropsin binds tightly to A•T sites. At high pH, 4-Me_(2)NPyN binds most tightly to the TGTCA site. In aqueous solution, this compound protonates at the pyridine nitrogen at pH 6. Thus, the presence of the protonated form correlates with A•T specificity.

The binding site of a class of eukaryotic transcriptional activators typified by the yeast protein GCN4 and the mammalian oncogene Jun contains a strong 2-ImN binding site. Specificity requirements for the protein and the small molecule are similar. GCN4 and 2-ImN bind simultaneously to the same binding site. GCN4 alters the cleavage pattern of a 2-ImN-EDTA derivative at only one of its binding sites. The details of the interaction suggest that GCN4 alters the conformation of an AAAAAAA sequence adjacent to its binding site. The presence of a yeast counterpart to Jun partially blocks 2-ImN binding. The differences do not appear to be caused by direct interactions between 2-ImN and the proteins, but by induced conformational changes in the DNA-protein complex. It is likely that the observed differences in complexation are involved in the varying sequence specificity of these proteins.

Relevance:

100.00%

Publisher:

Abstract:

In recent decades, theories have been formulated to interpret the behaviour of unsaturated soils, and they have proved consistent with experimental results. In parallel, several field and laboratory techniques have been developed. However, the experimental determination of unsaturated soil parameters is expensive and time-consuming and requires special equipment and experienced technicians. As a result, these theories have remained largely confined to academic research and are little used in engineering practice. To overcome this problem, several researchers have proposed equations to represent the behaviour of unsaturated soils mathematically. These proposals are based on index properties, soil characterization, conventional tests, or simply curve fitting. The relationship between water content and matric suction, conventionally called the soil-water characteristic curve (SWCC), is also a useful tool for predicting the engineering behaviour of unsaturated soils. Many equations exist to represent the SWCC mathematically. Some are based on the assumption that its shape is directly related to the pore-size distribution and therefore to the grain-size distribution; in these proposals, the parameters are calibrated by fitting the curve to experimental data. Other methods assume that the curve can be estimated directly from physical soil properties. These proposals are simple and convenient for practical use, but substantially incorrect, since they ignore the influence of water content, stress level, soil structure, and mineralogy. As a result, most have limited success, depending on the soil type. Some attempts have been made to predict the variation of shear strength with matric suction. These procedures use the SWCC, directly or indirectly, together with the effective strength parameters c and φ.
This work discusses the applicability of three equations for predicting the SWCC (Gardner, 1958; van Genuchten, 1980; Fredlund and Xing, 1994) to twenty-four samples of Brazilian residual soils. The suitability of the normalized characteristic curve proposed by Camapum de Carvalho and Leroueil (2004) was also investigated. The model parameters were determined by curve fitting using inverse-problem techniques; two methods were used: a genetic algorithm (GA) and Levenberg-Marquardt. Several parameters that influence the behaviour of the SWCC are discussed. The relationship between matric suction and shear strength was evaluated by curve fitting using the equations proposed by Öberg (1995); Sällfors (1997); Vanapalli et al. (1996); Vilar (2007); and Futai (2002); eight experimental results were analysed. The various parameters that influence the shape of the SWCC and the unsaturated component of shear strength are discussed.
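The inverse-problem curve fitting described above can be sketched as a least-squares fit of the van Genuchten (1980) SWCC equation; the data points, initial guesses, and bounds below are synthetic assumptions, not the Brazilian residual-soil results.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """Volumetric water content as a function of matric suction psi (kPa)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

# Synthetic SWCC "measurements" generated from known parameters.
psi = np.logspace(-1, 4, 20)                  # suction, kPa
true = (0.05, 0.45, 0.08, 1.6)                # theta_r, theta_s, alpha, n
theta = van_genuchten(psi, *true)

# Least-squares inverse problem: recover the four model parameters.
popt, _ = curve_fit(van_genuchten, psi, theta,
                    p0=[0.02, 0.40, 0.05, 1.3],
                    bounds=([0.0, 0.2, 1e-4, 1.01], [0.2, 0.6, 1.0, 4.0]))
print(popt)   # recovers (theta_r, theta_s, alpha, n) of the synthetic curve
```

A genetic algorithm, the other method mentioned, would replace `curve_fit` with a global search over the same bounded parameter space; it trades speed for robustness to poor initial guesses.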

Relevance:

100.00%

Publisher:

Abstract:

Computational fluid dynamics (CFD) simulations are becoming increasingly widespread with the advent of more powerful computers and more sophisticated software. The aim of these developments is to facilitate more accurate reactor design and optimization methods compared to traditional lumped-parameter models. However, in order for CFD to be a trusted method, it must be validated using experimental data acquired at sufficiently high spatial resolution. This article validates an in-house CFD code by comparison with flow-field data obtained using magnetic resonance imaging (MRI) for a packed bed with a particle-to-column diameter ratio of 2. Flows characterized by inlet Reynolds numbers, based on particle diameter, of 27, 55, 111, and 216 are considered. The code used employs preconditioning to directly solve for pressure in low-velocity flow regimes. Excellent agreement was found between the MRI and CFD data with relative error between the experimentally determined and numerically predicted flow-fields being in the range of 3-9%. © 2012 American Institute of Chemical Engineers (AIChE).
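A minimal sketch of the kind of relative-error metric used in such validations follows; the exact norm used in the article is not specified here, so the L2 form and the synthetic velocity fields are assumptions.

```python
import numpy as np

def relative_error(u_cfd, u_mri):
    """L2-norm relative error between two velocity fields on the same grid."""
    return np.linalg.norm(u_cfd - u_mri) / np.linalg.norm(u_mri)

# Synthetic stand-ins: an "MRI" velocity field and a "CFD" field that
# deviates from it by roughly 5% point-by-point.
rng = np.random.default_rng(1)
u_mri = rng.random((32, 32, 3))
u_cfd = u_mri * (1.0 + 0.05 * rng.standard_normal(u_mri.shape))

err = relative_error(u_cfd, u_mri)
print(f"relative error = {100 * err:.1f}%")
```

Reporting the error this way, as a single percentage per flow condition, matches the 3-9% range quoted in the abstract.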

Relevance:

100.00%

Publisher:

Abstract:

Numerical models are a useful tool for studying tidal waves, but their application raises practical problems, including how to specify open boundary conditions and how to choose bottom friction and dissipation coefficients. Data assimilation is one way to address these problems: a limited number of tidal observations are used to obtain an optimal estimate of the tidal wave field, the underlying aim being to force the model prediction toward the observations so that the model does not drift too far from reality. This work adopts an optimized open-boundary method that optimizes tidal water-level information along the open boundary of the numerical model, so that the numerical solution approaches the observations in a dynamically constrained sense and tidal results are obtained for the study region. The boundary values are determined by the solution of a specified optimization problem so as to improve the tidal accuracy of the simulated region; the optimal solution is based on the variation of the energy flux through the open boundary and minimizes the difference between observed and computed values at the open boundary. A radiation-type boundary condition derived by Reid and Bodine (abbreviated RB here) is provided; the optimized RB method adopted here (called ORB) is a special case of the optimized open boundary.

Tidal waves were simulated with the ECOM3D model for an idealized rectangular sea ( E– E, N– N, resolution ) with an open boundary in the east. Four measures were used to assess the quality of the simulation: mean amplitude deviation, mean absolute deviation, mean relative error, and root-mean-square (RMS) deviation.

The analytic tidal values optimized into the open boundary follow the method of Fang Guohong, "Tides and Tidal Currents in Bays" (1966). To verify that the analytic solution computed here agrees with Fang's, the key values a, b, and z of his first example were recomputed; the results agree very well, with slight differences probably caused by differences in the iteration scheme and in the number of decimal places retained. In addition, taking m = 20 gives more accurate values: compared with m = 10, the first ten parameters improve slightly, and values for still larger m can of course be obtained. To check the correctness of the analytic solution, the influence of m and l on the boundary values was also examined. For m = 20, the maximum modulus of u is 6% of that of u1 or u2; for m = 100 it is 4%; increasing m further, for m = 1000 it remains about 4%, changing little. For l < 1, the maximum modulus of u at the = 0 section is 2; for l = 1 it is 0.1; for l > 1 it decreases as l increases, and for l = 10 it is at most 0.001, effectively zero.

To test the application of the optimization method, the idealized rectangular region was simulated first for a depth of 30 m, with the optimized values imposed at the open boundary. The resulting model solution agrees very well with the analytic solution: over the whole region, the mean absolute amplitude deviation is 9.9 cm, the mean absolute phase deviation only 4.0°, and the RMS deviation only 13.3 cm, showing that the optimization method is effective in the tidal wave model.

To examine the behaviour of the optimization method under various conditions, three groups of sensitivity experiments were performed. First, to show that applying the optimization at the open boundary yields solutions closer to the analytic solution than not applying it, the ORB and RB conditions were compared for two friction coefficients, k = 0 and k = 0.00006. For both values, the ORB solution improves markedly on the RB solution in both amplitude and phase, with RMS-deviation improvements of 84.3% and 83.7%, respectively, showing that the optimized open boundary greatly improves the simulation; of the two experiments, k = 0.00006 gives better optimized results than k = 0. Second, with the ORB condition at the open boundary, inflow and outflow were imposed at the east and west boundaries, considering both linear and nonlinear flows. Adding flow degrades the tidal simulation considerably: the RMS deviations for 1 Sv and 5 Sv differ by 20 cm, whereas without flow the difference is only 0.2 cm. The model solutions for linear and nonlinear flows are nearly identical, with similar amplitude and phase statistics, indicating that the linearity of the flow has little effect on the results. Third, the ORB condition was used not only at the open boundary but also in the model interior, and the deviations from the analytic solution with and without interior optimization were compared. For different values of k, the amplitude is simulated well while the phase is relatively poor. With interior optimization, six model solutions close to the analytic solution were selected for different k; for all of them the amplitude is well reproduced but the phase remains poorer.

In summary, using the ORB condition at the open boundary is better than using the RB condition, with large improvements in both amplitude and phase; with inflow and outflow added, the magnitude of the flow affects the simulation, but linear and nonlinear flows differ little; and interior optimization reproduces the amplitude of the analytic solution well for different k.
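The four error measures used above can be sketched as follows; the complex-amplitude form of the RMS deviation is one common convention and an assumption here, as are the example harmonic constants.

```python
import numpy as np

def tidal_errors(H_mod, g_mod, H_ref, g_ref):
    """Amplitude/phase error measures between modelled and reference tides.

    H is amplitude (cm), g is phase (degrees), one entry per station."""
    dH = H_mod - H_ref
    # Treat each constituent as a complex amplitude so that amplitude and
    # phase errors combine into a single RMS deviation.
    z_mod = H_mod * np.exp(1j * np.deg2rad(g_mod))
    z_ref = H_ref * np.exp(1j * np.deg2rad(g_ref))
    return {
        "mean_amp_dev": np.mean(dH),
        "mean_abs_dev": np.mean(np.abs(dH)),
        "mean_rel_err": np.mean(np.abs(dH) / H_ref),
        "rms_dev": np.sqrt(np.mean(np.abs(z_mod - z_ref) ** 2)),
    }

# Illustrative harmonic constants at three stations.
H_ref = np.array([100.0, 80.0, 60.0]); g_ref = np.array([30.0, 45.0, 60.0])
H_mod = np.array([105.0, 78.0, 63.0]); g_mod = np.array([32.0, 44.0, 63.0])
print(tidal_errors(H_mod, g_mod, H_ref, g_ref))
```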

Relevance:

100.00%

Publisher:

Abstract:

As mature oil fields move deeper into exploration and development, the problems of finding additional reserves and enhancing recovery place new and higher demands on the precision of seismic data. Building on a review of the exploration status, resource potential, and 3D seismic data quality of representative mature oil fields, and taking the Ken71 block of the Shengli oil field as the study area, this dissertation applies high-density 3D seismic technology to the complex geological problems encountered in the exploration and development of mature areas, with an in-depth study of the acquisition and processing of high-density 3D seismic data. The roles of conventional 3D seismic, high-density 3D seismic, 3D VSP, and multi-wave multi-component seismic surveys in solving these problems are examined; the advantages and limitations of high-density 3D seismic exploration are discussed in particular, and an integrated approach is proposed that relies primarily on high-density 3D seismic data, combined with other seismic data, to improve exploration accuracy in mature areas. Based on a detailed study of acquisition methods for high-density 3D and 3D VSP seismic surveys, physical and numerical simulations were carried out to design and optimize the observation system. A "four-combination" integrated acquisition scheme of borehole and surface seismic surveys with "three-way synchronous" techniques was optimized, enabling combined P-wave and S-wave acquisition, combined digital-geophone and analog-geophone acquisition, combined 3D VSP and surface acquisition, and cross-well seismic acquisition; surface equipment and downhole instruments were operated synchronously, 3D VSP and surface shots were shared and recorded synchronously, and high-density P-wave and high-density multi-wave data were acquired simultaneously, yielding a large volume of high-quality seismic data.
Detailed analysis of the high-density analog-geophone data was followed by targeted amplitude-preserving processing; the trade-off between signal-to-noise ratio and resolution was studied to improve the resolution of seismic profiles, and post-stack cascaded migration, pre-stack time migration, and pre-stack depth migration were used for high-precision imaging, producing reliable high-resolution data. High-precision processing of the high-density digital-geophone data likewise yielded clear improvements in resolution, fidelity, fault-point clarity, interbed information, formation characterization, and so on. Comparison of the processing results shows that high-density analog-geophone acquisition with high-precision imaging can improve resolution, and that high-density seismic surveys based on digital geophones can better solve subsurface geological problems. The converted-wave data from synchronous acquisition and the 3D VSP data were also finely processed, with good results. Building on high-density data acquisition and processing, high-precision structural interpretation and inversion were carried out, together with preliminary interpretation of the 3D VSP and multi-wave multi-component seismic data. The high-precision interpretation shows that, after high-resolution processing, structural maps derived from high-density seismic data agree better with the true geology.

Relevance:

100.00%

Publisher:

Abstract:

Elastic anisotropy is a very common phenomenon in the Earth's interior, especially in sedimentary rocks that form important oil and gas reservoirs. In the processing and interpretation of seismic data, however, the media in the Earth's interior are usually assumed to be perfectly elastic and isotropic, and methods based on isotropy are used to handle anisotropic seismic data, which lowers seismic resolution and introduces errors into the images. Research on seismic wave simulation can improve our understanding of how seismic waves propagate in anisotropic media and help resolve the problems that anisotropy causes in processing and interpretation. Focusing on weakly anisotropic media with a rotated axis of symmetry, we systematically study the rules of seismic wave propagation in this kind of media, simulate the process numerically, and obtain good results. The first-order ray tracing (FORT) formulas derived for qP waves apply to anisotropic media of arbitrary symmetry. The equations are considerably simpler than the exact ray tracing equations. They allow qP waves to be treated independently of qS waves, just as in isotropic media, and they simplify considerably in media of higher-symmetry anisotropy; in isotropic media, they reduce to the exact ray tracing equations. In contrast to other perturbation techniques used to trace rays in weakly anisotropic media, our approach does not require calculation of reference rays in a reference isotropic medium: the FORT rays are obtained directly, making the method computationally more efficient than standard ray tracing. Moreover, the second-order travel-time correction formula derived here can effectively reduce the travel-time error and improve the accuracy of travel-time calculation.
The tensor transformation equations for weak-anisotropy parameters in media with a rotated axis of symmetry, derived from the Bond transformation equations, effectively resolve the coordinate-transformation problems caused by the difference between the global and local coordinate systems. The weak-anisotropy parameters so calculated are directly suitable for the first-order ray tracing used in this work, and their forms are simpler than those from the Bond transformation. In the numerical ray-tracing simulations, travel-time tables are computed by locating the grid points within the ray beam and interpolating their travel times by inverse-distance weighting, which gives good efficiency and accuracy. Finally, the validity and adaptability of the method are verified with numerical simulations for a rotated TI model with anisotropy of about 8% and a rotated ORTHO model with anisotropy of about 20%. The results indicate that the method is accurate for media of different types and different anisotropic strengths. Keywords: weak anisotropy, numerical simulation, ray tracing equation, travel time, inhomogeneity
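In isotropic media the FORT equations reduce to the exact kinematic ray equations; a minimal sketch of integrating that reduced system is shown below. The travel-time parametrization (dx/dt = v²p, dp/dt = -∇v/v, with |p| = 1/v) and the toy constant-velocity model are illustrative assumptions.

```python
import numpy as np

def trace_ray(x0, p0, v, grad_v, T, n_steps=1000):
    """Integrate the kinematic ray equations with 4th-order Runge-Kutta."""
    dt = T / n_steps
    x, p = np.asarray(x0, float), np.asarray(p0, float)

    def rhs(x, p):
        return v(x) ** 2 * p, -grad_v(x) / v(x)

    for _ in range(n_steps):
        k1x, k1p = rhs(x, p)
        k2x, k2p = rhs(x + 0.5 * dt * k1x, p + 0.5 * dt * k1p)
        k3x, k3p = rhs(x + 0.5 * dt * k2x, p + 0.5 * dt * k2p)
        k4x, k4p = rhs(x + dt * k3x, p + dt * k3p)
        x = x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        p = p + dt / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return x, p

# Sanity check in a constant-velocity medium (v = 2 km/s): the ray is a
# straight line and covers v*T = 2 km in T = 1 s.
v = lambda x: 2.0
grad_v = lambda x: np.zeros(2)
direction = np.array([1.0, 1.0]) / np.sqrt(2.0)
x_end, _ = trace_ray([0.0, 0.0], direction / 2.0, v, grad_v, T=1.0)
print(x_end)   # ~ [1.414, 1.414]
```

The anisotropic FORT system replaces v(x) with a direction-dependent phase velocity built from the weak-anisotropy parameters; the integration scheme itself is unchanged.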

Relevance:

100.00%

Publisher:

Abstract:

The modeling formula based on the seismic wavelet can simulate zero-phase and mixed-phase wavelets well, and approximates maximum-phase and minimum-phase wavelets in a certain sense. With a suitable modification term added so that the required conditions are met, the modeled wavelet can be used as a wavelet function. On the basis of the modified Morlet wavelet, a derivative wavelet function has been derived. As a basic wavelet, it can be used for high-resolution frequency-division processing and instantaneous-feature extraction, in accordance with the time- and scale-domain localization of each constructed wavelet. An application example demonstrates the effectiveness and reasonableness of the method. Based on an analysis of the singular value decomposition (SVD) filter, and combining the SVD filter with the wavelet transform using this wavelet as the basic wavelet, a new de-noising method based on multi-dimension, multi-space filtering is proposed, and its implementation is discussed in detail. Theoretical analysis and modeling show that the method has a strong capacity for removing noise while preserving the attributes of the effective signal; it is a good tool for de-noising when the signal-to-noise ratio is poor. Emphasizing the high-frequency information of the reflection events of important layers while retaining other frequency content of the seismic data is difficult for a deconvolution filter, and a Fourier-transform filter also has problems in realizing this goal. This paper therefore puts forward a method of frequency-division processing of seismic data by wavelet transform and reconstruction. In ordinary seismic processing methods for resolution improvement, the deconvolution operator has poor localization, which limits the operator's frequency behavior; in the wavelet transform, by contrast, the wavelet function is very well localized.
Frequency-division processing in the wavelet transform also yields quite good high-resolution data, but it needs more computation time than deconvolution does. Building on frequency-division processing in the wavelet domain, a new technique is put forward, which involves 1) designing filter operators equivalent to the deconvolution operator in the time and frequency domains of the wavelet transform, 2) obtaining a derivative wavelet function suitable for high-resolution seismic data processing, and 3) processing high-resolution seismic data by time-domain deconvolution. When instantaneous characteristic signals are produced with the Hilbert transform, the transform is very sensitive to high-frequency random noise: even weak high-frequency noise in the seismic signals can submerge the obtained instantaneous characteristics. A method is put forward for obtaining the instantaneous characteristics of seismic signals directly in the wavelet domain, using both the real part of the wavelet transform (the real signal, i.e. the seismic signal) and the imaginary part (its Hilbert transform). The method combines frequency division with noise removal; moreover, weak waves whose frequency is lower than that of the high-frequency random noise are retained in the obtained instantaneous characteristics, and may be seen in instantaneous-characteristic sections (such as instantaneous frequency, instantaneous phase, and instantaneous amplitude). Impedance inversion is one of the tools of reservoir description, and generalized linear inversion is one of its methods, with relatively high precision; however, it is sensitive to noise in the seismic data, which leads to erroneous results.
In reservoir description of important geological layers, emphasizing the geological characteristics of the target layer requires not only high-frequency impedance, for studying thin sand layers, but also impedance in other frequency bands, a goal difficult to achieve with some impedance-inversion methods. Since the wavelet transform performs well in de-noising and frequency-division processing, a wavelet-transform-based method of impedance inversion is put forward: impedance inversion in frequency divisions by wavelet transform and reconstruction. Methods of time-frequency analysis based on the wavelet transform are also given. Finally, the methods above are applied to a real oil field, the Sansan oil field.
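The wavelet-domain de-noising idea above, thresholding high-frequency coefficients and reconstructing, can be sketched with a single-level Haar transform. The modified Morlet wavelet and multi-scale machinery of the actual method are not reproduced, so this is an illustrative simplification.

```python
import numpy as np

def haar_denoise(signal, thresh):
    """One-level Haar decomposition, soft-threshold details, reconstruct."""
    s = np.asarray(signal, float)                 # length must be even
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # low-frequency coefficients
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # high-frequency coefficients
    # Soft threshold: shrink small (noise-dominated) detail coefficients.
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
    out = np.empty_like(s)
    out[0::2] = (approx + detail) / np.sqrt(2.0)  # inverse Haar transform
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

# Smooth synthetic "reflection" signal plus random noise.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 512)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.2 * rng.standard_normal(t.size)
denoised = haar_denoise(noisy, thresh=0.4)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

Because the smooth signal contributes little to the detail band while the noise contributes half its energy there, thresholding reduces the error relative to the clean signal.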

Relevance:

100.00%

Publisher:

Abstract:

The seismic data acquisition system is the most important equipment in seismic prospecting, and geophysicists pay close attention to its specifications, since its specification and performance determine how precisely and accurately seismic data, which reveal the structure of the strata, can be acquired. Until now, however, limited by technology, most of the broad-band seismic recorders (BSRs) used for lithosphere research in our country were purchased from abroad; they were very costly and inconvenient to maintain. Research on the seismic data acquisition system is therefore very important. The subject of this thesis is research on the BSR, covering several items, including the seismic data digitizer and the design of its condition monitor. In the first chapter, the author explains the significance of implementing a BSR, sets out the requirements on the device, and reviews the current state of BSRs in our country. In the second chapter, the overall architecture of the BSR system is illustrated, and the overall targets and guidelines for the performance of the system design are introduced. The difficulties of the system design and some key technologies are analyzed, such as electromagnetic compatibility (EMC), system reliability, and so on. In the third chapter, some design details of the BSR are introduced. In the recorder, the analog-to-digital converter (ADC) front end is separated from the subsequent data transmission module. According to the characteristics of a seismic data acquisition system, a high-resolution 24-bit ADC chip was chosen for the recorder design. The noise performance of the seismic data channel is then analyzed. In the fourth chapter, the embedded software design of each board and the software design of the workstation are introduced,
and the communication protocol of each module is described. The last part of the thesis summarizes the advantages and practicability of the BSR system design and suggests items for further development.

Relevance:

100.00%

Publisher:

Abstract:

Concentrating solar power is an important way of providing renewable energy. Model simulation approaches play a fundamental role in the development of this technology and, for this, accurate validation of the models is crucial. This work presents the validation of the heat loss model of the absorber tube of a parabolic trough plant by comparing the model heat loss estimates with real measurements in a specialized testing laboratory. The study focuses on implementing in the model a physically meaningful and widely valid formulation of the absorber total emissivity as a function of the surface temperature. For this purpose, the spectral emissivities of several absorber samples are measured and, with these data, the absorber total emissivity curve is obtained according to the Planck function. This physically meaningful formulation is used as an input parameter in the heat loss model, and a successful validation of the model is performed. Since measuring the spectral emissivity of the absorber surface may be complex and is sample-destructive, a new methodology for characterizing the absorber's emissivity is proposed. This methodology provides an estimation of the absorber total emissivity, retaining its physical meaning and widely valid formulation according to the Planck function, with no need for direct spectral measurements. This proposed method is also successfully validated, and the results are shown in the present paper.
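The Planck-weighted total emissivity described above can be sketched as follows; the spectral-emissivity curve and the temperature below are illustrative assumptions, not the measured absorber data.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
    return (2 * H * C**2 / lam**5) / (np.exp(H * C / (lam * KB * T)) - 1.0)

def total_emissivity(lam, eps_spectral, T):
    """Total emissivity: spectral emissivity weighted by the Planck spectrum.

    eps_total(T) = ∫ eps(λ) B(λ,T) dλ / ∫ B(λ,T) dλ
    (uniform wavelength grid, so the dλ factors cancel in the sums)."""
    B = planck(lam, T)
    return np.sum(eps_spectral * B) / np.sum(B)

lam = np.linspace(0.5e-6, 50e-6, 2000)       # 0.5-50 µm
eps = 0.10 + 2000.0 * lam                    # toy linearly rising emissivity
print(total_emissivity(lam, eps, T=673.15))  # absorber at ~400 °C
```

Because the Planck weighting shifts with temperature, the same spectral curve yields a different total emissivity at each operating temperature, which is why a temperature-dependent formulation is needed in the heat loss model.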

Relevance:

100.00%

Publisher:

Abstract:

The use of in situ measurements is essential in the validation and evaluation of the algorithms that provide coastal water quality data products from ocean colour satellite remote sensing. Over the past decade, various types of ocean colour algorithms have been developed to deal with the optical complexity of coastal waters. Yet a comprehensive intercomparison has been lacking, owing to the limited availability of quality-checked in situ databases. The CoastColour Round Robin (CCRR) project, funded by the European Space Agency (ESA), was designed to bring together three reference data sets and use them to test algorithms and assess their accuracy in retrieving water quality parameters. This paper provides a detailed description of these reference data sets, which include Medium Resolution Imaging Spectrometer (MERIS) level 2 match-ups, in situ reflectance measurements, and synthetic data generated by a radiative transfer model (HydroLight). These data sets, representing mainly coastal waters, are available from doi:10.1594/PANGAEA.841950. The data sets mainly consist of 6484 marine reflectances (either multispectral or hyperspectral) associated with various geometrical (sensor viewing and solar angles) and sky conditions and water constituents: total suspended matter (TSM) and chlorophyll a (CHL) concentrations, and the absorption of coloured dissolved organic matter (CDOM). Inherent optical properties are also provided in the simulated data sets (5000 simulations) and from 3054 match-up locations. The distributions of reflectance at selected MERIS bands and band ratios, and of CHL and TSM as a function of reflectance, from the three data sets are compared. Match-up and in situ sites where deviations occur are identified.
The distributions of the three reflectance data sets are also compared to the simulated and in situ reflectances used previously by the International Ocean Colour Coordinating Group (IOCCG, 2006) for algorithm testing, showing a clear extension of the CCRR data, which cover more turbid waters.