917 results for d(x2-y2) s-wave superconductor


Relevance:

100.00%

Publisher:

Abstract:

In this paper, we propose a new approach to constructing a 2-dimensional (2-D) directional filter bank (DFB) by cascading a 2-D nonseparable checkerboard-shaped filter pair with a 2-D separable cosine-modulated filter bank (CMFB). As with the diagonal subbands in 2-D separable wavelets, most subbands of a 2-D separable CMFB (the tensor product of two 1-D CMFBs) have poor directional selectivity, because the frequency supports of most of the subband filters are concentrated along two different directions. To improve the directional selectivity, we propose a new DFB to realize the subband decomposition. First, a checkerboard-shaped filter pair decomposes an input image into two images containing different directional information from the original image. Next, a 2-D separable CMFB is applied to each of the two images for directional decomposition. The new DFB is easy to design and offers a low redundancy ratio and fine directional-frequency tiling. As an application, the BLS-GSM image-denoising algorithm is extended to use the new DFBs. Experimental results show that the proposed DFB achieves better denoising performance than methods using other DFBs on images with abundant textures. (C) 2008 Elsevier B.V. All rights reserved.
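The first stage of the cascade described above can be illustrated with a toy version: an ideal checkerboard-shaped frequency mask that splits an image into two directional channels. The mask shape and the exact perfect-reconstruction split below are illustrative assumptions, not the filter pair designed in the paper.

```python
import numpy as np

def checkerboard_split(img):
    """Split an image into two directional channels with an ideal
    checkerboard-shaped frequency mask (toy stand-in for the paper's
    nonseparable filter pair)."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    # Normalized frequency grids, centered at zero.
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    # Keep frequencies where |fy| <= |fx| in one channel; the
    # complementary support goes to the second channel.
    mask = (np.abs(fy) <= np.abs(fx)).astype(float)
    chan_a = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    chan_b = np.real(np.fft.ifft2(np.fft.ifftshift(F * (1.0 - mask))))
    return chan_a, chan_b

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
a, b = checkerboard_split(img)
# Because the two masks tile the whole spectrum, the pair reconstructs
# the input exactly (up to floating-point error).
recon_error = np.max(np.abs((a + b) - img))
```

Each channel would then feed a separable CMFB for the finer directional decomposition.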

Relevance:

100.00%

Publisher:

Abstract:

The diffusive transport properties of microscale convection flows are studied using the direct simulation Monte Carlo method. The effective diffusion coefficient D is computed from the mean square displacements of simulated molecules via the Einstein relation D = ⟨x²(t)⟩/2t. Two typical convection flows, thermal creep convection and Rayleigh–Bénard convection, are investigated. The thermal creep convection in our simulation is in the noncontinuum regime, with the characteristic vortex scale varying from 1 to 100 molecular mean free paths. Diffusion is shown to be enhanced only when the vortex scale exceeds a certain critical value, while it is reduced when the vortex scale is below that value. The reason for the diffusion reduction in the noncontinuum regime is that the reducing effect of the solid walls dominates while the enhancement due to convection is negligible: a molecule loses its memory of macroscopic velocity when it collides with a wall, so molecules confined between very close walls find it hard to diffuse away. The Rayleigh–Bénard convection in our simulation is in the continuum regime, with a characteristic length of 1000 molecular mean free paths; under this condition the effect of the solid walls on diffusion is negligible. The diffusion enhancement due to convection is shown to scale as the square root of the Péclet number in the steady convection regime, in agreement with previous theoretical and experimental results. In the oscillating convection regime, the diffusion is enhanced more strongly because the oscillation mechanism lets molecules easily advect from one roll to its neighbor. © 2010 American Institute of Physics. doi:10.1063/1.3528310
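The Einstein-relation estimate used above can be illustrated with plain Brownian walkers in place of DSMC molecules; this is a hedged one-dimensional toy, not the paper's simulation, and the particle counts and time step are arbitrary.

```python
import numpy as np

# Estimate a diffusion coefficient from mean square displacement via the
# Einstein relation D = <x^2(t)> / (2t), as in the paper's extraction of
# D from simulated trajectories (1-D Brownian-motion illustration).
rng = np.random.default_rng(1)
n_particles, n_steps, dt = 5000, 1000, 1e-3
D_true = 0.5
# Brownian increments with variance 2*D*dt per step.
steps = rng.normal(0.0, np.sqrt(2.0 * D_true * dt), (n_particles, n_steps))
x = np.cumsum(steps, axis=1)          # trajectories, shape (particles, steps)
t = dt * n_steps
msd = np.mean(x[:, -1] ** 2)          # ensemble-averaged <x^2(t)>
D_est = msd / (2.0 * t)               # should recover D_true ~ 0.5
```

In the paper's convection flows the interesting physics is precisely the deviation of the measured D from this free-diffusion baseline.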

Relevance:

100.00%

Publisher:

Abstract:

The real-space recursion method and the unrestricted Hartree-Fock approximation have been applied to calculate the density of states of several Co perovskites: CeCoO3, SrCoO3 and Sr1-xCexCoO3. We have studied the magnetically ordered states of these Co perovskites in an enlarged double cell and find various magnetic structures arising from the occupancy of the 3d band and its interaction with neighboring Co ions. Examining the p-d hybridization of the three Co perovskites, we find that the t(2g) electrons are localized while the flat e(g) band is responsible for the itinerant behavior. Although the rare-earth elements themselves contribute little to the DOS at the Fermi energy, the DOS at the Fermi energy and the magnetic moment change as a consequence of the different valences of the Co ions in these compounds; the p-d hybridization effect is very important. (C) 2009 Elsevier B.V. All rights reserved.
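The notion of a density of states computed from a model Hamiltonian can be sketched on a toy 1-D tight-binding chain by direct diagonalization. This is a hypothetical stand-in: the paper itself uses the real-space recursion (continued-fraction) method on perovskite lattices, not this brute-force approach.

```python
import numpy as np

# Toy DOS: diagonalize a 1-D nearest-neighbor tight-binding Hamiltonian
# and histogram its eigenvalues (bandwidth 4*t_hop for this model).
n, t_hop = 400, 1.0
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i + 1] = H[i + 1, i] = -t_hop   # nearest-neighbor hopping
evals = np.linalg.eigvalsh(H)
# Normalized DOS as a histogram over the band.
dos, edges = np.histogram(evals, bins=50, range=(-2.5, 2.5), density=True)
bandwidth = evals.max() - evals.min()    # approaches 4*t_hop for large n
```

A flat (weakly dispersive) band in such a model shows up as a sharp DOS peak, which is the qualitative signature behind the e(g)/t(2g) discussion above.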

Relevance:

100.00%

Publisher:

Abstract:

The seismic technique occupies the leading position for discovering oil and gas traps and searching for reserves throughout the course of oil and gas exploration. It demands high-quality processed seismic data: not only exact spatial positioning but also true amplitude, AVO attributes, and velocity information. The acquisition footprint degrades the precision and quality of imaging and of AVO-attribute and velocity analysis. "Acquisition footprint" is a recent term for a class of noise in 3-D exploration, and it is not easy to understand. This paper begins with forward modeling of seismic data from a simple acoustic model, processes the data, and discusses the causes of the acquisition footprint. It concludes that the recording geometry is the main cause, leading to an asymmetric distribution of fold, offset, and azimuth across grid cells. It summarizes the characteristics of the footprint and the methods for describing it, and analyzes its influence on geological interpretation and on seismic-attribute and velocity analysis. Data reconstruction based on the Fourier transform is currently the main method for interpolating and extrapolating nonuniformly sampled data, but it is usually an ill-conditioned inverse problem. A Tikhonov regularization strategy, which incorporates a priori information on the class of solutions sought, reduces the computational difficulty caused by the poor conditioning of the discrete kernel and the scarcity of observations. The method is quite statistical and does not require hand-selection of the regularization parameter, and hence yields appropriate inversion coefficients. Programming and trial calculations verify that the acquisition footprint can be removed by prestack data reconstruction. This paper also applies a migration-weighting approach to removing the acquisition footprint.
The fundamental principles and algorithms are surveyed: seismic traces are weighted according to the area occupied by each trace at different source-receiver distances. Adopting a grid method instead of computing the areas of a Voronoi map reduces the difficulty of calculating the weights. Results on model data and actual seismic data demonstrate that incorporating a weighting scheme based on the relative area associated with each input trace with respect to its neighbors minimizes the artifacts caused by irregular acquisition geometry.
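The Fourier-based reconstruction with Tikhonov regularization described above can be sketched in one dimension: recover a band-limited signal from irregularly placed samples by solving regularized normal equations for its low-order Fourier coefficients. The mode range, damping `eps`, and test signal below are illustrative assumptions, not the thesis's parameters.

```python
import numpy as np

# Recover a band-limited signal from irregular samples by solving
#   min || A m - d ||^2 + eps^2 || m ||^2
# for the Fourier coefficients m (1-D stand-in for 3-D trace
# reconstruction).
rng = np.random.default_rng(2)
n, n_obs = 128, 60
x_full = np.arange(n)
signal = (np.sin(2 * np.pi * 3 * x_full / n)
          + 0.5 * np.cos(2 * np.pi * 5 * x_full / n))
idx = np.sort(rng.choice(n, n_obs, replace=False))   # irregular sampling
d = signal[idx]
k = np.arange(-8, 9)                                  # retained low modes
A = np.exp(2j * np.pi * np.outer(idx, k) / n)         # sampling operator
eps = 1e-3
# Tikhonov-regularized normal equations: (A^H A + eps^2 I) m = A^H d.
m = np.linalg.solve(A.conj().T @ A + eps**2 * np.eye(len(k)),
                    A.conj().T @ d)
recon = np.real(np.exp(2j * np.pi * np.outer(x_full, k) / n) @ m)
err = np.max(np.abs(recon - signal))
```

The regularization term is what keeps the solve stable when the sampling is sparse or clustered, which is exactly the ill-conditioning the thesis points at.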

Relevance:

100.00%

Publisher:

Abstract:

In oil and gas exploration, migration is an effective technique for imaging subsurface structures. Wave-equation migration (WEM) surpasses other migration methods in accuracy, despite its higher computational cost, and this advantage will grow as computer technology progresses. WEM is, however, more sensitive to the velocity model than other methods: small velocity perturbations produce large errors in the image. Because precise velocity models are difficult to provide, the Kirchhoff method remains very popular in the exploration industry, so a practical approach to migration velocity modeling is urgently needed. This dissertation is devoted to a migration velocity analysis method for WEM. 1. We catalog wave-equation prestack depth migration and introduce the concept of migration, then analyze different extrapolation operators to demonstrate their accuracy and applicability. We derive the DSR and SSR migration methods and apply both to a 2D model. 2. The output of prestack WEM takes the form of common image gathers (CIGs). Angle-domain common image gathers (ADCIGs) obtained by wave-equation methods are proved to be free of artifacts, which makes them the most promising candidates for migration velocity analysis. We discuss how to obtain ADCIGs with DSR and SSR, both before and after imaging. The quality of the post-stack image depends on the CIGs: only focused or flattened CIGs generate a correct image. Within wave-equation migration, the image can be enhanced by special measures; in this dissertation we use both prestack depth residual migration and the time-shift imaging condition to improve image quality. 3. Inaccurate velocities produce errors in the imaging depth and in the curvature of coherent events in CIGs. The ultimate goal of migration velocity analysis (MVA) is to focus scattered events at the correct depth and to flatten curved events by updating the velocities.
Kinematic information is carried implicitly by the focusing-depth error, and dynamic information by the amplitude. The initial model for wave-equation migration velocity analysis (WEMVA) is the output of residual-moveout (RMO) velocity analysis, so for completeness we review the RMO method: the general idea of RMO velocity analysis for flat and dipping events and the corresponding velocity-update formulas. Migration velocity analysis is very time-consuming, so for computational convenience we discuss how RMO works for synthesized-source-record migration. In some extreme situations the RMO method fails: in poorly illuminated areas or beneath steep structures it is difficult to obtain enough angle information. WEMVA, based on wave-extrapolation theory, successfully overcomes this drawback of ray-based methods: it inverts residual velocities from residual images. Based on migration regression, we study the linearized scattering operator and the linearized residual image; the latter is the key to WEMVA. A residual image obtained by prestack residual migration based on DSR is very inefficient, so we propose obtaining the residual migration through the time-shift imaging condition, allowing WEMVA to be implemented with SSR. This evidently reduces the computational cost of the method.

Relevance:

100.00%

Publisher:

Abstract:

At present the main object of oil and gas exploration and development (E&D) is no longer structural oil-gas pools but subtle lithological reservoirs. Since the late 1990s the share of such pools in newly added oil reserves has grown steadily, including in the eastern oilfields. The third oil-gas resource evaluation indicates that lithological pools will be the main future exploration target of the Jiyang depression. However, the lack of effective methods for finding this kind of pool makes E&D difficult and costly. In view of this urgent demand, this paper studies in depth the theory and application of seismic attributes for predicting and describing lithological oil-gas reservoirs. Good results are obtained by making full use of the abundant rock-physics and reservoir information and the remarkable lateral continuity contained in seismic data, in combination with well logging, drilling, and geology. Based on extensive research into the different geological features of the Shengli oilfield, substantial progress is made on several theories and methods of seismic reservoir prediction and description. Three methods for extrapolating near-well seismic wavelets (inverse-distance interpolation, phase interpolation, and pseudo-well reflectivity) are improved; in particular, a method for deriving pseudo-well reflectivity in sparsely drilled areas is given using wavelet theory. Formulae for seismic attributes and coherence volumes are derived theoretically, and an optimization method for seismic attributes and improved algorithms for extracting coherence volumes are put forward. A sequence-analysis method for seismic data based on the wavelet transform is proposed and derived, allowing both qualitative and quantitative analysis of the seismic characteristics of reservoirs.
Guided by geological models and seismic forward simulation, from macro to micro scales, a method of joint pre- and post-stack data analysis is put forward that closely combines seismics with geology; in particular, while post-stack seismic data are fully exploited, pre-stack seismic data are used wherever possible. The paper studies the formation and distribution of Tertiary lithological oil-gas pools in the Jiyang depression, the relevant geological-geophysical knowledge, the feasibility of the various seismic methods, and the geophysical mechanisms of oil-gas reservoirs. On this basis a complete set of seismic techniques and software is developed, suited to the E&D of different categories of lithological reservoirs. Unlike other seismic methods proposed in recent years (multi-wave multi-component seismic, cross-hole seismic, vertical seismic profiling, time-lapse seismic, etc.), which require reacquiring seismic data to predict and describe reservoirs, the method in this paper is based on conventional 2D/3D seismic data, so the cost falls sharply. In recent years this technique has been applied to the E&D of lithological reservoirs in glutenite fans on abrupt slopes, turbidite fans in front of abrupt slopes, slump turbidite fans in front of deltas, channelized turbidite fans on gentle slopes, and channel sand bodies, with encouraging geological results. This indicates that the application of seismic information is one of the most effective ways to solve the present E&D problem. The technique merits application and popularization and contributes to increasing reserves, raising production, and the stable development of the Shengli oilfield. It will also guide the E&D of similar reservoirs elsewhere.
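The simplest of the three near-well wavelet-extrapolation schemes mentioned above, inverse-distance interpolation, can be sketched as follows. The well positions and control values are hypothetical, and real implementations interpolate wavelets or attributes rather than single scalars.

```python
import numpy as np

def idw(points, values, target, power=2.0):
    """Inverse-distance-weighted estimate at `target` from well-control
    `values` located at `points` (rows are 2-D well coordinates)."""
    d = np.linalg.norm(points - target, axis=1)
    if np.any(d == 0):               # exact hit on a well: return its value
        return values[np.argmin(d)]
    w = 1.0 / d**power               # nearer wells get larger weights
    return np.sum(w * values) / np.sum(w)

wells = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
vals = np.array([1.0, 3.0, 5.0])
at_well = idw(wells, vals, np.array([0.0, 0.0]))   # honors the well value
midpoint = idw(wells, vals, np.array([5.0, 5.0]))  # equidistant: plain mean
```

The estimate always stays inside the range of the well values, which is why phase interpolation and pseudo-well reflectivity are needed where the geology changes character between wells.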

Relevance:

100.00%

Publisher:

Abstract:

Oil-industry and scientific groups have been focusing on 3-D wave-equation prestack depth migration because it can handle complex geological structures accurately while preserving wave information, which favors lithological imaging. The symplectic method, first proposed by Feng Kang in 1984, has become a hotspot of numerical computation and, given its great virtues, will of necessity be widely applied in many scientific fields. Against this background, this paper combines the symplectic method with 3-D wave-equation prestack depth migration to build an effective numerical technique for wavefield extrapolation. Based on a deep analysis of the computational method and of PC-cluster performance, a prestack depth migration workflow is devised that exploits the virtues of both the migration method and the cluster. The resulting software, named "3D Wave Equation Prestack Depth Migration of Symplectic Method", has been registered with the National Bureau of Copyright (No. 0013767), and the Dagang and Daqing oilfields have put it into use for field data processing. In this paper, when approximating the exponential of the one-way wave-equation operator, the operator is decomposed into a phase-shift operator, a time-shift operator, and a high-order symplectic correction term. After reviewing anti-aliasing of the operator, the maximum migration angle, and the imaging condition, we present impulse-response tests of the symplectic method. Taking the imaging of the SEG/EAGE salt and overthrust models as examples and examining the software's ability to image complex geological structures, the paper discusses the choice of imaging parameters and the effect of the seismic wavelet on the migration result, and compares 2-D and 3-D prestack depth migration results for the salt model.
We also present impulse-response tests with the overthrust model. The imaging results for the two international benchmark models indicate that the symplectic 3-D prestack depth migration accommodates strong lateral velocity variation and complex geological structure. The huge computational cost is the key obstacle preventing the oil industry from adopting 3-D wave-equation prestack depth migration. After analyzing the migration workflow and the characteristics of PC clusters, the paper puts forward: i) parallel algorithms over shots and frequencies for common-shot-gather 3-D wave-equation prestack migration; ii) an optimized breakpoint scheme for field data processing; iii) dynamic and static load balancing among the cluster nodes. It is shown that these measures greatly shorten the computation time of 3-D prestack depth migration imaging. In addition, considering the 3-D wave-equation prestack depth migration workflow in complex media and examples of field data processing, the paper emphasizes: i) amplitude-preserving preprocessing of seismic data; ii) 2.5-D prestack depth migration velocity analysis; iii) 3-D prestack depth migration. The field-data results show that the proposed workflow performs satisfactorily.
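The phase-shift building block of the extrapolation operator described above can be sketched for a single frequency in a constant-velocity medium. The time-shift and higher-order symplectic correction terms are omitted, and the parameters are illustrative, so this is a minimal sketch rather than the paper's operator.

```python
import numpy as np

def phase_shift_step(P_kx, kx, omega, v, dz):
    """Downward-continue one frequency slice of a wavefield by dz using
    the phase-shift operator exp(i*kz*dz), kz = sqrt(w^2/v^2 - kx^2)."""
    kz2 = (omega / v) ** 2 - kx**2
    prop = kz2 > 0                        # propagating wavenumbers only
    kz = np.sqrt(np.where(prop, kz2, 0.0))
    # Evanescent components are suppressed (zeroed) rather than grown.
    return np.where(prop, P_kx * np.exp(1j * kz * dz), 0.0)

nx, dx = 128, 10.0
kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
omega, v, dz = 2 * np.pi * 20.0, 2000.0, 10.0   # 20 Hz, 2000 m/s, 10 m step
P = np.fft.fft(np.eye(1, nx, nx // 2).ravel())  # impulse at the surface
P_down = phase_shift_step(P, kx, omega, v, dz)
# The operator is a pure phase on the propagating part: amplitude preserved.
keep = np.abs(kx) < omega / v
amp_ratio = np.abs(P_down[keep]) / np.abs(P[keep])
```

The unit amplitude ratio reflects the unitarity that symplectic integrators are designed to preserve over many extrapolation steps.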

Relevance:

100.00%

Publisher:

Abstract:

Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the Master's degree in Pharmaceutical Sciences.

Relevance:

100.00%

Publisher:

Abstract:

The recognition of 3-D objects from sequences of their 2-D views is modeled by a family of self-organizing neural architectures, called VIEWNET, that use View Information Encoded With NETworks. VIEWNET incorporates a preprocessor that generates a compressed but 2-D invariant representation of an image, a supervised incremental learning system that classifies the preprocessed representations into 2-D view categories whose outputs are combined into 3-D invariant object categories, and a working memory that makes a 3-D object prediction by accumulating evidence from 3-D object category nodes as multiple 2-D views are experienced. The simplest VIEWNET achieves high recognition scores without the need to explicitly code the temporal order of 2-D views in working memory. Working memories are also discussed that save memory resources by implicitly coding temporal order in terms of the relative activity of 2-D view category nodes, rather than as explicit 2-D view transitions. Variants of the VIEWNET architecture may also be used for scene understanding by using a preprocessor and classifier that can determine both What objects are in a scene and Where they are located. The present VIEWNET preprocessor includes the CORT-X 2 filter, which discounts the illuminant, regularizes and completes figural boundaries, and suppresses image noise. This boundary segmentation is rendered invariant under 2-D translation, rotation, and dilation by use of a log-polar transform. The invariant spectra undergo Gaussian coarse coding to further reduce noise and 3-D foreshortening effects, and to increase generalization. These compressed codes are input into the classifier, a supervised learning system based on the fuzzy ARTMAP algorithm. Fuzzy ARTMAP learns 2-D view categories that are invariant under 2-D image translation, rotation, and dilation as well as 3-D image transformations that do not cause a predictive error.
Evidence from sequences of 2-D view categories converges at 3-D object nodes that generate a response invariant under changes of 2-D view. These 3-D object nodes feed a working memory that accumulates evidence over time to improve object recognition. In the simplest working memory, each occurrence (nonoccurrence) of a 2-D view category increases (decreases) the corresponding node's activity in working memory. The maximally active node is used to predict the 3-D object. Recognition is studied with noisy and clean images using slow and fast learning. Slow learning at the fuzzy ARTMAP map field is adapted to learn the conditional probability of the 3-D object given the selected 2-D view category. VIEWNET is demonstrated on an MIT Lincoln Laboratory database of 128x128 2-D views of aircraft with and without additive noise. A recognition rate of up to 90% is achieved with one 2-D view and of up to 98.5% with three 2-D views. The properties of 2-D view and 3-D object category nodes are compared with those of cells in monkey inferotemporal cortex.
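The simplest working memory described above can be sketched directly: each observed 2-D view category adds evidence to its 3-D object node, and the maximally active node makes the prediction. The view-to-object wiring below is hypothetical, and the full model also decrements nodes for nonoccurring views, which this toy omits.

```python
import numpy as np

# Toy evidence-accumulating working memory: view categories vote for the
# 3-D object nodes they are wired to; argmax predicts the object.
n_objects = 3
view_to_object = {0: 0, 1: 0, 2: 1, 3: 2}   # assumed view -> object map
activity = np.zeros(n_objects)
for view in [0, 1, 0, 2]:                    # observed 2-D view sequence
    activity[view_to_object[view]] += 1.0    # occurrence adds evidence
predicted = int(np.argmax(activity))          # object with most evidence
```

With three of four views mapping to object 0, the prediction settles on object 0, mirroring how accumulating views lifts accuracy from ~90% (one view) to ~98.5% (three views) in the reported experiments.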

Relevance:

100.00%

Publisher:

Abstract:

Breastfeeding is known to confer benefits, both in the short term and long term, to the child and also to the mother. Various health-promotion initiatives have aimed to increase breastfeeding rates and duration in the United Kingdom over the past decade. In order to assist in these endeavours, it is essential to understand the reasons why women decide whether to breastfeed and the factors that influence the duration of breastfeeding. This study reports breastfeeding initiation and duration rates of mothers participating in the Growth, Learning and Development study undertaken by the Child Health & Welfare Recognised Research Group. Although this study cannot provide prevalence data for all mothers in Greater Belfast, it can provide useful information on trends within particular groups of the population. In addition, it examines maternally reported reasons for choosing to breastfeed and for breastfeeding cessation. The likelihood of mothers initiating breastfeeding is influenced by factors such as increased age, higher educational attainment and higher socio-economic grouping. The most common reason cited for breastfeeding is that it is best for baby. Returning to work is the most important factor in influencing whether mothers continued to breastfeed. Women report different reasons for cessation depending on the age of their child when they stopped breastfeeding. This information should inform health-promotion initiatives and interventions.

Relevance:

100.00%

Publisher:

Abstract:

Recent evidence suggests that the conjunction fallacy observed in people's probabilistic reasoning is also to be found in their evaluations of inductive argument strength. We presented 130 participants with materials likely to produce a conjunction fallacy either by virtue of a shared categorical or a causal relationship between the categories in the argument. We also took a measure of participants' cognitive ability. We observed conjunction fallacies overall with both sets of materials but found an association with ability for the categorical materials only. Our results have implications for accounts of individual differences in reasoning, for the relevance theory of induction, and for the recent claim that causal knowledge is important in inductive reasoning.

Relevance:

100.00%

Publisher:

Abstract:

We study the effects of amplitude and phase damping decoherence in d-dimensional one-way quantum computation. We focus our attention on low dimensions and elementary unidimensional cluster state resources. Our investigation shows how information transfer and entangling gate simulations are affected for d >= 2. To understand motivations for extending the one-way model to higher dimensions, we describe how basic qudit cluster states deteriorate under environmental noise of experimental interest. In order to protect quantum information from the environment, we consider encoding logical qubits into qudits and compare entangled pairs of linear qubit-cluster states to single qudit clusters of equal length and total dimension. A significant reduction in the performance of cluster state resources for d > 2 is found when Markovian-type decoherence models are present.
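The amplitude damping channel studied above has, for the qubit case d = 2, a standard two-operator Kraus representation that can be applied directly to a density matrix; this is the textbook channel, not the paper's full qudit noise model.

```python
import numpy as np

def amplitude_damp(rho, gamma):
    """Apply single-qubit amplitude damping: with probability gamma the
    excited state |1> decays to the ground state |0>."""
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    # Kraus sum: rho -> K0 rho K0^dag + K1 rho K1^dag (trace preserving).
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho_excited = np.array([[0.0, 0.0], [0.0, 1.0]])   # |1><1|
rho_out = amplitude_damp(rho_excited, 0.25)
# A quarter of the excited-state population has decayed to |0><0|.
```

For qudits the channel generalizes to d Kraus operators coupling each level to the ground state, which is what makes higher-dimensional cluster resources deteriorate faster under this noise.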

Relevance:

100.00%

Publisher:

Abstract:

The traditional Time Division Multiple Access (TDMA) protocol provides deterministic, periodic, collision-free data transmissions. However, TDMA lacks flexibility and exhibits low efficiency in dynamic environments such as wireless LANs. On the other hand, contention-based MAC protocols such as the IEEE 802.11 DCF adapt to network dynamics but are generally inefficient in heavily loaded or large networks. To take advantage of both types of protocols, a D-CVDMA protocol is proposed. It is based on the k-round elimination contention (k-EC) scheme, which provides fast contention resolution for wireless LANs. D-CVDMA uses a contention mechanism to achieve TDMA-like collision-free data transmissions without reserving time slots for forthcoming transmissions. These features make D-CVDMA robust and adaptive to network dynamics such as nodes leaving and joining and changes in packet size and arrival rate, which in turn make it suitable for delivering hybrid traffic including multimedia and data content. Analyses and simulations demonstrate that D-CVDMA outperforms the IEEE 802.11 DCF and k-EC in terms of network throughput, delay, jitter, and fairness.
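Elimination-style contention resolution in the spirit of k-EC can be sketched as follows. The exact round rules, burst probabilities, and elimination policy of k-EC are not specified in the abstract, so every detail below is an illustrative assumption rather than the protocol itself.

```python
import random

def elimination_round(contenders, p_burst, rng):
    """One elimination round: each surviving node either sends a short
    burst or listens; listeners that hear any burst withdraw."""
    bursts = {c for c in contenders if rng.random() < p_burst}
    if bursts:                     # someone burst: only bursters survive
        return bursts
    return contenders              # silent round: everyone stays in

def k_ec(nodes, k=3, p_burst=0.5, seed=0):
    """Run k elimination rounds over a set of contending nodes and return
    the surviving contenders (ideally one, the channel winner)."""
    rng = random.Random(seed)
    survivors = set(nodes)
    for _ in range(k):
        survivors = elimination_round(survivors, p_burst, rng)
    return survivors

winners = k_ec(range(16), k=3)     # survivors shrink roughly by half/round
```

Each round roughly halves the contender set, which is why a small fixed k resolves contention quickly even in large networks, and why the resolution delay is nearly independent of load.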