963 results for DIVERGENCE
Abstract:
Two algorithms for the localization of an autonomous mobile robot are presented: an analytical algorithm and a numerical algorithm. The analytical formulation is more concise than earlier ones. The numerical algorithm combines the analytical method with the Gauss-Newton algorithm; it not only avoids divergence of the solution process caused by a poorly chosen initial value, but also improves computational accuracy and speed. Computer simulations of the two algorithms show that the analytical algorithm is faster, while the numerical algorithm is more accurate. The results have been applied in the development of an autonomous mobile robot.
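A minimal sketch of the numerical scheme described above, assuming (hypothetically) that the robot position is estimated from range measurements to known landmarks: an analytical estimate seeds a Gauss-Newton refinement, which avoids the divergence that a poor initial guess would cause. The measurement model and all names are illustrative, not taken from the original paper.

```python
import numpy as np

def gauss_newton_localize(landmarks, ranges, x0, iters=20, tol=1e-9):
    """Refine a 2D robot position from ranges to known landmarks.

    landmarks: (N, 2) array of known landmark positions
    ranges:    (N,) measured distances to each landmark
    x0:        initial position guess, e.g. from an analytical solver
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - landmarks                 # (N, 2)
        pred = np.linalg.norm(diff, axis=1)  # predicted ranges
        r = pred - ranges                    # residuals
        J = diff / pred[:, None]             # Jacobian d(range)/dx
        # Gauss-Newton step: solve the normal equations J^T J dx = -J^T r
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Seeding with an analytical estimate keeps the iteration from diverging.
landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(true_pos - landmarks, axis=1)
print(gauss_newton_localize(landmarks, ranges, x0=[2.0, 5.0]))
```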
Abstract:
This dissertation starts from the observation that prestack time migration can be considered an approximation of prestack depth migration, and develops a wave-equation-based prestack time migration approach. The new approach includes: analytically obtaining the travel time and amplitude from the one-way wave equation and stationary-phase theory; imaging with a 'spreading' method that follows prestack depth migration; and updating the velocity model according to the flatness of the events in CRP gathers. Based on this approach, we present a scheme that can image land seismic data without field static corrections. We determine the correct near-surface and stacking velocities by picking the residual moveout of events in the CRP gathers. We obtain a reasonable migrated section from the updated velocities and convert the migrated section from a floating datum plane to a universal datum plane. We adaptively determine the migration aperture according to the dips of the imaged structures; this not only speeds up the processing but also suppresses the migration noise produced by an excessive aperture. We adopt the deconvolution imaging condition of wave-equation migration, which partially compensates for geometric divergence. The scheme uses a table-driven technique that enhances computational efficiency. When the subsurface is very complicated, it may be impossible to distinguish the DTS curve. To solve this problem, we propose a technique to determine an appropriate range for the DTS curve: we synthesize DTS panels within this range using different velocities and depths, stack the amplitude around zero time, and determine the correct velocity and location of the grid point under consideration by comparing these values.
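For reference, the deconvolution imaging condition mentioned above is commonly written as a frequency-domain division of the upgoing (receiver) wavefield U by the downgoing (source) wavefield D, stabilized by a small ε; this is the standard form of the condition, not necessarily the dissertation's exact expression:

\[
I(\mathbf{x}) = \sum_{\omega} \frac{U(\mathbf{x},\omega)\, D^{*}(\mathbf{x},\omega)}{D(\mathbf{x},\omega)\, D^{*}(\mathbf{x},\omega) + \varepsilon}
\]

Dividing by the source illumination |D|^2 is what partially compensates the geometric divergence, at the cost of choosing the stabilization parameter ε.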
Abstract:
At present, in order to image complex structures more accurately, seismic migration methods have been extended from isotropic to anisotropic media. This dissertation systematically develops a prestack time migration algorithm for complex structures and its applications. For transversely isotropic media with a vertical symmetry axis (VTI media), the dissertation starts from the theory that prestack time migration is an approximation of prestack depth migration and, based on the one-way wave equation and the VTI time-migration dispersion relation combined with stationary-phase theory, derives a wave-equation-based VTI prestack time migration algorithm. With this algorithm we can analytically obtain the travel-time and amplitude expressions in VTI media and establish how the anisotropy parameter influences the time migration; by analyzing the normal moveout of far-offset seismic data and the lateral inhomogeneity of velocity, we can update the velocity model and estimate the anisotropy-parameter model through the time migration itself. When the anisotropy parameter is zero, the algorithm degenerates naturally to the isotropic time migration algorithm, so we can propose an isotropic processing procedure for imaging. This procedure keeps the main advantages of time migration, such as high computational efficiency and velocity estimation through migration, and additionally compensates partially for geometric divergence by adopting the deconvolution imaging condition of wave-equation migration. Application of the algorithm to a complicated synthetic dataset and to field data demonstrates the effectiveness of the approach. The dissertation also presents an approach for estimating the velocity and anisotropy-parameter models. After analyzing the impact of the velocity and the anisotropy parameter on the time migration, and based on the normal moveout of far-offset data and the lateral inhomogeneity of velocity, we update the velocity model and estimate the anisotropy-parameter model through migration, combining the advantages of velocity analysis in isotropic media with anisotropy-parameter estimation in VTI media. Tests on synthetic and field data demonstrate that the method is effective and very stable. Large synthetic datasets, a 2D marine dataset, and 3D field datasets are processed with VTI prestack time migration and compared against the NMO-stacked section and the prestack isotropic time migration stacked section, demonstrating that the VTI prestack time migration method of this paper achieves better focusing and smaller positioning errors for complicated dipping reflectors. When the subsurface is more complex, primaries and multiples cannot be separated in the Radon domain because they can no longer be described by simple (parabolic) functions. We propose a multiple-attenuation method in the image domain to resolve this problem. For a given velocity model, since time migration takes the wavefield propagation through complex structures into account, primaries and multiples exhibit different offset-domain moveout discrepancies and can then be separated with techniques similar to Radon-transform filtering before migration. Since every individual offset-domain common-reflection-point gather incorporates complex 3D propagation effects, our method has the advantage of working with 3D data and complicated geology.
Tests on synthetic and real data demonstrate the power of the method in discriminating between primaries and multiples after prestack time migration; multiples can be attenuated considerably in the image space.
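The VTI time-migration dispersion relation invoked above is presumably of the acoustic VTI form popularized by Alkhalifah, shown here for reference in terms of the vertical velocity v_{p0}, the NMO velocity v_n, and the anellipticity parameter η (an assumption; the dissertation may use a different parameterization):

\[
k_z^2 = \frac{\omega^2\left(\omega^2 - (1+2\eta)\, v_n^2\, k_x^2\right)}{v_{p0}^2\left(\omega^2 - 2\eta\, v_n^2\, k_x^2\right)}
\]

Setting η = 0 and v_n = v_{p0} = v recovers the isotropic relation k_z^2 = ω^2/v^2 − k_x^2, which is the degenerate case mentioned above.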
Abstract:
In oil and gas exploration, migration is an effective technique for imaging subsurface structures. Wave-equation migration (WEM) dominates other migration methods in accuracy, despite its higher computational cost, and its advantages will grow as computer technology progresses. However, WEM is more sensitive to the velocity model than other methods: small velocity perturbations result in large deviations in the image. Kirchhoff migration is still very popular in the exploration industry because precise velocity models are difficult to provide, so finding a practical approach to migration velocity modeling is urgent. This dissertation is mainly devoted to a migration velocity analysis method for WEM. 1. We categorize wave-equation prestack depth migration and introduce the concept of migration. We then analyze different kinds of extrapolation operators to demonstrate their accuracy and applicability, derive the DSR and SSR migration methods, and apply both to a 2D model. 2. The output of prestack WEM takes the form of common image gathers (CIGs). Angle-domain common image gathers (ADCIGs) obtained by wave-equation methods have been proved to be free of artifacts and are the most promising candidates for migration velocity analysis. We discuss how to compute ADCIGs with DSR and SSR, obtaining ADCIGs before and after imaging separately. The quality of the post-stack image is governed by the CIGs: only focused or flattened CIGs generate a correct image. Based on wave-equation migration, the image can be enhanced by special measures; in this dissertation we use both prestack depth residual migration and the time-shift imaging condition to improve image quality. 3. Inaccurate velocities lead to errors in imaging depth and to curvature of coherent events in CIGs. The ultimate goal of migration velocity analysis (MVA) is to focus scattered events to the correct depth and to flatten curved events by updating the velocities. The kinematic features are implicitly expressed by the focusing-depth aberration, and the dynamic features by the amplitude. The initial model for wave-equation migration velocity analysis (WEMVA) is the output of residual-moveout (RMO) velocity analysis, so for completeness we review the RMO method in this dissertation. We discuss the general idea of RMO velocity analysis for flat and dipping events and the corresponding velocity-update formulas. Migration velocity analysis is very time-consuming; for computational convenience, we discuss how RMO works for synthesized source-record migration. In some extreme situations, such as poorly illuminated areas or steep structures, it is very difficult to obtain enough angle information and the RMO method fails. WEMVA is based on wavefield-extrapolation theory, which successfully overcomes this drawback of ray-based methods: it inverts residual velocities from residual images. Based on migration regression, we study the linearized scattering operator and the linearized residual image; the linearized residual image is the key to WEMVA. Obtaining the residual image by prestack residual migration based on DSR is very inefficient, so we propose obtaining it through the time-shift imaging condition, allowing WEMVA to be implemented with SSR. This evidently reduces the computational cost of the method.
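The time-shift imaging condition used above extends the conventional zero-lag cross-correlation with a temporal lag τ; a standard frequency-domain form (following Sava and Fomel's formulation, given here as a reference rather than the dissertation's exact notation) is:

\[
I(\mathbf{x}, \tau) = \sum_{\text{shots}} \sum_{\omega} S^{*}(\mathbf{x}, \omega)\, R(\mathbf{x}, \omega)\, e^{\,2 i \omega \tau}
\]

Moveout of energy away from τ = 0 in the resulting gathers measures the velocity error, which is what makes this condition usable for residual migration and for WEMVA with an SSR propagator.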
Abstract:
The CSAMT method is playing an increasingly important role in geothermal exploration and in pre-construction surveys for tunnel projects. To guide the interpretation of field data, this paper develops forward-modeling methods from 1D to 3D and inversion methods in 1D and 2D for artificial-source magnetotellurics in the frequency domain. In general, artificial-source data are inverted only after the near field has been corrected under the assumption of a homogeneous half-space; this approach is not suitable for complex structures, for which the assumption is no longer valid. Recently, inversion schemes without near-field correction have been proposed in order to avoid near-field correction errors. We discuss different 1D and 2D inversion schemes using data without near-field correction. Numerical integration is used for the forward modeling of the 1D CSAMT method. An infinite line source is used in the 2D finite-element forward modeling, where a near-field effect occurs, as in the CSAMT method, because of the artificial source. A pseudo-delta function is used to model the source distribution, which reduces the singularity when solving the finite-element equations. The effect on the exploration area is discussed when an anomalous body lies under the source or between the source and the exploration area. A series of numerical tests shows that the 2D finite-element method is correct, and the modeling results have important significance for CSAMT data interpretation. For 3D finite-element forward modeling, the finite-element equations are derived by the Galerkin method, with the divergence condition imposed explicitly on the forward equations; the forward-modeling result for a homogeneous half-space model is correct. Following the new idea of inversion without near-field correction, new 1D and 2D inversion methods are developed in this paper. All of the inversion schemes use data without near-field correction, which avoids introducing errors caused by near-field correction. A modified grid-parameter method and a layer-by-layer inversion method are combined in the 1D inversion scheme. An RRI method with an artificial source is developed, and a finite-element inversion method is used, in the 2D inversion scheme. The inversion results for synthetic data and field data agree with the model and the known geology, respectively, which shows that inversion without near-field correction is feasible. The feasibility of inverting data only within the exploration area is discussed for the case where an anomalous body lies between the source and the exploration area.
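The divergence condition imposed on the 3D forward equations is presumably the current-conservation constraint that is commonly enforced alongside the frequency-domain curl-curl equation to suppress spurious solutions; a generic form (an assumption, since the paper's exact formulation is not reproduced here, and the sign of the conduction term depends on the time convention) is:

\[
\nabla \times \nabla \times \mathbf{E} - i\,\omega \mu_0 \sigma\, \mathbf{E} = -\,i\,\omega \mu_0\, \mathbf{J}_s,
\qquad
\nabla \cdot (\sigma \mathbf{E}) = 0
\]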
Abstract:
Type-omega DPLs (Denotational Proof Languages) are languages for proof presentation and search that offer strong soundness guarantees. LCF-type systems such as HOL offer similar guarantees, but their soundness relies heavily on static type systems. By contrast, DPLs ensure soundness dynamically, through their evaluation semantics; no type system is necessary. This is possible owing to a novel two-tier syntax that separates deductions from computations, and to the abstraction of assumption bases, which is factored into the semantics of the language and allows for sound evaluation. Every type-omega DPL properly contains a type-alpha DPL, which can be used to present proofs in a lucid and detailed form, exclusively in terms of primitive inference rules. Derived inference rules are expressed as user-defined methods, which are "proof recipes" that take arguments and dynamically perform appropriate deductions. Methods arise naturally via parametric abstraction over type-alpha proofs. In that light, the evaluation of a method call can be viewed as a computation that carries out a type-alpha deduction. The type-alpha proof "unwound" by such a method call is called the "certificate" of the call. Certificates can be checked by exceptionally simple type-alpha interpreters, and thus they are useful whenever we wish to minimize our trusted base. Methods are statically closed over lexical environments, but dynamically scoped over assumption bases. They can take other methods as arguments, they can iterate, and they can branch conditionally. These capabilities, in tandem with the bifurcated syntax of type-omega DPLs and their dynamic assumption-base semantics, allow the user to define methods in a style that is disciplined enough to ensure soundness yet fluid enough to permit succinct and perspicuous expression of arbitrarily sophisticated derived inference rules. We demonstrate every major feature of type-omega DPLs by defining and studying NDL-omega, a higher-order, lexically scoped, call-by-value type-omega DPL for classical zero-order natural deduction---a simple choice that allows us to focus on type-omega syntax and semantics rather than on the subtleties of the underlying logic. We start by illustrating how type-alpha DPLs naturally lead to type-omega DPLs by way of abstraction; present the formal syntax and semantics of NDL-omega; prove several results about it, including soundness; give numerous examples of methods; point out connections to the lambda-phi calculus, a very general framework for type-omega DPLs; introduce a notion of computational and deductive cost; define several instrumented interpreters for computing such costs and for generating certificates; explore the use of type-omega DPLs as general programming languages; show that DPLs do not have to be type-less by formulating a static Hindley-Milner polymorphic type system for NDL-omega; discuss some idiosyncrasies of type-omega DPLs such as the potential divergence of proof checking; and compare type-omega DPLs to other approaches to proof presentation and discovery. Finally, a complete implementation of NDL-omega in SML-NJ is given for users who want to run the examples and experiment with the language.
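To make the assumption-base idea concrete, here is a small illustrative sketch, written in Python rather than in any actual DPL, and entirely my own construction (it is not NDL-omega syntax): primitive rules consume an explicit assumption base and fail dynamically when a premise is absent, and a "method" composes primitive rules, so every conclusion it returns is backed by a chain of primitive steps, its certificate.

```python
# Illustrative sketch of assumption-base semantics (not real NDL-omega).

class ProofError(Exception):
    """Raised when a rule is applied to premises absent from the assumption base."""

def modus_ponens(ab, p, q):
    """Primitive rule: from p and ('if', p, q) in the assumption base, conclude q."""
    if p not in ab or ('if', p, q) not in ab:
        raise ProofError("premise missing from assumption base")
    return q

def both(ab, p, q):
    """Primitive rule: from p and q in the assumption base, conclude ('and', p, q)."""
    if p not in ab or q not in ab:
        raise ProofError("premise missing from assumption base")
    return ('and', p, q)

def chain_mp(ab, p, q, r):
    """A 'method': derived rule  p, p->q, q->r  |-  ('and', q, r).

    It deduces only by calling primitive rules; the sequence of primitive
    calls it unwinds plays the role of the certificate of the method call.
    """
    q_ = modus_ponens(ab, p, q)
    ab = ab | {q_}              # extend the assumption base with the lemma
    r_ = modus_ponens(ab, q, r)
    ab = ab | {r_}
    return both(ab, q_, r_)

ab = {'p', ('if', 'p', 'q'), ('if', 'q', 'r')}
print(chain_mp(ab, 'p', 'q', 'r'))   # ('and', 'q', 'r')
```

Soundness here is dynamic in the sense the abstract describes: no type system constrains `chain_mp`, yet it cannot return a conclusion unless each primitive step succeeds against the current assumption base.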
Abstract:
Roberts, O. (2006). Developing the untapped wealth of Britain's 'Celtic fringe': water engineering and the Welsh landscape, 1870-1960. Landscape Research. 31(2), pp.121-133. RAE2008
Abstract:
Wood, Ian; Hieber, M., (2007) 'The Dirichlet problem in convex bounded domains for operators with L∞-coefficients', Differential and Integral Equations 20 pp.721-734 RAE2008
Abstract:
Douglas, Robert; Cullen, M.J.P.; Roulston, I.; Sewell, M.J., (2005) 'Generalized semi-geostrophic theory on a sphere', Journal of Fluid Mechanics 531 pp.123-157 RAE2008
Abstract:
Faculty of Biology: Institute of Molecular Biology and Biotechnology
Abstract:
Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Pharmaceutical Sciences
Abstract:
The Border Gateway Protocol (BGP) is the current inter-domain routing protocol used to exchange reachability information between Autonomous Systems (ASes) in the Internet. BGP supports policy-based routing which allows each AS to independently adopt a set of local policies that specify which routes it accepts and advertises from/to other networks, as well as which route it prefers when more than one route becomes available. However, independently chosen local policies may cause global conflicts, which result in protocol divergence. In this paper, we propose a new algorithm, called Adaptive Policy Management Scheme (APMS), to resolve policy conflicts in a distributed manner. Akin to distributed feedback control systems, each AS independently classifies the state of the network as either conflict-free or potentially-conflicting by observing its local history only (namely, route flaps). Based on the degree of measured conflicts (policy conflict-avoidance vs. -control mode), each AS dynamically adjusts its own path preferences—increasing its preference for observably stable paths over flapping paths. APMS also includes a mechanism to distinguish route flaps due to topology changes, so as not to confuse them with those due to policy conflicts. A correctness and convergence analysis of APMS based on the substability property of chosen paths is presented. Implementation in the SSF network simulator is performed, and simulation results for different performance metrics are presented. The metrics capture the dynamic performance (in terms of instantaneous throughput, delay, routing load, etc.) of APMS and other competing solutions, thus exposing the often neglected aspects of performance.
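As an illustration of the feedback loop described above, here is a minimal sketch (my own simplification with hypothetical names and thresholds, not the authors' code) of how an AS might classify its state from locally observed route flaps and bias its path preferences toward observably stable paths:

```python
# Simplified sketch of APMS-style adaptive path preference (illustrative only).
FLAP_THRESHOLD = 3   # hypothetical: flaps per window signalling potential conflict

class PathState:
    def __init__(self, base_preference):
        self.base_preference = base_preference
        self.effective_preference = base_preference
        self.flaps = 0           # route flaps observed in the current window
        self.stable_windows = 0  # consecutive windows with no flaps

def classify(paths):
    """Classify the local network state from observed route flaps only."""
    total_flaps = sum(p.flaps for p in paths.values())
    return "potentially-conflicting" if total_flaps >= FLAP_THRESHOLD else "conflict-free"

def adjust_preferences(paths):
    """Increase preference for observably stable paths over flapping ones."""
    for p in paths.values():
        p.stable_windows = p.stable_windows + 1 if p.flaps == 0 else 0
        # effective preference grows with sustained stability, shrinks with flapping
        p.effective_preference = (p.base_preference
                                  + p.stable_windows
                                  - 2 * p.flaps)
        p.flaps = 0  # start a new observation window

paths = {"via_AS7018": PathState(100), "via_AS3356": PathState(100)}
paths["via_AS3356"].flaps = 4
print(classify(paths))       # potentially-conflicting
adjust_preferences(paths)
print(max(paths, key=lambda k: paths[k].effective_preference))  # via_AS7018
```

A real deployment would also need the mechanism the abstract mentions for discounting flaps caused by topology changes, so that only policy-induced flaps drive the preference adjustment.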
Abstract:
Strategic reviews of the Irish Food and Beverage Industry have consistently emphasised the need for food and beverage firms to improve their innovation and marketing capabilities, in order to maintain competitiveness in both domestic and overseas markets. In particular, the functional food and beverages market has been singled out as an extremely important emerging market, which Irish firms could benefit from through an increased technological and market orientation. Although health and wellness have been the most significant drivers of new product development (NPD) in recent years, failure rates for new functional foods and beverages have been reportedly high. In that context, researchers in the US, UK, Denmark and Ireland have reported a marked divergence between NPD practices within food and beverage firms and normative advice for successful product development. The high reported failure rates for new functional foods and beverages suggest a failure to manage customer knowledge effectively, as well as a lack of knowledge management between functional disciplines involved in the NPD process. This research explored the concept of managing customer knowledge at the early stages of the NPD process, and applied it to the development of a range of functional beverages, through the use of advanced concept optimisation research techniques, which provided for a more market-oriented approach to new food product development. A sequential exploratory research design strategy using mixed research methods was chosen for this study. First, the qualitative element of this research investigated customers’ choice motives for orange juice and soft drinks, and explored their attitudes and perceptions towards a range of new functional beverage concepts through a combination of 15 in-depth interviews and 3 focus groups. Second, the quantitative element of this research consisted of 3 conjoint-based questionnaires administered to 400 different customers in each study in order to model their purchase preferences for chilled nutrient-enriched and probiotic orange juices, and stimulant soft drinks. The in-depth interviews identified the key product design attributes that influenced customers’ choice motives for orange juice. The focus group discussions revealed that groups of customers were negative towards the addition of certain functional ingredients to natural foods and beverages. K-means cluster analysis was used to quantitatively identify segments of customers with similar preferences for chilled nutrient-enriched and probiotic orange juices, and stimulant soft drinks. Overall, advanced concept optimisation research methods facilitate the integration of the customer at the early stages of the NPD process, which promotes a multi-disciplinary approach to new food product design. This research illustrated how advanced concept optimisation research methods could contribute towards effective and efficient knowledge management in the new food product development process.
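As a sketch of the segmentation step described above (with hypothetical data and variable names; the study's actual conjoint attributes are not reproduced here), K-means can be run on per-respondent part-worth utilities estimated from the conjoint questionnaires:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical part-worth utilities per respondent (rows) for product
# attribute levels such as price, functional ingredient, and flavour (columns).
rng = np.random.default_rng(0)
utilities = rng.normal(size=(400, 6))   # 400 respondents, 6 attribute levels

# Choose k segments; in practice k is selected by comparing candidate solutions.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(utilities)

# Segment sizes and mean utilities characterize each preference segment.
for seg in range(3):
    members = utilities[kmeans.labels_ == seg]
    print(f"segment {seg}: n={len(members)}, "
          f"mean utilities={members.mean(axis=0).round(2)}")
```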
Abstract:
Phages belonging to the 936 group represent some of the most prevalent and frequently isolated phages in dairy fermentation processes that use Lactococcus lactis as the primary starter culture. In recent years, extensive research has been carried out to characterise this phage group at the genomic level in an effort to understand how the 936 group phages dominate this particular niche and cause regular problems during large-scale milk fermentations. This thesis describes a large-scale screening of industrial whey samples, leading to the isolation of forty-three genetically distinct lactococcal phages. Using multiplex PCR, all phages were identified as members of the 936 group. The complete genomes of thirty-eight of these phages were determined using next-generation sequencing technologies, which identified several regions of divergence, including the structural region surrounding the major tail protein, the replication region, and the genes involved in phage DNA packaging. In a number of phages, the latter genomic region was found to harbour genes encoding putative orphan methyltransferases. Using single-molecule real-time (SMRT) sequencing and heterologous gene expression, the target motifs of several of these MTases were determined and subsequently shown to actively protect phage DNA from restriction endonuclease activity. Comparative analysis of the thirty-eight phages with fifty-two previously sequenced members of this group showed that the core genome consists of 28 genes, while the non-core genome fluctuates irrespective of geographical location or time of isolation. This study highlights the continued need for large-scale characterisation of the bacteriophage populations infecting industrial fermentation facilities, in an effort to further our understanding of dairy phages and of ways to control their proliferation.
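A minimal sketch of the comparative step described above (hypothetical input format; orthologous gene families are assumed to be already assigned): the core genome is the set of gene families present in every genome, while the remainder forms the fluctuating non-core genome.

```python
# Illustrative core-genome computation over gene-family presence sets.
# Input: one set of orthologous gene-family IDs per sequenced phage genome.
genomes = {
    "phage_A": {"mtp", "rep", "ter_small", "ter_large", "mtase_1"},
    "phage_B": {"mtp", "rep", "ter_small", "ter_large"},
    "phage_C": {"mtp", "rep", "ter_small", "ter_large", "lysin_var"},
}

core = set.intersection(*genomes.values())   # families present in every genome
pan = set.union(*genomes.values())           # families present in at least one
non_core = pan - core                        # the variable complement

print(f"core genome: {sorted(core)}")
print(f"non-core genome: {sorted(non_core)}")
```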