883 results for Minimization of open stack problem
Abstract:
This thesis mainly addresses the wavelet transform and the frequency-division method. It describes frequency-division processing of pre-stack and post-stack seismic data and its application to inversion noise attenuation, frequency-division residual static correction, and the use of high-resolution data in reservoir inversion. The thesis not only presents frequency division and inversion in theory but also validates them by model calculation; all the methods are integrated, and processing of actual data demonstrates their results. Starting from wavelet transform theory, the thesis analyzes the differences and limitations of t-x and f-x prediction-filter noise attenuation. It argues that, given the differences between noise and signal in phase, amplitude and frequency, noise and signal can be separated by frequency-division processing based on wavelet theory. Comparison with the f-x coherent-noise removal method demonstrates the effectiveness and practicality of frequency division for isolating coherent and random noise. To avoid side effects in noise-free areas, an area-constraint method applies the frequency-division processing only within the noise area, which solves the problem of low-frequency loss in noise-free areas. Residual moveout differences in seismic data processing strongly affect stack imaging and resolution, and different frequency components exhibit different residual moveout. Frequency-division residual static correction computes the residual correction magnitude separately for each frequency band; it handles the frequency dependence of the correction and protects the high-frequency information in the data. On actual data it gives good results in eliminating residual moveout of pre-stack data, improving stack image quality and raising data resolution. The thesis also analyzes the character of random noise and its description in the time and frequency domains, derives inversion-based prediction methods, and realizes frequency-division inversion attenuation of random noise; analysis of actual processing results shows that inversion-based noise removal has advantages of its own. By analyzing resolution-related parameters and high-resolution processing technology, the thesis describes the relation between the frequency domain and resolution, the parameters that govern resolution, and methods for increasing it; it also gives processing flows for high-resolution data and the effect of high-resolution data on reservoir inversion, and finally verifies the accuracy and precision of the reservoir inversion results. The research shows that frequency-division noise attenuation, frequency-division residual correction and inversion noise attenuation are effective methods for increasing the signal-to-noise ratio and resolution of seismic data.
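A minimal sketch of the frequency-division idea, assuming a standard discrete wavelet transform as the band-splitting tool; the wavelet, band indices and threshold rule below are illustrative assumptions, not the thesis's parameters:

```python
# Toy frequency-division denoise: split a trace into wavelet bands,
# soft-threshold the chosen "noise" bands only, and reconstruct.
import numpy as np
import pywt

def frequency_division_denoise(trace, wavelet="db4", levels=4,
                               noise_bands=(1, 2), k=3.0):
    """Attenuate noise in selected wavelet detail bands of a 1-D trace."""
    coeffs = pywt.wavedec(trace, wavelet, level=levels)
    # coeffs[0] is the approximation; coeffs[1:] are details, coarse to fine.
    for band in noise_bands:
        c = coeffs[band]
        sigma = np.median(np.abs(c)) / 0.6745      # robust noise estimate
        coeffs[band] = pywt.threshold(c, k * sigma, mode="soft")
    return pywt.waverec(coeffs, wavelet)[:len(trace)]

# Example: a 30 Hz reflection-like signal buried in broadband noise.
t = np.linspace(0.0, 1.0, 1024)
noisy = np.sin(2 * np.pi * 30 * t) + 0.5 * np.random.randn(t.size)
denoised = frequency_division_denoise(noisy)
```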
Abstract:
We consider the problem of matching model features and sensory data features in the presence of geometric uncertainty, for the purpose of object localization and identification. The problem is to construct sets of model feature and sensory data feature pairs that are geometrically consistent, given that there is uncertainty in the geometry of the sensory data features. If there is no geometric uncertainty, polynomial-time algorithms are possible for feature matching, yet these approaches can fail when there is uncertainty in the geometry of data features. Existing matching and recognition techniques that account for geometric uncertainty in features either cannot guarantee finding a correct solution, or can construct geometrically consistent sets of feature pairs but have worst-case exponential complexity in the number of features. The major new contribution of this work is to demonstrate a polynomial-time algorithm for constructing sets of geometrically consistent feature pairs given uncertainty in the geometry of the data features. We show that under a certain model of geometric uncertainty the feature matching problem in the presence of uncertainty is of polynomial complexity. This has important theoretical implications, demonstrating an upper bound on the complexity of the matching problem and offering insight into the nature of the matching problem itself. These insights prove useful in the solution of the matching problem in higher-dimensional cases as well, such as matching three-dimensional models to either two- or three-dimensional sensory data. The approach is based on an analysis of the space of feasible transformation parameters. This paper outlines the mathematical basis for the method and describes the implementation of an algorithm for the procedure. Experiments demonstrating the method are reported.
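The polynomial-time bound can be made concrete with a toy version of the transformation-space analysis. The sketch below is a hedged illustration, not the paper's algorithm: it restricts the transformation to pure 2-D translation and models data-feature uncertainty as a disk of radius eps. Each model/data pairing is then feasible on a disk in translation space, and a largest mutually consistent pairing set can be found by testing only disk centres and pairwise boundary intersection points, giving O((mn)^2) candidates with an O(mn) check each.

```python
import itertools
import numpy as np

def consistent_pairs(model, data, eps):
    """Largest set of (model, data) pairings consistent with one translation,
    given disk uncertainty of radius eps on each data feature."""
    # Each pairing (i, j) is feasible for translations inside a disk of
    # radius eps centred at data[j] - model[i].
    disks = [(i, j, data[j] - model[i])
             for i in range(len(model)) for j in range(len(data))]
    # An optimal translation can be taken at a disk centre or at an
    # intersection point of two disk boundaries.
    candidates = [c for _, _, c in disks]
    for (_, _, a), (_, _, b) in itertools.combinations(disks, 2):
        d = np.linalg.norm(b - a)
        if 0.0 < d <= 2.0 * eps:                   # the two disks intersect
            mid, h = (a + b) / 2.0, np.sqrt(eps**2 - (d / 2.0)**2)
            perp = np.array([a[1] - b[1], b[0] - a[0]]) / d
            candidates += [mid + h * perp, mid - h * perp]
    best = []
    for tr in candidates:
        s = [(i, j) for i, j, c in disks if np.linalg.norm(tr - c) <= eps + 1e-9]
        if len(s) > len(best):
            best = s
    return best

model = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
data = [np.array([2.0, 1.0]), np.array([3.0, 1.05])]
print(consistent_pairs(model, data, eps=0.1))      # -> [(0, 0), (1, 1)]
```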
Abstract:
The motion planning problem is of central importance to the fields of robotics, spatial planning, and automated design. In robotics we are interested in the automatic synthesis of robot motions, given high-level specifications of tasks and geometric models of the robot and obstacles. The Mover's problem is to find a continuous, collision-free path for a moving object through an environment containing obstacles. We present an implemented algorithm for the classical formulation of the three-dimensional Mover's problem: given an arbitrary rigid polyhedral moving object P with three translational and three rotational degrees of freedom, find a continuous, collision-free path taking P from some initial configuration to a desired goal configuration. This thesis describes the first known implementation of a complete algorithm (at a given resolution) for the full six-degree-of-freedom Mover's problem. The algorithm transforms the six-degree-of-freedom planning problem into a point navigation problem in a six-dimensional configuration space (called C-Space). The C-Space obstacles, which characterize the physically unachievable configurations, are directly represented by six-dimensional manifolds whose boundaries are five-dimensional C-surfaces. By characterizing these surfaces and their intersections, collision-free paths may be found by the closure of three operators which (i) slide along five-dimensional intersections of level C-Space obstacles; (ii) slide along one- to four-dimensional intersections of level C-surfaces; and (iii) jump between six-dimensional obstacles. Implementing the point navigation operators requires solving fundamental representational and algorithmic questions: we derive new structural properties of the C-Space constraints and show how to construct and represent C-surfaces and their intersection manifolds. A definition and new theoretical results are presented for a six-dimensional C-Space extension of the generalized Voronoi diagram, called the C-Voronoi diagram, whose structure we relate to the C-surface intersection manifolds. The representations and algorithms we develop impact many geometric planning problems, and extend to Cartesian manipulators with six degrees of freedom.
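The reduction at the heart of the approach, planning for a point in configuration space, can be illustrated at toy scale. The sketch below runs a breadth-first search over a discretized 2-D C-Space with a placeholder collision predicate; it stands in for the thesis's six-dimensional, operator-based search, which is not reproduced here.

```python
# Point navigation in a discretized configuration space: BFS on a grid.
from collections import deque

def plan(start, goal, in_collision, size):
    """Breadth-first search for a collision-free path on a size x size grid."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:                # walk back to the start
                path.append(node)
                node = prev[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in prev and not in_collision(nxt)):
                prev[nxt] = node
                queue.append(nxt)
    return None                                    # no path at this resolution

# Example: a vertical wall of C-Space obstacle with a single gap.
blocked = lambda c: c[0] == 5 and c[1] != 7
print(plan((0, 0), (9, 9), blocked, 10))
```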
Abstract:
Developing learning, teaching and assessment strategies that foster ongoing engagement and provide inspiration to academic staff is a particular challenge. This paper demonstrates how an institutional learning, teaching and assessment strategy was developed and a ‘dynamic’ strategy created in order to achieve ongoing enhancement of the quality of the student learning experience. The authors use the discussion of the evolution, development and launch of the Strategy and its underpinning Resource Bank to reflect on the hopes and intentions behind the approach: first, the paper discusses the collaborative and iterative approach taken to the development of an institutional learning, teaching and assessment strategy; second, the development of open-access educational resources to underpin the strategy. The paper then outlines staff engagement with the resource bank and the positive outcomes identified to date, identifies the next steps in achieving the ambition behind the strategy, and outlines the action research and fuller evaluation that will be used to monitor progress and ensure responsive learning at institutional level.
Abstract:
Urquhart, C., Spink, S., Thomas, R., Yeoman, A., Durbin, J., Turner, J., Fenton, R. & Armstrong, C. (2004). JUSTEIS: JISC Usage Surveys: Trends in Electronic Information Services Final report 2003/2004 Cycle Five. Aberystwyth: Department of Information Studies, University of Wales Aberystwyth. Sponsorship: JISC
Abstract:
Spink, S., Urquhart, C., Cox, A. & Higher Education Academy - Information and Computer Sciences Subject Centre. (2007). Procurement of electronic content across the UK National Health Service and Higher Education sectors. Report to JISC executive and LKDN executive. Sponsorship: JISC/LKDN
Abstract:
M. Hieber, I. Wood: The Dirichlet problem in convex bounded domains for operators with L^∞-coefficients, Diff. Int. Eq. 20, no. 7 (2007), 721-734.
Abstract:
Wood, Ian; Hieber, M. (2007) 'The Dirichlet problem in convex bounded domains for operators with L^∞-coefficients', Differential and Integral Equations 20, pp. 721-734.
Abstract:
Similarly to protein folding, the association of two proteins is driven by a free energy funnel, determined by favorable interactions in some neighborhood of the native state. We describe a docking method based on stochastic global minimization of funnel-shaped energy functions in the space of rigid body motions (SE(3)) while accounting for flexibility of the interface side chains. The method, called semi-definite programming-based underestimation (SDU), employs a general quadratic function to underestimate a set of local energy minima and uses the resulting underestimator to bias further sampling. While SDU effectively minimizes functions with funnel-shaped basins, its application to docking in the rotational and translational space SE(3) is not straightforward due to the geometry of that space. We introduce a strategy that uses separate independent variables for side-chain optimization, center-to-center distance of the two proteins, and five angular descriptors of the relative orientations of the molecules. The removal of the center-to-center distance turns out to vastly improve the efficiency of the search, because the five-dimensional space now exhibits a well-behaved energy surface suitable for underestimation. This algorithm explores the free energy surface spanned by encounter complexes that correspond to local free energy minima and shows similarity to the model of macromolecular association that proceeds through a series of collisions. Results for standard protein docking benchmarks establish that in this space the free energy landscape is a funnel in a reasonably broad neighborhood of the native state and that the SDU strategy can generate docking predictions with less than 5 Å ligand interface Cα root-mean-square deviation while achieving an approximately 20-fold efficiency gain compared to Monte Carlo methods.
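A minimal sketch of the underestimation step, assuming cvxpy for the semidefinite constraint: fit a convex quadratic q(x) = 0.5 x'Ax + b'x + c, with A positive semidefinite, that lies at or below a set of sampled local minima, then take its minimizer as the predicted funnel bottom. The five-dimensional samples and energies below are synthetic stand-ins for the angular descriptors and docking energies.

```python
import cvxpy as cp
import numpy as np

def fit_underestimator(X, f):
    """Fit q(x) = 0.5*x'Ax + b'x + c with A PSD and q(x_i) <= f_i,
    minimizing the total gap so q hugs the sampled minima from below."""
    m, d = X.shape
    A = cp.Variable((d, d), PSD=True)              # PSD => q is convex
    b = cp.Variable(d)
    c = cp.Variable()
    q = 0.5 * cp.sum(cp.multiply(X @ A, X), axis=1) + X @ b + c
    prob = cp.Problem(cp.Minimize(cp.sum(f - q)), [q <= f])
    prob.solve()
    return A.value, b.value, c.value

# Synthetic stand-ins for sampled local minima in the 5-D angular space.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
f = np.sum(X**2, axis=1) + 0.1 * rng.random(40)
A, b, c = fit_underestimator(X, f)
# Predicted funnel bottom: the quadratic's minimizer, A x* = -b.
x_star = np.linalg.solve(A + 1e-9 * np.eye(5), -b)
```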
Abstract:
In many multi-camera vision systems the effect of camera locations on the task-specific quality of service is ignored. Researchers in Computational Geometry have proposed elegant solutions for some sensor location problem classes. Unfortunately, these solutions utilize unrealistic assumptions about the cameras' capabilities that make these algorithms unsuitable for many real-world computer vision applications: unlimited field of view, infinite depth of field, and/or infinite servo precision and speed. In this paper, the general camera placement problem is first defined with assumptions that are more consistent with the capabilities of real-world cameras. The region to be observed by cameras may be volumetric, static or dynamic, and may include holes that are caused, for instance, by columns or furniture in a room that can occlude potential camera views. A subclass of this general problem can be formulated in terms of planar regions that are typical of building floorplans. Given a floorplan to be observed, the problem is then to efficiently compute a camera layout such that certain task-specific constraints are met. A solution to this problem is obtained via binary optimization over a discrete problem space. In preliminary experiments the performance of the resulting system is demonstrated with different real floorplans.
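A hedged sketch of the formulation: discretize the floorplan into sample points, precompute each candidate camera's coverage under a finite field of view and range, and select cameras so the points are covered. A greedy set-cover heuristic stands in for the paper's binary optimization, occlusion testing is omitted, and all geometry below is illustrative.

```python
import numpy as np

def coverage(cam_pos, heading_deg, points, fov_deg=60.0, max_range=8.0):
    """Indices of sample points inside the camera's wedge-shaped view."""
    pos, heading = np.asarray(cam_pos, float), np.radians(heading_deg)
    seen = set()
    for i, p in enumerate(points):
        v = np.asarray(p, float) - pos
        r = np.hypot(v[0], v[1])
        if r == 0.0 or r > max_range:              # finite depth of field
            continue
        ang = (np.arctan2(v[1], v[0]) - heading + np.pi) % (2 * np.pi) - np.pi
        if abs(ang) <= np.radians(fov_deg) / 2.0:  # finite field of view
            seen.add(i)
    return seen

def place_cameras(points, candidates, budget=10):
    """Greedy set cover: pick the candidate seeing the most uncovered points."""
    covers = [coverage(pos, h, points) for pos, h in candidates]
    uncovered, chosen = set(range(len(points))), []
    while uncovered and len(chosen) < budget:
        best = max(range(len(candidates)), key=lambda c: len(covers[c] & uncovered))
        if not covers[best] & uncovered:
            break                                  # remaining points unobservable
        chosen.append(candidates[best])
        uncovered -= covers[best]
    return chosen, uncovered

# Example: a 10 x 10 m room sampled on a 1 m grid, candidates on two walls.
pts = [(x, y) for x in range(10) for y in range(10)]
cands = [((0, y), 0) for y in range(0, 10, 3)] + [((9, y), 180) for y in range(0, 10, 3)]
layout, missed = place_cameras(pts, cands)
```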
Abstract:
An improved technique for 3D head tracking under varying illumination conditions is proposed. The head is modeled as a texture mapped cylinder. Tracking is formulated as an image registration problem in the cylinder's texture map image. To solve the registration problem in the presence of lighting variation and head motion, the residual error of registration is modeled as a linear combination of texture warping templates and orthogonal illumination templates. Fast and stable on-line tracking is then achieved via regularized, weighted least squares minimization of the registration error. The regularization term tends to limit potential ambiguities that arise in the warping and illumination templates. It enables stable tracking over extended sequences. Tracking does not require a precise initial fit of the model; the system is initialized automatically using a simple 2-D face detector. The only assumption is that the target is facing the camera in the first frame of the sequence. The warping templates are computed at the first frame of the sequence. Illumination templates are precomputed off-line over a training set of face images collected under varying lighting conditions. Experiments in tracking are reported.
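The tracking update reduces to a regularized, weighted least-squares solve. The sketch below, with random placeholder templates, models the residual image as a linear combination of warping templates B and illumination templates U and solves for the coefficients under per-pixel weights w and a regularizer lam; the names and sizes are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def track_update(r, B, U, w, lam=1e-2):
    """r: (n,) residual image; B: (n,k) warping templates; U: (n,m)
    illumination templates; w: (n,) per-pixel weights.
    Solves min_q ||W^(1/2)(r - [B U]q)||^2 + lam*||q||^2."""
    A = np.hstack([B, U])
    Aw = A * w[:, None]                            # apply the diagonal weights
    H = A.T @ Aw + lam * np.eye(A.shape[1])        # regularized normal equations
    q = np.linalg.solve(H, Aw.T @ r)
    return q[:B.shape[1]], q[B.shape[1]:]          # (motion, illumination) coeffs

# Toy example: recover unit warp coefficients from a noisy residual.
n, k, m = 500, 6, 10
B, U = np.random.randn(n, k), np.random.randn(n, m)
r = B @ np.ones(k) + 0.1 * np.random.randn(n)
motion, illum = track_update(r, B, U, np.ones(n))
```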
Abstract:
How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? Consider, for example, a deer moving behind a bush. Here the partially occluded fragments of motion signals available to an observer must be coherently grouped into the motion of a single object. A 3D FORMOTION model comprises five important functional interactions involving the brain’s form and motion systems that address such situations. Because the model’s stages are analogous to areas of the primate visual system, we refer to the stages by corresponding anatomical names. In one of these functional interactions, 3D boundary representations, in which figures are separated from their backgrounds, are formed in cortical area V2. These depth-selective V2 boundaries select motion signals at the appropriate depths in MT via V2-to-MT signals. In another, motion signals in MT disambiguate locally incomplete or ambiguous boundary signals in V2 via MT-to-V1-to-V2 feedback. The third functional property concerns resolution of the aperture problem along straight moving contours by propagating the influence of unambiguous motion signals generated at contour terminators or corners. Here, sparse “feature tracking signals” from, e.g., line ends, are amplified to overwhelm numerically superior ambiguous motion signals along line segment interiors. In the fourth, a spatially anisotropic motion grouping process takes place across perceptual space via MT-MST feedback to integrate veridical feature-tracking and ambiguous motion signals to determine a global object motion percept. The fifth property uses the MT-MST feedback loop to convey an attentional priming signal from higher brain areas back to V1 and V2. The model's use of mechanisms such as divisive normalization, endstopping, cross-orientation inhibition, and long-range cooperation is described. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses.
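Of the mechanisms listed, divisive normalization is simple enough to sketch. The toy example below is an illustration rather than the model's actual circuit: each cell's drive is divided by pooled neighborhood activity, the kind of operation that lets a strong, sparse feature-tracking signal stand out against weaker ambiguous signals.

```python
import numpy as np

def divisive_normalization(drive, pool_width=3, sigma=0.1):
    """Divide each cell's drive by the pooled activity of its neighborhood."""
    kernel = np.ones(pool_width) / pool_width
    pooled = np.convolve(drive, kernel, mode="same")
    return drive / (sigma + pooled)

# One strong feature-tracking signal among weak ambiguous ones.
ambiguous = np.array([0.2, 0.2, 0.2, 1.0, 0.2])
print(divisive_normalization(ambiguous))
```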
Abstract:
The core of this thesis is the study of NATO’s Comprehensive Approach strategy to state building in Afghanistan between 2006 and 2011. It argues that this strategy sustained operational and tactical practices which were ineffective in responding to the evolved nature of the security problem. The thesis interrogates the Comprehensive Approach along ontological, empirical and epistemological lines and concludes that the failure of the Comprehensive Approach in the specific Afghan case is, in fact, indicative of underlying theoretical and pragmatic flaws which, therefore, generalize the dilemma. The research is pragmatic in nature, employing mixed methods (quantitative and qualitative) concurrently. Qualitative methods include research into primary and secondary literature sources supplemented with the author’s personal experiences in Afghanistan in 2008 and various NATO HQ and Canadian settings. Quantitative research includes an empirical case study focussing on NATO’s Afghan experience and its attempt at state building between 2006 and 2011. This study incorporates a historical review of NATO’s evolutionary involvement in Afghanistan across the subject timeframe; offers an analysis of human development and governance-related data mapped to expected outcomes of the Afghan National Development Strategy and NATO’s comprehensive campaign design; and interrogates the Comprehensive Approach strategy by means of an analysis of conceptual, institutional and capability gaps in the context of an integrated investigational framework. The results of the case study lead to an investigation of a series of research questions related to the potential impact of the failure of the Comprehensive Approach for NATO in Afghanistan and the limits of state building as a means of attaining security for the Alliance.
Abstract:
This work illustrates the influence of wind forecast errors on system costs, wind curtailment and generator dispatch in a system with high wind penetration. Realistic wind forecasts of different specified accuracy levels are created using an auto-regressive moving average model and these are then used in the creation of day-ahead unit commitment schedules. The schedules are generated for a model of the 2020 Irish electricity system with 33% wind penetration using both stochastic and deterministic approaches. Improvements in wind forecast accuracy are demonstrated to deliver: (i) clear savings in total system costs for deterministic and, to a lesser extent, stochastic scheduling; (ii) a decrease in the level of wind curtailment, with close agreement between stochastic and deterministic scheduling; and (iii) a decrease in the dispatch of open cycle gas turbine generation, evident with deterministic, and to a lesser extent, with stochastic scheduling.
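A minimal sketch of the forecast-generation step, assuming an ARMA(1,1) error process: simulate an autocorrelated error series, scale it to a target accuracy level, and add it to the observed wind trace. The coefficients and the mean-absolute-error scaling are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def arma_errors(n, phi=0.9, theta=0.3, sigma=1.0, seed=0):
    """ARMA(1,1) series: e[t] = phi*e[t-1] + eps[t] + theta*eps[t-1]."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, n)
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + eps[t] + theta * eps[t - 1]
    return e

def synthetic_forecast(observed, target_mae):
    """Day-ahead forecast of a specified accuracy: observed plus scaled errors."""
    e = arma_errors(len(observed))
    e *= target_mae / np.mean(np.abs(e))           # hit the desired accuracy level
    return np.clip(observed + e, 0.0, None)        # wind power is non-negative

observed = np.abs(np.sin(np.linspace(0, 10, 240))) * 100   # toy wind trace, MW
forecast = synthetic_forecast(observed, target_mae=8.0)
```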
Abstract:
During the summer of 1994, Archaeology in Annapolis conducted archaeological investigations of the city block bounded by Franklin, South and Cathedral Streets in the city of Annapolis. This Phase III excavation was conducted as a means to identify subsurface cultural resources in the impact area associated with the proposed construction of the Anne Arundel County Courthouse addition. This impact area included both the upper and lower parking lots used by Courthouse employees. Investigations were conducted in the form of mechanical trenching and hand-excavated units. Excavations in the upper lot area yielded significant information concerning the interior area of the block. Known as Bellis Court, this series of rowhouses was constructed in the late nineteenth century and was used as rental properties by African-Americans. The dwellings remained until the middle of the twentieth century, when they were demolished in preparation for the construction of a Courthouse addition. Portions of the foundation of a house owned by William H. Bellis in the 1870s were also exposed in this area. Construction of this house was begun by William Nicholson around 1730 and completed by Daniel Dulany in 1732/33. It was demolished in 1896 by James Munroe, a Trustee for Bellis. Excavations in the upper lot also revealed the remains of a late seventeenth/early eighteenth century wood-lined cellar, believed to be part of the earliest known structure on Lot 58. After an initially rapid deposition of fill around 1828, this cellar was gradually covered with soil throughout the remainder of the nineteenth century. The fill deposit in the cellar feature yielded a mixed assemblage of artifacts that included sherds of early materials such as North Devon gravel-tempered earthenware, North Devon sgraffito and Northern Italian slipware, along with creamware, pearlware and whiteware. In the lower parking lot, numerous artifacts were recovered from yard scatter associated with the houses that at one time fronted along Cathedral Street and were occupied by African-Americans. An assemblage of late seventeenth century/early eighteenth century materials and several slag deposits from an early forge were recovered from this second area of study. The materials associated with the forge, including portions of a crucible, provided evidence of some of the earliest industry in Annapolis. Investigations in both the upper and lower parking lots added to the knowledge of the changing landscape within the project area, including a prevalence of open space in early periods, a surprising survival of impermanent structures, and a gradual regrading and filling of the block with houses and interior courts. Excavations at the Anne Arundel County Courthouse proved this to be a multi-component site, rich in cultural resources from Annapolis' Early Settlement Period through its Modern Period (as specified by Maryland's Comprehensive Historic Preservation Plan (Weissman 1986)). This report provides detailed interpretations of the archaeological findings of these Phase III investigations.