870 results for one step estimation
Abstract:
Large earthquakes, such as the 1960 Chile earthquake and the Sumatra-Andaman earthquake of December 26, 2004 in Indonesia, have excited the Earth's free oscillations. The eigenfrequencies of the Earth's free oscillations are closely related to the Earth's internal structure. Conventional approaches, which mainly calculate the eigenfrequencies analytically or analyse observations, cannot easily follow the whole process from the occurrence of an earthquake to the excitation of the Earth's free oscillations. We therefore use numerical methods combined with large-scale parallel computing to study the Earth's free oscillations excited by giant earthquakes. We first review the research on and development of the Earth's free oscillations and the basic theory in a spherical coordinate system. We then review the numerical simulation of seismic wave propagation and the basic theory of the spectral element method for simulating global seismic wave propagation. As a first step towards studying the Earth's free oscillations, we use a finite element method to simulate the propagation of elastic waves and the generation of oscillations in the chime bell of Marquis Yi of Zeng, which has an oval cross-section, when different parts of the bell are struck. The bronze chime bells of Marquis Yi of Zeng are precious cultural relics of China. The bells have a two-tone acoustic characteristic: striking different parts of the bell generates different tones. By analysing the vibration in the bell and its spectrum, we further the understanding of the mechanism of the two-tone acoustic characteristic of the chime bell of Marquis Yi of Zeng. The preliminary calculations clearly show that two different modes of oscillation can be generated by striking different parts of the bell, and indicate that finite element simulation of the processes of wave propagation and two-tone generation in the chime bell of Marquis Yi of Zeng is feasible. These analyses provide a new quantitative and visual way to explain the mystery of the two-tone acoustic characteristic. The method suggested by this study can be applied to simulate free oscillations excited by great earthquakes in a complex Earth structure. Given the large scale of the Earth's structure, small-scale, low-precision numerical simulation cannot meet the requirements. Taking advantage of the increasing capacity of high-performance parallel computing and of progress on fully numerical solutions for seismic wave fields in realistic three-dimensional spherical models, we combined the spectral element method with high-performance parallel computing to simulate seismic wave propagation in the Earth's interior, without the effects of the Earth's gravitational potential. The numerical simulations show that the calculated toroidal modes agree well with the theoretical values; although the accuracy of our results is limited, the calculated peaks are only slightly distorted by three-dimensional effects. There are, however, large differences between our calculated spheroidal modes and the theoretical values, because the Earth's gravitation is not included in the numerical model, which makes our values smaller than the theoretical ones. For the lower-order modes, the effect of the Earth's gravitation makes the periods of the spheroidal modes shorter.
However, we cannot yet include the effects of the Earth's gravitational potential in the numerical model when simulating the spheroidal oscillations, but the results still demonstrate that numerical simulation of the Earth's free oscillations is feasible. We simulate the Earth's free oscillations under a spherically symmetric Earth model using several specific source mechanisms. The results show quantitatively that free oscillations excited by different earthquakes differ, and that, for the free oscillations excited by the same earthquake, the oscillations differ between locations. We also explore how attenuation in the Earth's medium affects the free oscillations and compare the results with observations. Attenuation does influence the Earth's free oscillations, though its effect on the lower-frequency fundamental modes is weak. Finally, taking the 2008 Wenchuan earthquake as an example, we employ the spectral element method together with large-scale parallel computing to investigate the characteristics of the seismic wave propagation excited by the Wenchuan earthquake. We calculate synthetic seismograms with a one-point source model and a three-point source model. Full 3-D visualization of the numerical results displays the evolution of the seismic wave field with time. The three-point source, which was proposed by recent investigations based on field observations and inversion, captures the spatial and temporal characteristics of the source rupture process better than the one-point source. Preliminary results show that the synthetic signals calculated from the three-point source agree well with the observations. This further indicates that the rupture process of the Wenchuan earthquake was a multi-stage rupture composed of at least three stages. In conclusion, numerical simulation can not only handle problems involving the Earth's ellipticity and anisotropy, which can be treated by conventional methods, but can also ultimately address problems involving topography and lateral heterogeneity. In the future we will try to fully implement self-gravitation in the spectral element method and continue to study the Earth's free oscillations with numerical simulations, in particular how the Earth's lateral heterogeneity affects the free oscillations. This will make it possible to bring modal spectral data increasingly to bear on furthering our understanding of the Earth's three-dimensional structure.
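To make the mode-identification step concrete, the following Python sketch shows the kind of spectrum analysis used to pick mode peaks from a long-period record and read them off in millihertz, so that calculated peaks can be compared with theoretical eigenfrequencies. It is only an illustration under stated assumptions (a synthetic two-component signal, an assumed 10 s sampling interval and 48 h window), not the spectral element code of the thesis.

```python
# Illustrative only: a synthetic two-mode signal stands in for a long-period
# seismogram; the sampling interval, window length, and frequency band are
# assumptions, not values from the thesis.
import numpy as np

def mode_spectrum(trace, dt):
    """Amplitude spectrum of a long-period record sampled every dt seconds."""
    x = (trace - trace.mean()) * np.hanning(len(trace))  # taper to reduce leakage
    return np.fft.rfftfreq(len(x), d=dt), np.abs(np.fft.rfft(x))

def pick_peaks(freq, spec, fmin=0.2e-3, fmax=3.0e-3, n_peaks=10):
    """The n_peaks largest local maxima inside the normal-mode band (Hz)."""
    band = (freq >= fmin) & (freq <= fmax)
    f, s = freq[band], spec[band]
    idx = np.where((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))[0] + 1
    idx = idx[np.argsort(s[idx])[::-1][:n_peaks]]
    return np.sort(f[idx])

# 48 hours of synthetic data sampled every 10 s, with two artificial "modes".
dt = 10.0
t = np.arange(int(48 * 3600 / dt)) * dt
trace = np.sin(2 * np.pi * 0.31e-3 * t) + 0.5 * np.sin(2 * np.pi * 0.38e-3 * t)
freq, spec = mode_spectrum(trace, dt)
print([f"{f * 1e3:.3f} mHz" for f in pick_peaks(freq, spec, n_peaks=2)])
```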
Abstract:
At the 70th SEG Annual Meeting, many authors presented results on wave-equation prestack depth migration. Wavefield imaging methods based on the wave equation have matured and become the main direction of seismic imaging. Imaging complex media has been a main theme of the national "85" and "95" reservoir geophysics key projects and the Knowledge Innovation Key Project of the Chinese Academy of Sciences. Furthermore, we have begun to study the particular oilfield conditions of our country together with international research groups. Against this background, the author combines symplectic ideas with wave-equation prestack depth migration and develops an efficient wave-equation prestack depth migration method. The purpose of this work is to find a way to image the complex geological targets of Chinese oilfields and to form a seismic data processing workflow. The paper gives the approximation of the one-way wave equation operator and shows the numerical results. Comparisons are made between the split-step phase method, Kirchhoff, and Ray+FD methods on an impulse response, a simple model, and the Marmousi model. The results show that the method in this paper has higher accuracy. Four field data examples are also given; their results demonstrate that the method is usable in practice. Velocity estimation is an important part of wave-equation prestack depth migration. A parallel velocity estimation program has been written and tested on Beowulf clusters; it can build a velocity profile automatically. An example on the Marmousi model is shown in the third part of the paper to demonstrate the method, and another field data example is also given. The Beowulf cluster is a convergence point of high-performance computer architectures, and today it is a good choice for institutes and small companies to complete their computing tasks. The paper compares the computation of wave-equation prestack migration on a Beowulf cluster, the IBM SP2 (24 nodes) in Daqing, and the Shuguang 3000, together with their prices. The results show that the Beowulf cluster is an efficient way to handle the large amount of computation required by wave-equation prestack depth migration, especially in 3D.
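As a concrete illustration of the one-way wave-equation operator discussed above, the following Python sketch performs one depth step of split-step Fourier extrapolation: a phase shift in the wavenumber domain using a reference velocity, followed by a space-domain correction for lateral velocity variation. The grid, frequency, and velocity slice are made-up values, and a real prestack migration would loop over frequencies, shots, and depths and apply an imaging condition; this is a sketch of the operator only, not the symplectic scheme developed in the thesis.

```python
# Minimal split-step Fourier sketch for one monochromatic depth step.
# All sizes and velocities below are illustrative assumptions.
import numpy as np

def split_step_extrapolate(wavefield, v_slice, v_ref, omega, dx, dz):
    """Extrapolate a monochromatic wavefield (complex array over x) by one depth step dz."""
    nx = wavefield.size
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    # Phase shift in the wavenumber domain with the laterally constant reference velocity.
    kz2 = (omega / v_ref) ** 2 - kx ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))            # evanescent part suppressed
    shifted = np.fft.ifft(np.fft.fft(wavefield) * np.exp(1j * kz * dz))
    # Split-step correction in the space domain for lateral velocity variation.
    correction = np.exp(1j * omega * (1.0 / v_slice - 1.0 / v_ref) * dz)
    return shifted * correction

# Tiny usage example: one 15 Hz plane wave stepped through one depth slice.
nx, dx, dz = 256, 10.0, 10.0
omega = 2 * np.pi * 15.0
v = np.full(nx, 2000.0)
v[nx // 2:] = 2500.0                              # laterally varying velocity slice
field = np.ones(nx, dtype=complex)
field = split_step_extrapolate(field, v, v_ref=v.mean(), omega=omega, dx=dx, dz=dz)
print(field[:3])
```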
Abstract:
Stochastic reservoir modeling is a technique used in reservoir description. With this technique, multiple data sources of different scales can be integrated into the reservoir model, and its uncertainty can be conveyed to researchers and supervisors. Because its models are digital, its scales are adjustable, it honors known information and data, and it conveys the uncertainty in its models, stochastic reservoir modeling provides a mathematical framework, or platform, with which researchers can integrate multiple data sources and information of different scales into their prediction models. As a relatively new method, stochastic reservoir modeling is on the upswing. Based on related work and starting with the Markov property in reservoirs, this paper illustrates how to build spatial models for categorical and continuous variables by means of Markov random fields. To explore reservoir properties, researchers must study the properties of the rocks embedded in reservoirs. Apart from laboratory methods, geophysical measurements and their subsequent interpretation are the main sources of the information and data used in petroleum exploration and exploitation. Building a model for flow simulation from incomplete information amounts to predicting the spatial distributions of the different reservoir variables. Considering data sources, digital extent, and methods, reservoir modeling can be divided into four categories: methods based on reservoir sedimentology, reservoir seismic prediction, kriging, and stochastic reservoir modeling. The application of Markov chain models to the analysis of sedimentary strata is introduced in the third part of the paper. The concept of a Markov chain model, the N-step transition probability matrix, the stationary distribution, the estimation of the transition probability matrix, the testing of the Markov property, two ways of organizing sections (based on equal intervals and based on rock facies), the embedded Markov matrix, the semi-Markov chain model, the hidden Markov chain model, etc., are presented in this part. Building on the 1-D Markov chain model, the conditional 1-D Markov chain model is discussed in the fourth part. By extending the 1-D Markov chain model to 2-D and 3-D situations, conditional 2-D and 3-D Markov chain models are presented. This part also discusses the estimation of the vertical transition probability, the lateral transition probability, and the initialization of the top boundary. Corresponding digital models are used to illustrate or verify the related discussions. The fifth part, based on the fourth part and on the application of Markov random fields (MRFs) in image analysis, discusses an MRF-based method for simulating the spatial distribution of categorical reservoir variables. In this part, the probability of a particular categorical variable mass, the definition of an energy function for a categorical variable mass treated as a Markov random field, the Strauss model, and the estimation of the components of the energy function are presented. Corresponding digital models are used to illustrate or verify the related discussions. For the simulation of the spatial distribution of continuous reservoir variables, the sixth part mainly explores two methods. The first is a purely GMRF-based method; the related content includes the Gaussian Markov random field (GMRF) model and its neighborhood, parameter estimation, and an MCMC iteration method, and a numerical example illustrates the method. The second is a two-stage model method.
Based on the results of the categorical-variable distribution simulation, this method, taking a GMRF as the prior distribution for the continuous variables and taking into account the relationship between categorical variables such as rock facies and continuous variables such as porosity, permeability, and fluid saturation, can produce a series of stochastic images of the spatial distribution of the continuous variables. Integrating multiple data sources into the reservoir model is one of the merits of stochastic reservoir modeling. After discussing how to model the spatial distributions of categorical and continuous reservoir variables, the paper explores how to combine conceptual depositional models, well logs, cores, seismic attributes, and production history.
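As a small illustration of the 1-D Markov chain machinery discussed in the third and fourth parts, the following Python sketch estimates a facies transition probability matrix from a vertical section, forms the N-step matrix by matrix powers, and computes the stationary distribution. The facies coding and the example section are hypothetical; this is not the code or data of the paper.

```python
# Illustrative 1-D Markov chain facies model; the section below is made up.
import numpy as np

def estimate_transition_matrix(sequence, n_states):
    """Maximum-likelihood one-step transition matrix from an observed state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(sequence[:-1], sequence[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

def stationary_distribution(P):
    """Left eigenvector of P with eigenvalue 1, normalised to sum to one."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

# Facies coded 0 = shale, 1 = sand, 2 = carbonate (hypothetical vertical section).
section = [0, 0, 1, 1, 1, 2, 0, 1, 1, 2, 2, 0, 0, 1, 2]
P = estimate_transition_matrix(section, n_states=3)
print("one-step transition matrix:\n", P.round(2))
print("3-step transition matrix:\n", np.linalg.matrix_power(P, 3).round(2))
print("stationary distribution:", stationary_distribution(P).round(2))
```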
Abstract:
The Bifurcation Interpreter is a computer program that autonomously explores the steady-state orbits of one-parameter families of periodically-driven oscillators. To report its findings, the Interpreter generates schematic diagrams and English text descriptions similar to those appearing in the science and engineering research literature. Given a system of equations as input, the Interpreter uses symbolic algebra to automatically generate numerical procedures that simulate the system. The Interpreter incorporates knowledge about dynamical systems theory, which it uses to guide the simulations, to interpret the results, and to minimize the effects of numerical error.
Abstract:
We describe a new method for motion estimation and 3D reconstruction from stereo image sequences obtained by a stereo rig moving through a rigid world. We show that, given two stereo pairs, one can compute the motion of the stereo rig directly from the image derivatives (spatial and temporal); correspondences are not required. One can then use the images from both pairs combined to compute a dense depth map. The motion estimates between stereo pairs enable us to combine depth maps from all the pairs in the sequence to form an extended scene reconstruction, and we show results from a real image sequence. The motion computation is a linear least-squares computation using all the pixels in the image. Areas with little or no contrast are implicitly weighted less, so one does not have to apply an explicit confidence measure.
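The following Python sketch illustrates, in a deliberately reduced setting, the kind of direct, correspondence-free least-squares estimate described above: each pixel contributes one brightness-constancy equation built from spatial and temporal image derivatives, and a global motion is solved for in closed form, with low-contrast pixels implicitly down-weighted because their gradients are small. It recovers only a 2-D translation between two single-camera frames, not the full stereo-rig motion of the paper, so treat it as an illustration of the principle under those assumptions.

```python
# Illustration only: a global 2-D translation between two frames, not the
# paper's stereo-rig formulation. Every pixel gives one equation Ix*u + Iy*v = -It.
import numpy as np

def estimate_translation(frame0, frame1):
    """Least-squares global translation (u, v) between two grayscale frames."""
    Iy, Ix = np.gradient(frame0.astype(float))        # spatial derivatives
    It = frame1.astype(float) - frame0.astype(float)  # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)    # one row per pixel
    motion, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return motion

# Synthetic usage: shift a smooth periodic image one pixel to the right.
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
img = np.sin(2 * np.pi * xx / 64) + np.cos(4 * np.pi * yy / 64)
print(estimate_translation(img, np.roll(img, 1, axis=1)))  # roughly (1, 0)
```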
Abstract:
This thesis investigates innovation management and entrepreneurial management. The aim of the research is to explore changes of innovation style in the transformation from a start-up company to a more mature phase of business, and, in a second step, to predict future sustainability and the probability of success. As businesses grow in revenue, corporate size and functional complexity, various triggers, supporters and drivers affect innovation and a company's success. In a comprehensive study, more than 200 innovative and technology-driven companies were examined and compared to identify patterns at different performance levels. All of them were founded under the same formal requirements of the Munich Business Plan Competition, a research approach that allowed a unique snapshot which only long-term studies would otherwise provide. The general objective was to identify the correlation of different factors, and of different dimensions, with the incremental and radical innovations realised. The 12 hypotheses to be tested were derived from a comprehensive literature review. The relevant academic and practitioner literature on entrepreneurial, innovation and knowledge management, as well as social network theory, revealed that the concept of innovation has evolved significantly over the last decade. A review of over 15 innovation models and frameworks helped to clarify what innovation means in context and what its dimensions are. It appears that the complex theories of innovation can be described by the increasing extent of social ingredients in the explanation of innovativeness. Originally based on tangible forms of capital and on the necessity of market pull and technology push, innovation management is today integrated into a larger system. Two research instruments were therefore developed to explore the changes in innovation style. The Innovation Management Audits (IMA Start-up and IMA Mature) provided statements related to product/service development, innovativeness in various typologies, resources for innovation, innovation capabilities in relation to knowledge and management, and social networks, as well as the measurement of outcomes, in order to generate high-quality data for further exploration. In analysing the results, the mature companies were clustered into low, average and high performance levels, while the start-up companies were kept as one cluster. Firstly, the analysis showed that knowledge, the process of acquiring knowledge, inter-organisational networks and resources for innovation are the most important driving factors for innovation and success. Secondly, the actual change of innovation style provides new insights into the importance of focusing on sustaining success and innovation in 16 key areas. Thirdly, a detailed overview of triggers, supporters and drivers for innovation and success in each dimension supports decision makers in steering their company in the right direction. Fourthly, a critical review of contemporary strategic management in light of the findings provides recommendations on how to apply well-known management tools. Last but not least, the Munich cluster is analysed, providing an estimate of the success probability of the different performance clusters and of the start-up companies. For the analysis of the probability of success, the newly developed and statistically and qualitatively validated ICP Model (Innovativeness, Capabilities & Potential) has been applied.
While the model was primarily developed to evaluate the probability of success of companies, it is equally applicable for measuring innovativeness in order to identify the impact of various strategic initiatives within small or large enterprises. The main findings from the model are that competitor and customer orientation and the acquisition of knowledge are important for incremental and radical innovation. Formal and inter-organisational networks are important for fostering innovation, but informal networks appear to be detrimental to innovation. Testing the ICP model over the long term is recommended as one subject of further research; another is to investigate some of the more intangible aspects of innovation management, such as the attitude and motivation of managers.
Abstract:
Q. Shen and R. Jensen, 'Approximation-based feature selection and application for algae population estimation,' Applied Intelligence, vol. 28, no. 2, pp. 167-181, 2008. Sponsorship: EPSRC RONO: EP/E058388/1
Abstract:
Infantolino, B., Gales, D., Winter, S., Challis, J., 'The validity of ultrasound estimation of muscle volumes,' Journal of Applied Biomechanics, ISSN 1065-8483, vol. 23, no. 3, 2007, pp. 213-217. RAE2008
Abstract:
A fundamental task of vision systems is to infer the state of the world given some form of visual observations. From a computational perspective, this often involves facing an ill-posed problem; e.g., information is lost via projection of the 3D world into a 2D image. Solution of an ill-posed problem requires additional information, usually provided as a model of the underlying process. It is important that the model be both computationally feasible and theoretically well-founded. In this thesis, a probabilistic, nonlinear, supervised computational learning model is proposed: the Specialized Mappings Architecture (SMA). The SMA framework is demonstrated in a computer vision system that can estimate the articulated pose parameters of a human body or human hands, given images obtained via one or more uncalibrated cameras. The SMA consists of several specialized forward mapping functions that are estimated automatically from training data, and a possibly known feedback function. Each specialized function maps certain domains of the input space (e.g., image features) onto the output space (e.g., articulated body parameters). A probabilistic model for the architecture is first formalized. Solutions to key algorithmic problems are then derived: simultaneous learning of the specialized domains along with the mapping functions, as well as performing inference given inputs and a feedback function. The SMA employs a variant of the Expectation-Maximization algorithm and approximate inference. The approach allows the use of alternative conditional independence assumptions for learning and inference, which are derived from a forward model and a feedback model. Experimental validation of the proposed approach is conducted on the task of estimating articulated body pose from image silhouettes. Accuracy and stability of the SMA framework are tested using artificial data sets, as well as synthetic and real video sequences of human bodies and hands.
Abstract:
Object detection is challenging when the object class exhibits large within-class variations. In this work, we show that foreground-background classification (detection) and within-class classification of the foreground class (pose estimation) can be jointly learned in a multiplicative form of two kernel functions. One kernel measures similarity for foreground-background classification. The other kernel accounts for latent factors that control within-class variation and implicitly enables feature sharing among foreground training samples. Detector training can be accomplished via standard SVM learning. The resulting detectors are tuned to specific variations in the foreground class. They also serve to evaluate hypotheses of the foreground state. When the foreground parameters are provided in training, the detectors can also produce parameter estimates. When the foreground object masks are provided in training, the detectors can also produce object segmentations. The advantages of our method over past methods are demonstrated on data sets of human hands and vehicles.
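To make the multiplicative-kernel idea concrete, here is a hedged Python sketch in which the Gram matrix fed to a standard SVM is the element-wise product of an appearance kernel and a pose kernel, and the trained detector is then scored against several pose hypotheses. The toy features, the RBF kernels, and the pairing of background samples with sampled pose values are illustrative assumptions, not the training protocol of the paper.

```python
# Sketch of a multiplicative (appearance x pose) kernel trained with a standard SVM.
# Data, kernels, and the background-pose pairing are toy assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Toy data: 2-D "appearance" features x and a 1-D pose angle theta.
n_fg, n_bg = 60, 60
X_fg = rng.normal(loc=1.0, size=(n_fg, 2))
theta_fg = rng.uniform(0.0, np.pi, size=(n_fg, 1))
X_bg = rng.normal(loc=-1.0, size=(n_bg, 2))
theta_bg = rng.uniform(0.0, np.pi, size=(n_bg, 1))   # background paired with sampled poses

X = np.vstack([X_fg, X_bg])
theta = np.vstack([theta_fg, theta_bg])
y = np.hstack([np.ones(n_fg), -np.ones(n_bg)])

def product_gram(Xa, Ta, Xb, Tb):
    """Element-wise product of an appearance RBF kernel and a pose RBF kernel."""
    return rbf_kernel(Xa, Xb, gamma=0.5) * rbf_kernel(Ta, Tb, gamma=2.0)

clf = SVC(kernel="precomputed").fit(product_gram(X, theta, X, theta), y)

# Evaluating pose hypotheses: score one test window against candidate poses;
# with real data the highest-scoring pose would serve as a pose estimate.
x_test = np.array([[1.2, 0.8]])
candidates = np.linspace(0.0, np.pi, 5).reshape(-1, 1)
scores = [clf.decision_function(product_gram(x_test, th.reshape(1, 1), X, theta))[0]
          for th in candidates]
print("best pose hypothesis:", candidates[int(np.argmax(scores))][0])
```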
Abstract:
Many real-world image analysis problems, such as face recognition and hand pose estimation, involve recognizing a large number of classes of objects or shapes. Large margin methods, such as AdaBoost and Support Vector Machines (SVMs), often provide competitive accuracy rates, but at the cost of evaluating a large number of binary classifiers, which makes it difficult to apply such methods when thousands or millions of classes need to be recognized. This thesis proposes a filter-and-refine framework whereby, given a test pattern, a small number of candidate classes is identified efficiently at the filter step, and computationally expensive large margin classifiers are used to evaluate these candidates at the refine step. Two different filtering methods are proposed, ClassMap and OVA-VS (One-vs.-All classification using Vector Search). ClassMap is an embedding-based method that works for both boosted classifiers and SVMs and tends to map patterns and their associated classes close to each other in a vector space. OVA-VS maps OVA classifiers and test patterns to vectors based on the weights and outputs of the weak classifiers of the boosting scheme. At runtime, finding the strongest-responding OVA classifier becomes a classical vector search problem, where well-known methods can be used to gain efficiency. In our experiments, the proposed methods achieve significant speed-ups, in some cases up to two orders of magnitude, compared to exhaustive evaluation of all OVA classifiers. This was achieved in hand pose recognition and face recognition systems where the number of classes ranges from 535 to 48,600.
Abstract:
We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to the huge graphs encountered on the web, in social networks, and in other applications. In this paper we focus on approximate methods for distance estimation, in particular landmark-based distance indexing. This approach selects a subset of nodes as landmarks and computes (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, it can be estimated quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, so heuristic solutions must be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and compared experimentally. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally on five different real-world graphs with millions of edges; for a given accuracy, they require up to 250 times less space than the current approach in the literature, which selects landmarks at random. Finally, we study applications of our method to two problems arising naturally in large-scale networks, namely social search and community detection.
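The following Python sketch shows the landmark scheme described above in its simplest form: BFS distances from a few landmarks are precomputed offline, and a query distance is estimated from the triangle-inequality upper bound. The toy graph and the degree-based notion of "central" landmarks are illustrative choices, not the paper's selection strategies or data.

```python
# Illustrative landmark-based distance estimation on a tiny unweighted graph.
from collections import deque

def bfs_distances(graph, source):
    """Unweighted shortest-path distances from source to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def build_index(graph, n_landmarks=2):
    """Offline step: choose central (here: highest-degree) nodes as landmarks."""
    landmarks = sorted(graph, key=lambda n: len(graph[n]), reverse=True)[:n_landmarks]
    return {l: bfs_distances(graph, l) for l in landmarks}

def estimate_distance(index, u, v):
    """Upper bound on d(u, v): min over landmarks of d(u, l) + d(l, v)."""
    return min(d[u] + d[v] for d in index.values())

graph = {                       # tiny undirected example graph
    "a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b", "e"],
    "d": ["b", "e"], "e": ["c", "d", "f"], "f": ["e"],
}
index = build_index(graph, n_landmarks=2)
print(estimate_distance(index, "a", "f"))   # true distance is 3
```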
Abstract:
A common design of an object recognition system has two steps: a detection step followed by a foreground within-class classification step. For example, consider face detection by a boosted cascade of detectors followed by face ID recognition via one-vs-all (OVA) classifiers. Another example is human detection followed by pose recognition. Although the detection step can be quite fast, the foreground within-class classification process can be slow and becomes a bottleneck. In this work, we formulate a filter-and-refine scheme in which the binary outputs of the weak classifiers in a boosted detector are used to identify a small number of candidate foreground state hypotheses quickly via Hamming distance or weighted Hamming distance. The approach is evaluated in three applications: face recognition on the FRGC V2 data set, hand shape detection and parameter estimation on a hand data set, and vehicle detection and view angle estimation on a multi-view vehicle data set. On all data sets, our approach has comparable accuracy and is at least five times faster than the brute-force approach.
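A hedged Python sketch of the filter step follows: binary weak-classifier outputs are stored for a database of labelled foreground examples, and a query's candidates are ranked by Hamming distance to its own binary output vector, leaving only a handful of hypotheses for an expensive refine step. The threshold-based weak classifiers and random data below are stand-ins for a trained boosted detector, not the systems evaluated in the paper.

```python
# Illustrative filter step: rank stored binary codes by Hamming distance.
import numpy as np

rng = np.random.default_rng(1)

n_weak, n_dims = 32, 16
thresholds = rng.normal(size=n_weak)                # stand-in weak classifiers:
dims = rng.integers(0, n_dims, size=n_weak)         # threshold tests on random dims

def weak_outputs(x):
    """Binary output vector of the weak classifiers for one feature vector x."""
    return (x[dims] > thresholds).astype(np.uint8)

# "Database": stored binary codes for labelled foreground examples (e.g. poses).
X_db = rng.normal(size=(500, n_dims))
labels_db = rng.integers(0, 20, size=500)           # 20 hypothetical pose classes
codes_db = np.array([weak_outputs(x) for x in X_db])

def filter_candidates(x_query, k=10):
    """Return the k candidate indices with smallest Hamming distance to the query."""
    code = weak_outputs(x_query)
    hamming = (codes_db != code).sum(axis=1)
    return np.argsort(hamming)[:k]

# Refine step placeholder: an expensive classifier would rescore only these k
# candidates instead of all 500 database entries.
query = X_db[0] + 0.05 * rng.normal(size=n_dims)
print("candidate labels:", labels_db[filter_candidates(query, k=5)])
```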
Abstract:
Standard structure from motion algorithms recover the 3D structure of points. If a surface representation is desired, for example a piecewise planar representation, a two-step procedure typically follows: in the first step the plane-membership of points is determined manually, and in a subsequent step planes are fitted to the sets of points thus determined and their parameters are recovered. This paper presents an approach for automatically segmenting planar structures from a sequence of images while simultaneously estimating their parameters. In the proposed approach the plane-membership of points is determined automatically, and the planar structure parameters are recovered directly in the algorithm rather than indirectly in a post-processing stage. Simulated and real experimental results show the efficacy of this approach.
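For reference, the conventional plane-fitting step mentioned above can be written in a few lines of Python: given points already assigned to one plane, a total-least-squares fit via the SVD recovers the plane parameters. The synthetic points are an assumption for illustration; the paper's contribution, determining plane membership automatically and estimating the parameters simultaneously, is not attempted here.

```python
# Illustrative total-least-squares plane fit to points assumed to lie on one plane.
import numpy as np

def fit_plane(points):
    """Returns (unit normal n, offset d) such that n . p = d for points p on the plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # direction of least variance
    return normal, normal @ centroid

rng = np.random.default_rng(2)
# Synthetic points near the plane z = 0.2 x - 0.1 y + 3 with small noise.
xy = rng.uniform(-5, 5, size=(200, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 3 + 0.01 * rng.normal(size=200)
pts = np.column_stack([xy, z])
n, d = fit_plane(pts)
print(n / n[2], d / n[2])   # expect roughly (-0.2, 0.1, 1) and 3
```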