826 results for Population set-based methods
Abstract:
We present a fixed-grid finite element technique for fluid-structure interaction problems involving incompressible viscous flows and thin structures. The flow equations are discretised with isoparametric b-spline basis functions defined on a logically Cartesian grid. In addition, the previously proposed subdivision-stabilisation technique is used to ensure inf-sup stability. The beam equations are discretised with b-splines and the shell equations with subdivision basis functions, both leading to a rotation-free formulation. The interface conditions between the fluid and the structure are enforced with the Nitsche technique. The resulting coupled system of equations is solved with a Dirichlet-Robin partitioning scheme, and the fluid equations are solved with a pressure-correction method. Auxiliary techniques employed for improving numerical robustness include the level-set based implicit representation of the structure interface on the fluid grid, a cut-cell integration algorithm based on marching tetrahedra and the conservative data transfer between the fluid and structure discretisations. A number of verification and validation examples, primarily motivated by animal locomotion in air or water, demonstrate the robustness and efficiency of our approach. © 2013 John Wiley & Sons, Ltd.
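For orientation, the Nitsche technique mentioned above weakly enforces interface conditions by adding consistency and penalty terms to the weak form. As a hedged illustration on a scalar diffusion model problem (not the authors' actual fluid-structure coupling terms), imposing u = g on an immersed interface Γ reads:

```latex
\int_{\Omega}\nabla u_h\cdot\nabla v_h \,d\Omega
\;-\;\int_{\Gamma}(\partial_n u_h)\,v_h \,d\Gamma
\;-\;\int_{\Gamma}(\partial_n v_h)\,(u_h-g)\,d\Gamma
\;+\;\frac{\gamma}{h}\int_{\Gamma}(u_h-g)\,v_h \,d\Gamma
\;=\;\int_{\Omega} f\,v_h \,d\Omega
\qquad\forall\, v_h\in V_h ,
```

where γ is a stabilisation parameter that must be chosen large enough and h is the grid size; the second and third integrals restore consistency and symmetry, while the fourth penalises the interface mismatch.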
Abstract:
In this paper we present a robust face location system, based on simulations of human vision, that automatically locates faces in color static images. Our method is divided into four stages. In the first stage, we use a Gaussian low-pass filter to remove fine image detail, which is not used in the initial stage of human vision. During the second and third stages, our technique approximately detects the image regions that may contain faces. During the fourth stage, the existence of faces in the selected regions is verified. By combining the advantages of bottom-up feature-based methods and appearance-based methods, our algorithm performs well on a variety of images, including those with highly complex backgrounds.
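As a rough illustration of the first stage only (the Gaussian low-pass step, not the full four-stage pipeline), one might blur the input image before coarse region detection; the sigma value below is an assumed placeholder, not the paper's setting.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def first_stage_lowpass(image, sigma=2.0):
    """Suppress fine detail before coarse face-region detection.

    image : HxW (grayscale) or HxWx3 (color) array of floats in [0, 1].
    sigma : blur strength; an assumed placeholder, not the paper's value.
    """
    if image.ndim == 3:
        # Blur each color channel independently, leaving channels uncoupled.
        return gaussian_filter(image, sigma=(sigma, sigma, 0))
    return gaussian_filter(image, sigma=sigma)

# Example: blur a random test image.
blurred = first_stage_lowpass(np.random.rand(240, 320, 3))
```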
Abstract:
The six-axis force sensor based on the Stewart platform features a compact structure, high stiffness and a wide measurement range, and has broad application prospects in fields such as industrial robotics and space-station docking. A good calibration method is the basis for the correct use of the sensor. Because the Stewart-platform-based six-axis force sensor is a complex nonlinear system, conventional linear calibration methods inevitably introduce large calibration errors and degrade its performance. Calibration is, in essence, the process of determining the mapping function from the space of measured values to the space of theoretical values. Function approximation theory tells us that when function values are given only on a known set of points, the unknown function can be approximated by simpler functions such as polynomials or piecewise polynomials. Based on this idea, this paper divides the whole measurement space into several contiguous sub-spaces and performs a linear calibration in each sub-space, thereby improving the calibration accuracy of the whole measurement system. Experimental results show that the proposed calibration method is effective.
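A minimal sketch of the idea described above (illustrative only; the partitioning scheme and the affine sensor model are assumptions, not the paper's implementation): split the measurement space into sub-regions and fit a separate linear calibration map in each by least squares.

```python
import numpy as np

def fit_piecewise_calibration(measured, reference, region_ids):
    """Fit one linear map per sub-region of the measurement space.

    measured   : (N, 6) raw sensor readings.
    reference  : (N, 6) known applied wrenches (forces/torques).
    region_ids : (N,) integer label assigning each sample to a sub-region.
    Returns a dict region -> (6, 7) matrix mapping [reading, 1] -> wrench.
    """
    maps = {}
    for r in np.unique(region_ids):
        idx = region_ids == r
        # Augment with a bias column so each region gets an affine fit.
        X = np.hstack([measured[idx], np.ones((idx.sum(), 1))])
        C, *_ = np.linalg.lstsq(X, reference[idx], rcond=None)
        maps[r] = C.T
    return maps

def apply_calibration(maps, reading, region):
    """Calibrate one reading using the map of the sub-region it falls in."""
    x = np.append(reading, 1.0)
    return maps[region] @ x
```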
Abstract:
Half-graben-like depressions contain multiple layers of subtle traps, multiple oil-bearing series and multiple reservoir types, but these reservoirs are strongly concealed and difficult to explore. For this reason, many researchers have studied the pool-forming mechanism of this kind of basin to establish a basis for reservoir exploration and development, but further study is still needed. Taking the HuiMin depression as an example, this paper studies the pool-forming model of the gentle slope belts of fault-depression lake basins. Applying theories, methods and technologies from multiple disciplines, including sedimentary geology, structural geology, log geology, seismic geology, rock mechanics and fluid mechanics, and making maximum use of dynamic and static reservoir data and computer methods, the paper qualitatively and quantitatively studies the depositional system, structural framework, structural evolution, structural lithofacies and tectonic stress field, as well as the fluid potential field, the sealing and opening behaviour of oil-controlling faults and reservoir prediction. It finally presents a pool-forming model and develops a set of methods and technologies suited to reservoir prediction in the gentle slope belt. The results enrich the pool-forming theory of complex oil-gas accumulation areas in the gentle slope belts of continental fault-depression basins. The research begins with the geometric shape of the fracture system, and then investigates the structural form, activity stages and spatio-temporal juxtaposition of faults of different levels and different character. On the basis of the burial history, subsidence history and structural evolution history, the paper synthesizes the depositional-system results, analyses the structural lithofacies of the gentle slope belt in the HuiMin depression and their control on oil reservoirs in the different structural lithofacies belts in time and space, and presents their evolution patterns. The study of the structural stress field and fluid potential field indicates that the stress field changed greatly from the DongYing stage to the present. One notable point is that the DongYing double-peak nose structures were generally favourable directional areas for oil and gas migration, while the QuDi horst became a favourable directional area from the GuanTao stage onwards. Based on the activity patterns of the fractures and on crude-oil saturation-pressure data, the paper demonstrates for the first time that the pool-forming stages of the LingNan field preceded those of the QuDi field, which provides new insight and thinking for hydrocarbon exploration in the gentle slope belt. The BeiQiao-RenFeng buried-hill belt has remained an area of maximum stress values throughout, and is therefore a favourable directional area for oil and gas migration. The opening and sealing behaviour of the fractures is also studied, and the results demonstrate the differences they make to hydrocarbon pool formation. Seal ability relates not only to the character, direction and magnitude of the normal stress, to the interface between the rocks on the two sides of a fault and to the shale smear factor (SSF), but also to the juxtaposition of the fault motion stage and hydrocarbon migration. In the HuiMin gentle slope belt the fault seal differs between stages, and between locations and depths within the same stage; the extent of sealing also varies considerably, so the fault seal shows spatio-temporal differences. On the basis of the fault seal history, together with the results for the structural stress field and fluid potential field, it is found that during pool formation in the study area the present-day fault seal is better than that of the Ed and Ng stages and plays an important role in determining oil column height and hydrocarbon preservation, whereas the fault seal of the Ed and Ng stages had an important influence on the distribution of oil and gas. Because the influencing parameters are complicated and poorly constrained, the SSF is adopted in this work, as it reflects well the combined effect of the parameters that influence fault seal. On the basis of the above studies, three hydrocarbon migration and accumulation systems and a pool-forming model are established for the gentle slope belt of the HuiMin depression, which can be applied to predict the patterns of oil-gas migration. Guided by the pool-forming geological model of the HuiMin slope belt, and using seismic facies technology, log-constrained inversion technology, multi-parameter reservoir pattern recognition and oil-bearing discrimination technology, the paper develops a set of methods and technologies suited to oil reservoir prediction of the gentle slope belt. Good economic benefit has been obtained.
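For reference, the shale smear factor mentioned above is commonly defined as fault throw divided by the thickness of the shale source bed; a minimal sketch follows, with the sealing threshold an assumed illustrative value rather than one taken from this study.

```python
def shale_smear_factor(fault_throw, shale_thicknesses):
    """SSF per shale source bed: fault throw / bed thickness (same units)."""
    return [fault_throw / t for t in shale_thicknesses]

def likely_sealing(ssf_values, threshold=4.0):
    """Smaller SSF means a more continuous smear; the threshold is illustrative."""
    return [s <= threshold for s in ssf_values]

# Example: 60 m of throw against shale beds of 10 m, 25 m and 5 m thickness.
print(likely_sealing(shale_smear_factor(60.0, [10.0, 25.0, 5.0])))
```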
Abstract:
Example-based methods are effective for parameter estimation problems when the underlying system is simple or the dimensionality of the input is low. For complex and high-dimensional problems such as pose estimation, the number of required examples and the computational complexity rapidly become prohibitively high. We introduce a new algorithm that learns a set of hashing functions that efficiently index examples relevant to a particular estimation task. Our algorithm extends a recently developed method for locality-sensitive hashing, which finds approximate neighbors in time sublinear in the number of examples. This method depends critically on the choice of hash functions; we show how to find the set of hash functions that are optimally relevant to a particular estimation problem. Experiments demonstrate that the resulting algorithm, which we call Parameter-Sensitive Hashing, can rapidly and accurately estimate the articulated pose of human figures from a large database of example images.
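A minimal locality-sensitive hashing sketch in the spirit described above (generic random-hyperplane hashes over feature vectors; this is not the paper's learned, parameter-sensitive hash selection):

```python
import numpy as np
from collections import defaultdict

class RandomHyperplaneLSH:
    """Index vectors into hash buckets; near neighbors tend to share buckets."""

    def __init__(self, dim, n_tables=8, bits_per_table=12, seed=0):
        rng = np.random.default_rng(seed)
        # Each table uses its own set of random hyperplanes as hash functions.
        self.planes = rng.standard_normal((n_tables, bits_per_table, dim))
        self.tables = [defaultdict(list) for _ in range(n_tables)]

    def _keys(self, x):
        bits = (self.planes @ x) > 0            # shape (n_tables, bits_per_table)
        return [tuple(row) for row in bits]

    def add(self, x, label):
        for table, key in zip(self.tables, self._keys(x)):
            table[key].append(label)

    def query(self, x):
        """Return candidate labels whose vectors hashed together with the query."""
        out = set()
        for table, key in zip(self.tables, self._keys(x)):
            out.update(table[key])
        return out
```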
Abstract:
M.H. Lee, Model-Based Reasoning: A Principled Approach for Software Engineering, Software - Concepts and Tools, 19(4), pp. 179-189, 2000.
Abstract:
R. Jensen and Q. Shen. Fuzzy-Rough Sets Assisted Attribute Selection. IEEE Transactions on Fuzzy Systems, vol. 15, no. 1, pp. 73-89, 2007.
Abstract:
R. Jensen and Q. Shen. Semantics-Preserving Dimensionality Reduction: Rough and Fuzzy-Rough Based Approaches. IEEE Transactions on Knowledge and Data Engineering, 16(12): 1457-1471. 2004.
Abstract:
R. Jensen and Q. Shen, 'Fuzzy-Rough Attribute Reduction with Application to Web Categorization,' Fuzzy Sets and Systems, vol. 141, no. 3, pp. 469-485, 2004.
Abstract:
Q. Shen and R. Jensen, 'Selecting Informative Features with Fuzzy-Rough Sets and its Application for Complex Systems Monitoring,' Pattern Recognition, vol. 37, no. 7, pp. 1351-1363, 2004.
Abstract:
R. Daly and Q. Shen. Methods to accelerate the learning of Bayesian network structures. Proceedings of the 2007 UK Workshop on Computational Intelligence.
Abstract:
We consider the problem of architecting a reliable content delivery system across an overlay network using TCP connections as the transport primitive. We first argue that natural designs based on store-and-forward principles that tightly couple TCP connections at intermediate end-systems impose fundamental performance limitations, such as dragging down all transfer rates in the system to the rate of the slowest receiver. In contrast, the ROMA architecture we propose incorporates the use of loosely coupled TCP connections together with fast forward error correction techniques to deliver a scalable solution that better accommodates a set of heterogeneous receivers. The methods we develop establish chains of TCP connections, whose expected performance we analyze through equation-based methods. We validate our analytical findings and evaluate the performance of our ROMA architecture using a prototype implementation via extensive Internet experimentation across the PlanetLab distributed testbed.
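As a hedged illustration of equation-based reasoning about chains of TCP connections (using the simple square-root throughput model rather than whatever exact model the paper employs), the rate of a chain of loosely coupled connections is bounded by its slowest hop:

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Simple square-root TCP model: rate ~ MSS / (RTT * sqrt(2p/3)), in bits/s."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(2.0 * loss_rate / 3.0))

def chain_rate_bps(hops):
    """hops: list of (mss_bytes, rtt_s, loss_rate); the chain is limited by its slowest hop."""
    return min(tcp_throughput_bps(*h) for h in hops)

# Example chain: two good hops and one lossy hop that limits the whole chain.
print(chain_rate_bps([(1460, 0.02, 0.001), (1460, 0.05, 0.001), (1460, 0.08, 0.01)]))
```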
Abstract:
The increasing practicality of large-scale flow capture makes it possible to conceive of traffic analysis methods that detect and identify a large and diverse set of anomalies. However, the challenge of effectively analyzing this massive data source for anomaly diagnosis is as yet unmet. We argue that the distributions of packet features (IP addresses and ports) observed in flow traces reveal both the presence and the structure of a wide range of anomalies. Using entropy as a summarization tool, we show that the analysis of feature distributions leads to significant advances on two fronts: (1) it enables highly sensitive detection of a wide range of anomalies, augmenting detections by volume-based methods, and (2) it enables automatic classification of anomalies via unsupervised learning. We show that, using feature distributions, anomalies naturally fall into distinct and meaningful clusters. These clusters can be used to automatically classify anomalies and to uncover new anomaly types. We validate our claims on data from two backbone networks (Abilene and Geant) and conclude that feature distributions show promise as a key element of a fairly general network anomaly diagnosis framework.
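A minimal sketch of the entropy summarization idea (generic, not the paper's exact normalization): compute the empirical entropy of a packet-feature histogram, for example destination ports within a measurement bin.

```python
import math
from collections import Counter

def feature_entropy(values):
    """Empirical entropy (bits) of one feature, e.g. destination ports in a time bin."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A port scan spreads traffic over many ports, raising destination-port entropy.
normal = [80] * 900 + [443] * 90 + [22] * 10
scan = list(range(1024, 2024))
print(feature_entropy(normal), feature_entropy(scan))
```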
Abstract:
The data streaming model provides an attractive framework for one-pass summarization of massive data sets at a single observation point. However, in an environment where multiple data streams arrive at a set of distributed observation points, sketches must be computed remotely and then aggregated through a hierarchy before queries can be answered. As a result, many sketch-based methods for the single-stream case do not apply directly, either because the error introduced becomes large or because the methods assume that the streams are non-overlapping. These limitations hinder the application of these techniques to practical problems in network traffic monitoring and aggregation in sensor networks. To address this, we develop a general framework for evaluating and enabling robust computation of duplicate-sensitive aggregate functions (e.g., SUM and QUANTILE) over data produced by distributed sources. We instantiate our approach by augmenting the Count-Min and Quantile-Digest sketches to apply in this distributed setting, and we analyze their performance. We conclude with an experimental evaluation to validate our analysis.
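For context, below is a minimal single-stream Count-Min sketch with an element-wise merge (the basic structure the paper starts from; the duplicate-sensitive augmentation it proposes is not reproduced here). Python's built-in hash is used only for brevity, so merging is valid within a single process.

```python
import numpy as np

class CountMinSketch:
    def __init__(self, width=2048, depth=4, seed=0):
        self.width, self.depth = width, depth
        rng = np.random.default_rng(seed)
        # Per-row salts give `depth` different hash functions.
        self.salts = rng.integers(1, 2**31 - 1, size=depth)
        self.table = np.zeros((depth, width), dtype=np.int64)

    def _cols(self, item):
        return [hash((int(s), item)) % self.width for s in self.salts]

    def update(self, item, count=1):
        for row, col in enumerate(self._cols(item)):
            self.table[row, col] += count

    def query(self, item):
        """Overestimate of the item's count (never an underestimate)."""
        return min(self.table[row, col] for row, col in enumerate(self._cols(item)))

    def merge(self, other):
        """Combine sketches of disjoint streams built with identical parameters."""
        assert self.table.shape == other.table.shape
        self.table += other.table
```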
Abstract:
In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (which uses a measure of the distance between a point and a convex cone); the third makes use of matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves efficiency.
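As a small illustration of the Pareto machinery discussed above (generic maximization over utility vectors; the trade-off-induced dominance and ϵ-coverings of the thesis are not reproduced), the following sketch filters a set of utility vectors down to its Pareto-maximal elements.

```python
def dominates(u, v):
    """u Pareto-dominates v (maximization): >= in every objective, > in at least one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_maximal(vectors):
    """Return the subset of utility vectors not dominated by any other vector."""
    return [u for u in vectors
            if not any(dominates(v, u) for v in vectors if v is not u)]

# Example with three objectives: the third vector is dominated by the first.
print(pareto_maximal([(3, 2, 5), (1, 4, 4), (2, 2, 4)]))
```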