897 results for Optical pattern recognition -- Mathematical models
Abstract:
Financial modelling in the area of option pricing involves understanding the correlations between asset prices and buy/sell movements in order to reduce investment risk. Such activities depend on financial analysis tools being available to the trader for rapid and systematic evaluation of buy/sell contracts. In turn, these analysis tools rely on fast numerical algorithms for the solution of financial mathematical models. There are many financial activities apart from the buying and selling of shares. The main aim of this chapter is to discuss a distributed algorithm for the numerical valuation of a European option. Both linear and non-linear cases are considered. The algorithm is based on the concept of the Laplace transform and its numerical inverse. The scalability of the algorithm is examined, and numerical tests are used to demonstrate its effectiveness for financial analysis. Time-dependent functions for volatility and interest rates are also discussed, and applications of the algorithm to the non-linear Black-Scholes equation, where the volatility and the interest rate are functions of the option value, are included. Some qualitative results on the convergence behaviour of the algorithm are examined. The chapter also examines the computational issues of the Laplace transform method in terms of distributed computing, and introduces the idea of a two-level temporal mesh to achieve distributed computation along the temporal axis. Finally, the chapter ends with some conclusions.
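As an illustration of the kind of numerical Laplace inversion such an algorithm relies on (the chapter's actual inversion formula and distribution strategy are not reproduced here), the following sketch applies the classical Gaver-Stehfest method to a test transform; each evaluation of the transform F(s) is independent, which is what makes distribution across processors natural.

```python
# Minimal sketch of Gaver-Stehfest numerical Laplace inversion (an assumption:
# the chapter may well use a different inversion formula).  Each F(s)
# evaluation is independent, so they can be computed in parallel.
from math import factorial, log, exp

def stehfest_weights(N):
    """Stehfest coefficients V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)) / (
                factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                * factorial(k - j) * factorial(2 * j - k))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    V = stehfest_weights(N)
    a = log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Test on a transform with a known inverse: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, invert(lambda s: 1.0 / (s + 1.0), t), exp(-t))
```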
Abstract:
Self-compacting concrete (SCC) is generally designed with a relatively higher content of fines, including cement, and a higher dosage of superplasticizer than conventional concrete. Current SCC mix designs lead to high compressive strength, and SCC is already used in special applications where the high cost of materials can be tolerated. Because SCC eliminates the need for vibration, its use increases the speed of casting and thus reduces labour requirements, energy consumption, construction time, and the cost of equipment. To gain maximum benefit from SCC, it has to be used in a wider range of applications, and the cost of materials can be decreased by reducing the cement content and using a minimum amount of admixtures. This paper reviews statistical models obtained from a factorial design carried out to determine the influence of four key parameters on filling ability, passing ability, segregation and compressive strength. These parameters are important for the successful development of medium-strength self-compacting concrete (MS-SCC). The parameters considered in the study were the contents of cement and pulverised fuel ash (PFA), the water-to-powder ratio (W/P), and the dosage of superplasticizer (SP). The responses of the derived statistical models are slump flow, fluidity loss, rheological parameters, Orimet time, V-funnel time, L-box, J-Ring combined with the Orimet, J-Ring combined with the cone, fresh segregation, and compressive strength at 7, 28 and 90 days. The models are valid for mixes made with a W/P ratio of 0.38 to 0.72, a cement content of 60 to 216 kg/m3, 183 to 317 kg/m3 of PFA and 0 to 1% of SP, by mass of powder. The utility of such models for optimizing concrete mixes to achieve a good balance between filling ability, passing ability, segregation, compressive strength, and cost is discussed. Examples highlighting the usefulness of the models are presented using isoresponse surfaces to demonstrate single and coupled effects of mix parameters on slump flow, loss of fluidity, flow resistance, segregation, J-Ring combined with the Orimet, and compressive strength at 7 and 28 days. A cost analysis is carried out to show the trade-offs between the cost of materials, specified consistency levels, and compressive strength at 7 and 28 days, which can be used to identify economical mixes. The paper establishes the usefulness of the mathematical models as a tool to facilitate the test protocol required to optimise medium-strength SCC.
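The statistical models from such a factorial design are typically low-order polynomials fitted to the measured responses. The hedged sketch below fits a main-effects model for a single response using entirely made-up mix data, rather than the paper's derived models, just to show how a response such as slump flow can be regressed on the four factors and then evaluated for candidate mixes or isoresponse surfaces.

```python
# Hedged sketch: fit a simple linear (main-effects) response-surface model for
# one response, e.g. slump flow.  The mix data and responses are hypothetical
# placeholders, and the paper's models also include interaction terms.
import numpy as np

# Columns: cement (kg/m3), PFA (kg/m3), W/P ratio, SP dosage (% of powder mass)
X = np.array([
    [ 60, 183, 0.38, 0.0],
    [216, 183, 0.38, 1.0],
    [ 60, 317, 0.38, 1.0],
    [216, 317, 0.38, 0.0],
    [ 60, 183, 0.72, 1.0],
    [216, 183, 0.72, 0.0],
    [ 60, 317, 0.72, 0.0],
    [216, 317, 0.72, 1.0],
    [138, 250, 0.55, 0.5],          # centre point
])
y = np.array([550, 700, 620, 660, 640, 690, 600, 760, 665.0])  # slump flow, mm (made up)

A = np.column_stack([np.ones(len(X)), X])          # intercept + main effects
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the response for a candidate mix inside the modelled domain;
# evaluating this over a grid of two factors gives an isoresponse surface.
candidate = np.array([1.0, 180, 250, 0.45, 0.6])
print("predicted slump flow (mm):", candidate @ beta)
```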
Abstract:
Recently, several belief negotiation models have been introduced to deal with the problem of belief merging. A negotiation model usually consists of two functions: a negotiation function and a weakening function. The negotiation function chooses the weakest sources, and those sources then weaken their point of view using the weakening function. However, the currently available belief negotiation models are based on classical logic, which makes it difficult to define weakening functions. In this paper, we define a prioritized belief negotiation model in the framework of possibilistic logic. The priority between formulae provides important information for deciding which beliefs should be discarded. The problem of merging uncertain information from different sources is then solved in two steps. First, beliefs in the original knowledge bases are weakened to resolve inconsistencies among them; this step is based on the prioritized belief negotiation model. Second, the knowledge bases obtained in the first step are combined using a conjunctive operator, which may have a reinforcement effect in possibilistic logic.
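As a small numerical illustration of the combination step only (the prioritized negotiation and weakening step is not shown), the sketch below merges two toy possibility distributions with the minimum operator and with the product operator; the product exhibits the reinforcement effect mentioned above.

```python
# Hedged sketch of the combination step only: conjunctive merging of two
# possibility distributions over toy interpretations with made-up degrees.
# The paper's prioritized negotiation/weakening step is not shown.
omega = ["w1", "w2", "w3"]                       # candidate interpretations
pi1 = {"w1": 1.0, "w2": 0.6, "w3": 0.3}          # possibility degrees from source 1
pi2 = {"w1": 1.0, "w2": 0.7, "w3": 0.4}          # possibility degrees from source 2

# min-based combination is idempotent; product-based combination reinforces
# shared doubts, making jointly doubted interpretations even less possible.
pi_min  = {w: min(pi1[w], pi2[w]) for w in omega}
pi_prod = {w: pi1[w] * pi2[w] for w in omega}

for w in omega:
    print(w, "min:", pi_min[w], "product:", round(pi_prod[w], 2))
# w3 drops from 0.3 (min) to 0.12 (product): the reinforcement effect.
```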
Abstract:
A novel methodology is proposed for the development of neural network models for complex engineering systems exhibiting nonlinearity. The method first establishes some fundamental nonlinear functions from a priori engineering knowledge, which are then encoded into appropriate chromosome representations. Given a suitable fitness function, a population of chromosomes is evolved for a certain number of generations using evolutionary approaches such as genetic algorithms, finally producing the neural network model that best fits the system data. The objective is to improve the transparency of the neural networks, i.e. to produce physically meaningful models.
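One simplified reading of such a scheme, with the paper's chromosome encoding and fitness function replaced by illustrative assumptions, is an evolutionary search over which a-priori nonlinear basis functions enter the model, with the output weights fitted by least squares and the fitness penalising both error and model size:

```python
# Hedged sketch: a binary chromosome selects which a-priori nonlinear basis
# functions enter a single-layer model; output weights come from least squares
# and the fitness trades accuracy against complexity.  The target system,
# basis set and evolutionary settings are illustrative assumptions.
import numpy as np
rng = np.random.default_rng(0)

x = np.linspace(-2, 2, 200)
y = np.sin(2 * x) + 0.3 * x**2 + 0.05 * rng.standard_normal(x.size)  # toy system data

basis = [np.sin(2 * x), np.cos(x), x, x**2, x**3, np.tanh(x), np.exp(-x**2)]

def fitness(chrom):
    cols = [np.ones_like(x)] + [b for b, g in zip(basis, chrom) if g]
    A = np.column_stack(cols)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    err = np.mean((A @ w - y) ** 2)
    return err + 0.01 * chrom.sum()               # penalise large models

# Simple evolutionary loop: truncation selection + bit-flip mutation
# (crossover omitted for brevity).
pop = rng.integers(0, 2, size=(30, len(basis)))
for gen in range(40):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[:10]]
    children = parents[rng.integers(0, 10, size=20)].copy()
    mutate = rng.random(children.shape) < 0.1
    children[mutate] ^= 1
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(c) for c in pop])]
print("selected basis functions (transparent model structure):", best)
```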
Abstract:
Face recognition with unknown, partial distortion and occlusion is a practical problem with a wide range of applications, including security and multimedia information retrieval. The authors present a new approach to face recognition subject to unknown, partial distortion and occlusion. The new approach is based on a probabilistic decision-based neural network, enhanced by a statistical method called the posterior union model (PUM). PUM is an approach for ignoring severely mismatched local features and focusing the recognition mainly on the reliable local features, thereby improving robustness while assuming no prior information about the corruption. The new approach is called the posterior union decision-based neural network (PUDBNN). The PUDBNN model has been evaluated on three face image databases (XM2VTS, AT&T and AR) using test images subjected to various types of simulated and realistic partial distortion and occlusion. The new system has been compared with other approaches and has demonstrated improved performance.
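A heavily simplified stand-in for the idea of focusing on reliable local features (not the actual PUDBNN formulation) is to score each local block separately and base the decision on the best-matching blocks only, as in this sketch with toy random "images":

```python
# Hedged sketch: combine per-block similarity scores while ignoring the most
# severely mismatched blocks (e.g. occluded ones).  A simplified stand-in for
# the posterior union model, not the PUDBNN itself.
import numpy as np

def blocks(img, grid=(4, 4)):
    """Split an image into a grid of local blocks (flattened)."""
    h, w = img.shape
    bh, bw = h // grid[0], w // grid[1]
    return [img[i*bh:(i+1)*bh, j*bw:(j+1)*bw].ravel()
            for i in range(grid[0]) for j in range(grid[1])]

def robust_score(probe, gallery_img, keep=10):
    """Average only the best-matching local blocks (the 'reliable' features)."""
    sims = [-np.linalg.norm(p - g) for p, g in zip(blocks(probe), blocks(gallery_img))]
    return np.mean(sorted(sims, reverse=True)[:keep])

def identify(probe, gallery):                    # gallery: {person_id: image}
    return max(gallery, key=lambda pid: robust_score(probe, gallery[pid]))

# Toy usage with random "images"; an occluded probe still matches its identity.
rng = np.random.default_rng(1)
gallery = {pid: rng.random((32, 32)) for pid in ("A", "B", "C")}
probe = gallery["B"].copy()
probe[16:, :] = 0.0                              # simulate occlusion of the lower half
print(identify(probe, gallery))                  # expected: "B"
```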
Abstract:
In this paper, a novel pattern recognition scheme, global harmonic subspace analysis (GHSA), is developed for face recognition. In the proposed scheme, global harmonic features are extracted at the semantic scale to capture the 2-D semantic spatial structures of a face image. Laplacian Eigenmap is applied to discriminate faces in their global harmonic subspace. Experimental results on the Yale and PIE face databases show that the proposed GHSA scheme achieves an improvement in face recognition accuracy when compared with conventional subspace approaches, and a further investigation shows that the proposed GHSA scheme has impressive robustness to noise.
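The Laplacian Eigenmap step can be illustrated on its own (the global harmonic feature extraction is not reproduced; random vectors stand in for the face features): build a neighbourhood graph, form the graph Laplacian, and embed the data with its smallest non-trivial eigenvectors.

```python
# Hedged sketch of the Laplacian Eigenmap embedding step; random vectors stand
# in for the global harmonic face features, which are not reproduced here.
import numpy as np

def laplacian_eigenmap(X, n_neighbors=5, dim=2):
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)       # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):                                         # kNN graph with heat-kernel weights
        nbrs = np.argsort(d2[i])[1:n_neighbors + 1]
        W[i, nbrs] = np.exp(-d2[i, nbrs] / d2[i, nbrs].mean())
    W = np.maximum(W, W.T)                                     # symmetrise
    D = np.diag(W.sum(1))
    L = D - W
    # generalised eigenproblem L v = lambda D v, solved via D^{-1/2} L D^{-1/2}
    Dm = np.diag(1.0 / np.sqrt(np.diag(D)))
    _, vecs = np.linalg.eigh(Dm @ L @ Dm)
    return Dm @ vecs[:, 1:dim + 1]                             # skip the trivial eigenvector

X = np.random.default_rng(0).random((40, 100))                 # stand-in feature vectors
print(laplacian_eigenmap(X).shape)                             # (40, 2)
```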
Abstract:
This study investigates face recognition with partial occlusion, illumination variation and their combination, assuming no prior information about the mismatch and limited training data for each person. The authors extend their previous posterior union model (PUM) to give a new method capable of dealing with all of these problems. PUM is an approach for selecting the optimal local image features for recognition in order to improve robustness to partial occlusion. The extension is in two stages. First, the authors extend PUM from a probability-based formulation to a similarity-based formulation, so that it operates with as few as one training sample while offering robustness to partial occlusion. Second, they extend this new formulation to make it robust to illumination variation, and to combined illumination variation and partial occlusion, through a novel combination of multicondition relighting and optimal feature selection. To evaluate the new methods, a number of databases with various simulated and realistic occlusion/illumination mismatches have been used. The results demonstrate the improved robustness of the new methods.
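One simplified way to picture the combination of multicondition relighting with optimal local-feature selection (again, not the authors' exact formulation) is to synthesise several illumination-adjusted versions of the single training image and score a probe against each condition with an occlusion-robust block similarity, keeping the best condition:

```python
# Hedged sketch: a single training image -> several synthetically "relit"
# templates (gamma adjustments as a crude stand-in for multicondition
# relighting); a probe is scored against every condition using only its
# best-matching local blocks, and the best condition wins.
import numpy as np

def relight(img, gammas=(0.5, 0.75, 1.0, 1.5, 2.0)):
    return [np.clip(img, 1e-6, 1.0) ** g for g in gammas]

def block_sims(a, b, grid=4):
    h, w = a.shape; bh, bw = h // grid, w // grid
    return [-np.linalg.norm(a[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
                            - b[i*bh:(i+1)*bh, j*bw:(j+1)*bw])
            for i in range(grid) for j in range(grid)]

def score(probe, train_img, keep=10):
    """Best relit condition, judged on the most reliable local blocks only."""
    return max(np.mean(sorted(block_sims(probe, t), reverse=True)[:keep])
               for t in relight(train_img))

rng = np.random.default_rng(2)
train = rng.random((32, 32))
probe = np.clip(train, 1e-6, 1.0) ** 1.5          # darker version of the same "face"
probe[:, 16:] = 0.0                               # plus occlusion of the right half
print(score(probe, train) > score(rng.random((32, 32)) ** 1.5, train))  # expected: True
```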
Abstract:
We study the predictability of a theoretical model for earthquakes, using a pattern recognition algorithm similar to the CN and M8 algorithms known in seismology. The model, which is a stochastic spring-block model with both global correlation and local interaction, becomes more predictable as the strength of the global correlation or the local interaction is increased.
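Predictability in such studies is commonly quantified with alarm-based error diagrams: an algorithm raises alarms from sliding-window activity features, and one counts missed large events against the fraction of time covered by alarms. The sketch below does this on a toy synthetic catalogue, not on the spring-block model itself.

```python
# Hedged sketch: evaluate a simple alarm-based predictor on a toy synthetic
# event catalogue (a stand-in for the spring-block model's output).  Large
# events are preceded by elevated small-event activity; an alarm is declared
# when a sliding-window event count exceeds a threshold.
import numpy as np
rng = np.random.default_rng(3)

T = 5000
rate = np.full(T, 0.05)
big_times = np.sort(rng.choice(np.arange(100, T), size=12, replace=False))
for t in big_times:
    rate[t - 60:t] += 0.25                        # precursory activity before big events

events = rng.random(T) < rate                     # small-event occurrences
window = 40
activity = np.convolve(events.astype(float), np.ones(window), mode="same")

def evaluate(threshold, horizon=60):
    alarm = activity > threshold
    hits = sum(alarm[max(0, t - horizon):t].any() for t in big_times)
    return 1 - hits / len(big_times), alarm.mean()   # (miss rate, alarm time fraction)

for thr in (3, 5, 8, 12):
    miss, frac = evaluate(thr)
    print(f"threshold={thr}: miss rate={miss:.2f}, alarm time fraction={frac:.2f}")
```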
Abstract:
Gabor features have been recognized as one of the most successful face representations. Encouraged by the results of this approach, other kinds of facial representations, based on steerable Gaussian first-order kernels and the Harris corner detector, are proposed in this paper. In order to reduce the high-dimensional feature space, PCA and LDA techniques are employed. Once the features have been extracted, the AdaBoost learning algorithm is used to select and combine the most representative features. The experimental results on the XM2VTS database show an encouraging recognition rate, with a significant improvement over face descriptors based only on Gabor filters.
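The overall pipeline (high-dimensional features, then PCA, LDA and AdaBoost) can be sketched generically; the snippet below uses scikit-learn with random stand-in feature vectors rather than Gabor, steerable Gaussian or Harris responses from XM2VTS.

```python
# Hedged sketch of the pipeline: high-dimensional facial features -> PCA ->
# LDA -> AdaBoost.  Random class-structured vectors stand in for the real
# feature responses; XM2VTS images are not used here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, per_subject, dim = 10, 8, 500
centres = rng.normal(size=(n_subjects, dim))
X = np.vstack([c + 0.5 * rng.normal(size=(per_subject, dim)) for c in centres])
y = np.repeat(np.arange(n_subjects), per_subject)

model = make_pipeline(
    PCA(n_components=30),                        # remove most of the raw dimensionality
    LinearDiscriminantAnalysis(n_components=9),  # at most n_classes - 1 discriminant axes
    AdaBoostClassifier(n_estimators=100),        # select/combine the most useful features
)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=4).mean())
```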
Abstract:
We address the problem of non-linearity in 2D shape modelling of a particular articulated object: the human body. This issue is partially resolved by applying a different Point Distribution Model (PDM) depending on the viewpoint. The remaining non-linearity is handled using Gaussian Mixture Models (GMM). A dynamics-based clustering is proposed and carried out in the pose eigenspace. A fundamental question when clustering is how to determine the optimal number of clusters; from our point of view, the main aspect to be evaluated is the mean Gaussianity. This partitioning is then used to fit a GMM to each of the view-based PDMs, derived from a database of silhouettes and skeletons. Dynamic correspondences are then obtained between the Gaussian components of the four mixtures. Finally, we compare this approach with two other methods we previously developed to cope with non-linearity: a Nearest Neighbour (NN) classifier and Independent Component Analysis (ICA).
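Leaving aside the specific mean-Gaussianity criterion, the clustering step can be pictured as fitting Gaussian mixtures with different numbers of components in a (here synthetic) pose eigenspace and keeping the number preferred by a model-selection score; the sketch below uses BIC as a stand-in criterion.

```python
# Hedged sketch: choose the number of GMM clusters in a synthetic 2-D "pose
# eigenspace".  BIC is a stand-in for the paper's mean-Gaussianity criterion;
# the data are not real PDM/pose-eigenspace coordinates.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
centres = np.array([[0, 0], [4, 0], [2, 3]])
Z = np.vstack([c + 0.6 * rng.standard_normal((100, 2)) for c in centres])

scores = {k: GaussianMixture(n_components=k, n_init=3, random_state=0).fit(Z).bic(Z)
          for k in range(1, 7)}
best_k = min(scores, key=scores.get)
print("chosen number of clusters:", best_k)        # expected: 3

gmm = GaussianMixture(n_components=best_k, random_state=0).fit(Z)
# These per-cluster Gaussians would then be put in correspondence across the
# view-based models.
print(np.round(gmm.means_, 2))
```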
Abstract:
In human motion analysis, the joint estimation of appearance, body pose and location parameters is not always tractable due to its huge computational cost. In this paper, we propose a Rao-Blackwellized Particle Filter for addressing the problem of human pose estimation and tracking. The advantage of the proposed approach is that Rao-Blackwellization allows the state variables to be split into two sets, one of which is calculated analytically from the posterior probability of the remaining ones. This procedure reduces the dimensionality of the Particle Filter, thus requiring fewer particles to achieve a similar tracking performance. In this manner, location and size over the image are obtained stochastically using colour and motion cues, whereas body pose is solved analytically by applying learned human Point Distribution Models.
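A minimal, single-step sketch of this split, with all models and data replaced by toy assumptions: particles hypothesise the location, and for each particle the pose coefficients of a learned linear (PDM-like) basis are updated analytically with a Kalman-style Gaussian update, the particle weight coming from the corresponding marginal likelihood.

```python
# Hedged toy sketch of a Rao-Blackwellized update: the nonlinear variables
# (location) are sampled with particles, and for each particle the pose
# coefficients of a learned linear basis are updated analytically.  The basis,
# likelihood and data are illustrative assumptions, not the paper's tracker.
import numpy as np
rng = np.random.default_rng(0)

obs_dim, pose_dim, n_particles = 20, 3, 100
Phi = np.linalg.qr(rng.standard_normal((obs_dim, pose_dim)))[0]   # "learned" PDM basis

true_loc, true_pose = np.array([40.0, 60.0]), np.array([1.0, -0.5, 0.3])

def observe(loc):
    """Silhouette-like measurement cut out at a hypothesised location:
    the further the hypothesis is from the target, the noisier the cut-out."""
    noise = 0.05 + 0.02 * np.linalg.norm(loc - true_loc)
    return Phi @ true_pose + noise * rng.standard_normal(obs_dim), noise ** 2

# particle set over location; per-particle Gaussian (mean, cov) over the pose
locs = true_loc + 8.0 * rng.standard_normal((n_particles, 2))
means = np.zeros((n_particles, pose_dim))
covs = np.tile(np.eye(pose_dim), (n_particles, 1, 1))
logw = np.zeros(n_particles)

for i in range(n_particles):
    y, r = observe(locs[i])
    S = Phi @ covs[i] @ Phi.T + r * np.eye(obs_dim)           # innovation covariance
    K = covs[i] @ Phi.T @ np.linalg.inv(S)                    # Kalman gain
    innov = y - Phi @ means[i]
    means[i] = means[i] + K @ innov                           # analytical pose update
    covs[i] = covs[i] - K @ Phi @ covs[i]
    logw[i] = -0.5 * (innov @ np.linalg.solve(S, innov)       # marginal-likelihood weight
                      + np.linalg.slogdet(S)[1])

w = np.exp(logw - logw.max()); w /= w.sum()
print("location estimate:", (w[:, None] * locs).sum(axis=0))
print("pose estimate:", np.round((w[:, None] * means).sum(axis=0), 2), "true:", true_pose)
```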
Abstract:
In this paper, we present a statistical shape model for human figure segmentation in gait sequences. Point Distribution Models (PDM) generally use Principal Component Analysis (PCA) to describe the main directions of variation in the training set. However, PCA assumes a number of restrictions on the data that do not always hold. In this work, we explore the potential of Independent Component Analysis (ICA) as an alternative shape decomposition for PDM-based human figure segmentation. The resulting shape model enables accurate estimation of human figures despite segmentation errors in the input silhouettes and exhibits good convergence properties.
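The core substitution, ICA in place of PCA inside the linear shape model, can be sketched generically with scikit-learn; the "silhouette" shapes below are synthetic stand-ins rather than gait data.

```python
# Hedged sketch: build a linear shape model of landmark vectors with PCA and,
# alternatively, with ICA (FastICA), then reconstruct a noisy shape by
# projecting onto each subspace.  The shapes are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
n_shapes, n_landmarks = 120, 30
t = np.linspace(0, 2 * np.pi, n_landmarks, endpoint=False)

def make_shape(a, b):                     # ellipse-like landmark vector (x coords, y coords)
    return np.concatenate([(1 + a) * np.cos(t), (1 + b) * np.sin(t)])

params = 0.3 * rng.standard_normal((n_shapes, 2))
X = np.array([make_shape(a, b) for a, b in params])
X += 0.01 * rng.standard_normal(X.shape)  # small landmark noise

pca = PCA(n_components=2).fit(X)
ica = FastICA(n_components=2, random_state=0).fit(X)

noisy = make_shape(0.2, -0.1) + 0.05 * rng.standard_normal(2 * n_landmarks)
for name, model in (("PCA", pca), ("ICA", ica)):
    recon = model.inverse_transform(model.transform(noisy[None, :]))
    print(name, "reconstruction error:", float(np.linalg.norm(recon - noisy)))
```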
Abstract:
In this paper, we show how interacting and occluding targets can be tackled successfully within a Gaussian approximation. For that purpose, we develop a general expansion of the mean and covariance of the posterior and consider a first-order approximation of it. The proposed method differs from the EKF in that neither a non-linear dynamical model nor a non-linear measurement-to-state relation has to be defined, so it works with any kind of interaction potential and likelihood. The approach has been tested on three sequences (of 10400, 2500 and 400 frames, respectively). The results show that our approach helps to reduce the number of failures without greatly increasing the computation time with respect to methods that do not take target interactions into account.
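The first-order expansion itself is not reproduced here; as a generic stand-in, the sketch below forms a Gaussian summary (mean and covariance) of a toy two-target posterior that includes a repulsive interaction potential, via importance sampling and moment matching, simply to make the ingredients concrete: prior, likelihood, interaction potential, and Gaussian approximation.

```python
# Hedged, generic stand-in (not the paper's first-order expansion): form a
# Gaussian summary of a two-target posterior with a repulsive interaction
# potential by importance sampling from the prior and moment matching.
import numpy as np
rng = np.random.default_rng(0)

prior_mean = np.array([0.0, 0.0, 1.0, 0.0])       # joint state (x1, y1, x2, y2)
prior_cov = np.eye(4)
meas = np.array([0.2, 0.1, 0.6, -0.1])            # noisy detections of the two targets
meas_var = 0.3 ** 2

samples = rng.multivariate_normal(prior_mean, prior_cov, size=20000)  # proposal = prior
loglik = -0.5 * ((samples - meas) ** 2).sum(axis=1) / meas_var
d2 = ((samples[:, :2] - samples[:, 2:]) ** 2).sum(axis=1)             # inter-target distance
log_interaction = -5.0 * np.exp(-d2 / 0.25)       # overlapping (occluding) targets penalised

logw = loglik + log_interaction                   # target density / proposal, up to a constant
w = np.exp(logw - logw.max()); w /= w.sum()

mean = w @ samples                                # moment-matched Gaussian approximation
cov = (samples - mean).T @ ((samples - mean) * w[:, None])
print("approximate posterior mean:", np.round(mean, 2))
print("approximate posterior covariance diagonal:", np.round(np.diag(cov), 3))
```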