997 results for Murphy’s combination rule


Relevance: 30.00%

Abstract:

In this paper we present a novel approach to multispectral image contextual classification that combines iterative combinatorial optimization algorithms. The pixel-wise decision rule is defined using a Bayesian approach that combines two MRF models: a Gaussian Markov Random Field (GMRF) for the observations (likelihood) and a Potts model for the a priori knowledge, which regularizes the solution in the presence of noisy data. The classification problem is therefore stated in a Maximum a Posteriori (MAP) framework. To approximate the MAP solution we apply several combinatorial optimization methods with multiple simultaneous initializations, making the solution less sensitive to the initial conditions and reducing both computational cost and time in comparison to Simulated Annealing, which is often unfeasible in many real image processing applications. The Markov Random Field model parameters are estimated by the Maximum Pseudo-Likelihood (MPL) approach, avoiding manual adjustment of the regularization parameters. Asymptotic evaluations assess the accuracy of the proposed parameter estimation procedure. To test and evaluate the proposed classification method, we adopt metrics for quantitative performance assessment (Cohen's Kappa coefficient), allowing a robust and accurate statistical analysis. The obtained results clearly show that combining sub-optimal contextual algorithms significantly improves the classification performance, indicating the effectiveness of the proposed methodology.
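The decision rule described above combines a Gaussian likelihood with a Potts prior and is optimized by sub-optimal combinatorial algorithms. The following minimal sketch (in Python) shows one ICM-style sweep of such a MAP update; it is illustrative only, simplifies the GMRF observation model to a pixel-independent Gaussian, and uses hypothetical class means, variances and regularization weight beta rather than MPL-estimated values.

# Illustrative sketch: one ICM-style sweep for MAP classification with a
# Gaussian data term and a Potts prior. Not the paper's implementation; the
# class means/variances and the weight `beta` are hypothetical inputs.
import numpy as np

def icm_sweep(image, labels, means, variances, beta):
    """One in-place ICM sweep over a 2D image with K classes."""
    H, W = image.shape
    K = len(means)
    for i in range(H):
        for j in range(W):
            best_k, best_energy = labels[i, j], np.inf
            for k in range(K):
                # Gaussian negative log-likelihood (up to a constant).
                data = 0.5 * np.log(variances[k]) + \
                       (image[i, j] - means[k]) ** 2 / (2.0 * variances[k])
                # Potts prior: count 4-neighbours with a different label.
                prior = 0
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                        prior += 1
                energy = data + beta * prior
                if energy < best_energy:
                    best_k, best_energy = k, energy
            labels[i, j] = best_k
    return labels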

Relevance: 30.00%

Abstract:

As people have unique tastes, whether to target a small group of customers or to stay generic and meet most people's preferences has been a long-standing question for fashion designers and website developers alike. This study examined the relationship between individuals' personality differences and their web design preferences. Each individual's personality is represented by a combination of five traits, and 15 website design-related features are considered to test users' preferences. We introduced a data mining technique called targeted positive and negative association rule mining to analyze a dataset containing survey results collected from undergraduate students. The results of this study not only suggest the importance of providing specific designs to attract individual customers, but also provide valuable input on the Big Five personality traits in their entirety.
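To make the targeted positive and negative association rule idea concrete, here is a minimal sketch (in Python) that mines rules for one fixed consequent, such as a preference for a particular design feature. The survey items, thresholds and example data are hypothetical and do not reproduce the study's dataset or its exact algorithm.

# Illustrative sketch: targeted positive/negative association rules for a
# fixed consequent. Item names and thresholds are hypothetical.
from itertools import combinations

def targeted_rules(transactions, target, min_support=0.2, min_confidence=0.6):
    """Mine rules A => target and A => NOT target for a fixed target item."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t if i != target})
    rules = []
    for size in (1, 2):
        for antecedent in combinations(items, size):
            covered = [t for t in transactions if set(antecedent) <= t]
            if len(covered) / n < min_support:
                continue
            conf_pos = sum(target in t for t in covered) / len(covered)  # A => target
            conf_neg = 1.0 - conf_pos                                    # A => NOT target
            if conf_pos >= min_confidence:
                rules.append((antecedent, "target", conf_pos))
            elif conf_neg >= min_confidence:
                rules.append((antecedent, "NOT target", conf_neg))
    return rules

# Hypothetical example: traits per respondent, target = likes design feature "F1".
data = [{"high_openness", "F1"}, {"high_openness", "F1"},
        {"high_neuroticism"}, {"high_neuroticism", "F1"}]
print(targeted_rules(data, "F1"))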

Relevance: 30.00%

Abstract:

Recent work has revealed multiple pathways for cross-orientation suppression in cat and human vision. In particular, ipsiocular and interocular pathways appear to assert their influence before binocular summation in humans but have different (1) spatial tuning, (2) temporal dependencies, and (3) adaptation after-effects. Here we use mask components that fall outside the excitatory passband of the detecting mechanism to investigate the rules for pooling multiple mask components within these pathways. We measured psychophysical contrast masking functions for vertical 1 cycle/deg sine-wave gratings in the presence of left or right oblique (±45 deg) 3 cycles/deg mask gratings with contrast C%, or a plaid made from their sum, where each component (i) had contrast 0.5Ci%. Masks and targets were presented to two eyes (binocular), one eye (monoptic), or different eyes (dichoptic). Binocular masking functions superimposed when plotted against C, but in the monoptic and dichoptic conditions the grating produced slightly more suppression than the plaid when Ci ≥ 16%. We tested contrast gain control models involving two types of contrast combination on the denominator: (1) spatial pooling of the mask after a local nonlinearity (to calculate either root mean square contrast or energy) and (2) "linear suppression" (Holmes & Meese, 2004, Journal of Vision 4, 1080–1089), involving the linear sum of the mask component contrasts. Monoptic and dichoptic masking were typically better fit by the spatial pooling models, but binocular masking was not: it demanded strict linear summation of Michelson contrast across mask orientation. Another scheme, in which suppressive pooling followed compressive contrast responses to the mask components (e.g., oriented cortical cells), was ruled out by all of our data. We conclude that the different processes that underlie monoptic and dichoptic masking use the same type of contrast pooling within their respective suppressive fields, but the effects do not sum to predict the binocular case.
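The two candidate pooling rules on the gain-control denominator can be written down compactly. The sketch below (in Python) contrasts energy-style spatial pooling with the linear suppression rule inside a generic contrast gain-control response; the exponents and constants are hypothetical placeholders, not the fitted model from this study.

# Illustrative sketch: two ways of pooling mask components in the denominator
# of a generic contrast gain-control response. Parameters p, q, Z, w are
# hypothetical, not fitted values.
def response(target_c, mask_components, pooling="linear", p=2.4, q=2.0, Z=1.0, w=0.1):
    if pooling == "energy":
        # Spatial-pooling style: square, sum, then square root the mask
        # component contrasts before they enter the denominator.
        mask_term = sum(c ** 2 for c in mask_components) ** 0.5
    else:
        # "Linear suppression" style: simple linear sum of the Michelson
        # contrasts of the mask components.
        mask_term = sum(mask_components)
    return target_c ** p / (Z + target_c ** q + w * mask_term ** q)

# A plaid made of two 8% components vs a single 16% grating:
print(response(4.0, [8.0, 8.0], pooling="linear"),
      response(4.0, [16.0], pooling="linear"))   # identical under linear pooling
print(response(4.0, [8.0, 8.0], pooling="energy"),
      response(4.0, [16.0], pooling="energy"))   # the single grating suppresses more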

Relevance: 20.00%

Abstract:

Young children engage in a constant process of negotiating and constructing rules, using these rules as cultural resources to manage their social interactions. This paper examines how young children make sense of, and also construct, rules within one early childhood classroom. It draws on a recent study conducted in Australia in which video-recorded episodes of young children’s talk-in-interaction were examined. Analysis revealed four interactional practices that the children used: manipulating materials and places to claim ownership of resources within the play space; developing or using pre-existing rules and social orders to control the interactions of their peers; strategically using language to regulate the actions of those around them; and creating and using membership categories such as ‘car owner’ or ‘team member’ to include or exclude others and to control and participate in the unfolding interaction. While the classroom setting was framed within adult conceptions and regulations, analysis of the children’s interaction demonstrated their co-construction of social order and the imposition of their own forms of rules. Young children negotiated both the adult-constructed social order and their own peer-constructed social order, drawing on rules within both as cultural resources by which they managed their interaction.

Relevance: 20.00%

Abstract:

With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Because a large number of Web services is available, finding an appropriate Web service for a user's requirement is a challenge, which calls for an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods that improve the accuracy of Web service discovery and match the best service. The process of Web service discovery typically suggests many individual services that partially fulfil the user's interest. Considering the semantic relationships of the words used to describe the services, as well as their input and output parameters, can lead to accurate Web service discovery, and appropriate linking of the individually matched services should then fully satisfy the user's requirements. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery.

A novel three-phase Web service discovery methodology is proposed. The first phase performs match-making to find semantically similar Web services for a user query. To perform semantic analysis on the content of the Web service description language documents, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large collection of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to uncover the hidden meaning of query terms that would otherwise not be found. Sometimes a single Web service cannot fully satisfy the user's requirement; in such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking whether multiple Web services can be linked is done in the second phase. Once the feasibility of linking Web services has been checked, the objective is to provide the user with the best composition of Web services. In this link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum traversal cost. The third phase, system integration, integrates the results of the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, an integral part of the system integration phase, makes the final recommendations of individual and composite Web services to the user.

To evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with those of a standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery; the proposed method outperforms both. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 Web services found in phase-I for linking. Empirical results further show that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from the semantic analysis (phase-I) and the link analysis (phase-II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
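To illustrate the link-analysis phase sketched above, the following Python snippet models services as graph nodes with hypothetical linking costs and uses the Floyd-Warshall all-pairs shortest-path algorithm to find the cheapest composition path; the service names, costs and graph are invented for illustration and are not taken from the thesis.

# Illustrative sketch: all-pairs shortest (cheapest) composition paths over a
# graph of services, using Floyd-Warshall. Edge weights are hypothetical
# linking costs between services whose outputs feed the next service's inputs.
INF = float("inf")

def floyd_warshall(nodes, edges):
    """edges: dict {(u, v): cost}. Returns dict of dicts of cheapest path costs."""
    dist = {u: {v: (0 if u == v else edges.get((u, v), INF)) for v in nodes}
            for u in nodes}
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

services = ["getWeather", "geocodeCity", "planTrip"]
links = {("geocodeCity", "getWeather"): 1.0,
         ("getWeather", "planTrip"): 2.0,
         ("geocodeCity", "planTrip"): 5.0}
print(floyd_warshall(services, links)["geocodeCity"]["planTrip"])  # 3.0 via getWeather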

Relevance: 20.00%

Abstract:

For certain continuum problems, it is desirable and beneficial to combine two different methods in order to exploit their advantages while evading their disadvantages. In this paper, a bridging transition algorithm is developed for combining the meshfree method (MM) with the finite element method (FEM). In this coupled method, the meshfree method is used in the sub-domains where high accuracy is required, and the finite element method is employed in the other sub-domains to improve computational efficiency. The MM domain and the FEM domain are connected by a transition (bridging) region. A modified variational formulation and the Lagrange multiplier method are used to ensure the compatibility of displacements and their gradients. To improve the computational efficiency and reduce the meshing cost in the transition region, regularly distributed transition particles, which are independent of both the meshfree nodes and the FE nodes, can be inserted into the transition region. The newly developed coupled method is applied to the stress analysis of 2D solids and structures in order to investigate its performance and study its parameters. Numerical results show that the present coupled method is convergent, accurate and stable. The coupled method has promising potential for practical applications, because it takes advantage of both the meshfree method and the FEM while overcoming their shortcomings.
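The compatibility enforcement described above leads to a saddle-point system once Lagrange multipliers are introduced. The tiny Python sketch below shows that structure on a two-degree-of-freedom toy problem; the matrices are hypothetical placeholders rather than an actual meshfree/FE discretization of the transition region.

# Illustrative sketch: tying two sub-domain stiffness systems together with
# Lagrange multipliers via the saddle-point system [[K, G.T], [G, 0]].
# The stiffness, load and constraint values are hypothetical.
import numpy as np

def solve_coupled(K, f, G, g):
    """Solve [[K, G.T], [G, 0]] [u; lam] = [f; g] for displacements and multipliers."""
    n, m = K.shape[0], G.shape[0]
    A = np.block([[K, G.T], [G, np.zeros((m, m))]])
    b = np.concatenate([f, g])
    x = np.linalg.solve(A, b)
    return x[:n], x[n:]

# Two 1-DOF sub-domains sharing a transition node: enforce u0 - u1 = 0.
K = np.diag([2.0, 3.0])       # block-diagonal stiffness of the two sub-domains
f = np.array([1.0, 0.0])      # external load on the first sub-domain only
G = np.array([[1.0, -1.0]])   # compatibility constraint u0 - u1 = 0
g = np.array([0.0])
u, lam = solve_coupled(K, f, G, g)
print(u, lam)                 # equal displacements; lam is the interface (tie) force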

Relevance: 20.00%

Abstract:

This book analyses and refines the arguments for and against retrospective rule making, concluding that there is one really strong argument against it: the expectation that, if an individual's actions are considered by a future court, the legal consequences of those actions will be determined by the law that was discoverable at the time the actions were performed. This argument, which goes to the heart of the rule of law, is generally determinative. However, in some cases the argument does not run, and this book suggests that, in some areas of law, reliance should be actively discouraged by prospective warnings that the law is subject to change.