41 results for probabilistic graphical model

in Deakin Research Online - Australia


Relevance:

100.00%

Publisher:

Abstract:

We propose a probabilistic movement model for controlling ant-like agents foraging between two points. Such agents are all identical, simple, autonomous, and can only communicate indirectly through the environment. These agents secrete two types of pheromone: one to mark trails towards the goal and another to mark trails back to the starting point. Three pheromone perception strategies are proposed (strategies A, B and C). Agents that use strategy A perceive the desirability of a neighbouring location as the difference between the levels of attractive and repulsive pheromone in that location. With strategy B, agents perceive the desirability of a location as the quotient of the levels of attractive and repulsive pheromone. Agents using strategy C determine the product of the level of attractive pheromone with the complement of the level of repulsive pheromone. We conduct experiments to confirm directionality as an emergent property of trails formed by agents using each strategy. In addition, we compare path formation speed and the quality of the formed path under changes in the environment. We also investigate each strategy's robustness in environments that contain obstacles. Finally, we investigate how adaptive each strategy is when obstacles are eventually removed from the scene, and find that strategy A is the best of the three. This strategy provides useful guidelines for researchers in further applications of swarm intelligence metaphors to complex problem solving.
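
The three perception rules amount to simple arithmetic on local pheromone levels. Below is a minimal Python sketch, assuming pheromone levels normalised to [0, 1]; the function name and the small epsilon guarding division are illustrative assumptions, not the paper's code.

```python
def desirability(attr, rep, strategy, eps=1e-9):
    """Perceived desirability of a neighbouring location given its
    attractive (attr) and repulsive (rep) pheromone levels in [0, 1]."""
    if strategy == "A":   # difference of attractive and repulsive levels
        return attr - rep
    if strategy == "B":   # quotient of attractive to repulsive levels
        return attr / (rep + eps)
    if strategy == "C":   # product with the complement of the repulsive level
        return attr * (1.0 - rep)
    raise ValueError(f"unknown strategy: {strategy}")
```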

Relevance:

90.00%

Publisher:

Abstract:

Software reuse is an important topic due to its potential benefits in increasing product quality and decreasing cost. Although more and more people are aware that nontechnical issues, as well as technical ones, are important to the success of software reuse, it is still not certain which factors have a direct effect on that success. In this paper, we apply a causal discovery algorithm to the software reuse survey data [2]. An ensemble strategy is incorporated to locate a probable causal model structure for software reuse and to find the factors that have a direct effect on the success of reuse. Our discovery results reinforce some conclusions of Morisio et al. and yield some new conclusions which might significantly improve the odds of a reuse project succeeding.
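
The abstract does not spell out the ensemble strategy, so the following is a hedged sketch of one common variant: vote over causal structures learned from bootstrap resamples and keep edges that recur often. `learn_structure` is a hypothetical stand-in for the underlying discovery algorithm.

```python
import random
from collections import Counter

def ensemble_edges(data, learn_structure, n_boot=100, threshold=0.5):
    """Vote over structures learned from bootstrap resamples and keep the
    directed edges appearing in at least `threshold` of them. The voting
    scheme is an illustrative assumption, not the paper's exact strategy."""
    votes = Counter()
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in data]  # bootstrap resample
        votes.update(learn_structure(resample))         # count each learned edge
    return {edge: count / n_boot for edge, count in votes.items()
            if count / n_boot >= threshold}
```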

Relevance:

90.00%

Publisher:

Abstract:

Ranking is an important task for handling a large amount of content. Ideally, training data for supervised ranking would include a complete ranking of the documents (or other objects such as images or videos) for a particular query. However, this is only possible for small sets of documents. In practice, one often resorts to document rating, in which a subset of documents is assigned a small number indicating its degree of relevance. This poses the general problem of modelling and learning rank data with ties. In this paper, we propose a probabilistic generative model that treats the process as permutations over partitions. This results in a super-exponential combinatorial state space with an unknown number of partitions and unknown ordering among them. We approach the problem from discrete choice theory, where subsets are chosen in a stagewise manner, significantly reducing the state space at each stage. Further, we show that with suitable parameterisation we can still learn the models in linear time. We evaluate the proposed models on two application areas: (i) document ranking with data from the recently held Yahoo! challenge, and (ii) collaborative filtering with movie data. The results demonstrate that the models are competitive against well-known rivals.
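
To make the stagewise construction concrete, here is a hedged Python sketch of one common Plackett-Luce-style parameterisation for rankings with ties, in which each tied group is chosen from the remaining items with probability proportional to the sum of its members' exponentiated scores; the paper's exact model may differ, and all names are illustrative.

```python
import math

def ordered_partition_loglik(partition, score):
    """Log-likelihood of an ordered partition (a list of tied groups, best
    first) under a stagewise choice model: each group is drawn from the
    items still remaining with probability proportional to its total
    exponentiated score. Runs in time linear in the number of items."""
    remaining = [x for group in partition for x in group]
    total = sum(math.exp(score[x]) for x in remaining)  # normaliser for stage 1
    loglik = 0.0
    for group in partition:
        chosen = sum(math.exp(score[x]) for x in group)
        loglik += math.log(chosen) - math.log(total)
        total -= chosen  # the chosen group leaves the candidate pool
    return loglik
```

For example, `ordered_partition_loglik([["d1"], ["d2", "d3"]], {"d1": 2.0, "d2": 0.5, "d3": 0.4})` scores a rating in which d1 is preferred and d2 and d3 are tied below it.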

Relevance:

90.00%

Publisher:

Abstract:

Discovering knowledge from unstructured texts is a central theme in data mining and machine learning. We focus on fast discovery of thematic structures from a corpus. Our approach is based on a versatile probabilistic formulation, the restricted Boltzmann machine (RBM), in which the underlying graphical model is an undirected bipartite graph. Inference is efficient: a document representation can be computed with a single matrix projection, making RBMs suitable for the massive text corpora available today. Standard RBMs, however, operate on the bag-of-words assumption, ignoring the inherent relational structures among words. This results in less coherent thematic word groupings. We introduce graph-based regularization schemes that exploit linguistic structures, which in turn can be constructed from either corpus statistics or domain knowledge. We demonstrate that the proposed technique improves group coherence, facilitates visualization, provides a means for estimating intrinsic dimensionality, reduces overfitting, and possibly leads to better classification accuracy.
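
The single-projection representation is just a sigmoid of one matrix product; the graph-based penalty below is an assumed Laplacian smoothness term added for illustration, not necessarily the paper's exact regularizer.

```python
import numpy as np

def rbm_hidden(X, W, b):
    """Document representations in a single matrix projection:
    h = sigmoid(X @ W + b), with X (docs x words) word counts,
    W (words x hidden) weights and b the hidden bias."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def graph_penalty(W, L):
    """Assumed graph-based smoothness term: trace(W^T L W), where L is the
    Laplacian of a word-similarity graph built from corpus statistics or
    domain knowledge. It is small when linked words get similar weights."""
    return np.trace(W.T @ L @ W)
```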

Relevance:

80.00%

Publisher:

Abstract:

A critical question in data mining is whether we can always trust, unconditionally, what is discovered by a data mining system. The answer is obviously no. If not, when can we trust the discovery? What factors affect the reliability of the discovery, and how do they affect it? These are the questions investigated here.

In this paper we first provide a definition and measurements of reliability and analyse the factors that affect it. We then examine the impact of model complexity, weak links, varying sample sizes and the ability of different learners on the reliability of graphical model discovery. The experimental results reveal that (1) the larger the sample size used for discovery, the higher the reliability achieved; (2) the stronger a graph link is, the easier it is to discover and thus the higher the reliability that can be achieved; and (3) the complexity of a graph also plays an important role: the higher the complexity of a graph, the more difficult it is to induce and the lower the resulting reliability.
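
As a hedged illustration of such an experiment, the sketch below estimates reliability as the average fraction of true links recovered at each sample size; `sample_data` and `learn_structure` are hypothetical stand-ins for the data generator and the discovery algorithm.

```python
def edge_recovery_rate(true_edges, learn_structure, sample_data, sizes, trials=20):
    """Reliability proxy: the average fraction of true graph links recovered
    at each sample size. `sample_data(n)` draws n cases from the true model
    and `learn_structure(data)` returns the learned set of links; both are
    hypothetical stand-ins for the experimental setup."""
    rates = {}
    for n in sizes:
        hits = 0
        for _ in range(trials):
            hits += len(true_edges & learn_structure(sample_data(n)))
        rates[n] = hits / (trials * len(true_edges))
    return rates
```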

Relevance:

80.00%

Publisher:

Abstract:

The inherent variability in incoming material and process conditions in sheet metal forming makes quality control and the maintenance of consistency extremely difficult. A single FEM simulation can predict the formability of a given system, but its deterministic numerical nature means it cannot capture the variability of an actual production process. This paper investigates a probabilistic analytical model in which the variation of five input parameters and their relationship to the sensitivity of springback in a stamping process are examined. A range of sheet tensions is investigated, simulating different operating windows in an attempt to highlight robust regions where the distribution of springback is small. A series of FEM simulations was also performed in AutoForm Sigma v4.04 to compare with the findings from the analytical model and to validate its assumptions.

Results show that an increase in sheet tension not only decreases springback but, more importantly, reduces the sensitivity of the process to variation. A relative sensitivity analysis identified the most influential parameters and the changes in sensitivity at various sheet tensions. Variation in the material parameters, yield stress and n-value, was the most influential cause of springback variation, whereas process input parameters such as friction had only a small effect. The probabilistic model presented allows manufacturers to develop a more comprehensive assessment of the success of their forming processes by capturing the effects of inherent variation.
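
A minimal Monte Carlo sketch of this style of probabilistic analysis is shown below, assuming independently normally distributed input parameters; `model` stands in for the analytical springback model, and all names are illustrative.

```python
import numpy as np

def springback_stats(model, means, stds, n=10_000, seed=0):
    """Monte Carlo sketch: perturb the input parameters about their nominal
    values and collect the springback response. `model(params)` stands in
    for the analytical springback model; the independent-normal parameter
    distributions are an assumption made for illustration."""
    rng = np.random.default_rng(seed)
    out = np.empty(n)
    for i in range(n):
        params = {k: rng.normal(means[k], stds[k]) for k in means}
        out[i] = model(params)
    return out.mean(), out.std()  # the spread reflects process sensitivity
```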

Relevance:

80.00%

Publisher:

Abstract:

A critical question in data mining is whether we can always trust, unconditionally, what is discovered by a data mining system. The answer is obviously no. If not, when can we trust the discovery? What factors affect the reliability of the discovery, and how do they affect it? These are the questions investigated here. In this chapter we first provide a definition and measurements of reliability and analyse the factors that affect it. We then examine the impact of model complexity, weak links, varying sample sizes and the ability of different learners on the reliability of graphical model discovery. The experimental results reveal that (1) the larger the sample size used for discovery, the higher the reliability achieved; (2) the stronger a graph link is, the easier it is to discover and thus the higher the reliability that can be achieved; and (3) the complexity of a graph also plays an important role: the higher the complexity of a graph, the more difficult it is to induce and the lower the resulting reliability. We also examine the performance of different discovery algorithms, which reveals the impact of the discovery process itself. The experimental results show the superior reliability and robustness of the MML method over standard significance tests in recovering graph links from small samples and weak links.

Relevance:

40.00%

Publisher:

Abstract:

Reasons for the adoption of smart cards and biometric authentication mechanisms have been discussed in the past, yet many organisations still resort to traditional methods of authentication. Passwords possess several encumbrances, not the least of which is the difficulty some users have in remembering them. Users often inadvertently write difficult passwords down near the workstation, which negates any security password authentication may provide and opens the floodgates to identity theft. In the current mainstream authentication paradigm, system administrators must ensure that all users are educated on the need for a password policy, and must implement it strictly. This paper discusses a conceptual framework for an alternative authentication paradigm. The framework attempts to reduce complexity for the user as well as increase security at the network and application levels.

Relevance:

40.00%

Publisher:

Abstract:

The overarching goal of this dissertation was to evaluate the contextual components of instructional strategies for the acquisition of complex programming concepts. A meta-knowledge processing model is proposed on the basis of the research findings, thereby facilitating the selection of media treatment for electronic courseware. When implemented, this model extends the work of Smith (1998), as a front-end methodology, for his glass-box interpreter called Bradman, for teaching novice programmers. Technology now provides the means to produce individualized instructional packages with relative ease. Multimedia and Web courseware development accentuate a highly graphical (or visual) approach to instructional formats. Typically, little consideration is given to the effectiveness of screen-based visual stimuli, and curiously, students are expected to be visually literate despite the complexity of human-computer interaction; visual literacy is much harder for some people to acquire than for others (see Chapter Four: Conditions-of-the-Learner).

An innovative research programme was devised to investigate the interactive effect of instructional strategies, enhanced with text-plus-textual metaphors or text-plus-graphical metaphors, and cognitive style, on the acquisition of a special category of abstract (process) programming concept. This type of concept was chosen to focus on the role of analogic knowledge involved in computer programming. The results are discussed within the context of the internal/external exchange process, drawing on Ritchey's (1980) concepts of within-item and between-item encoding elaborations. The methodology developed for the doctoral project integrates earlier research knowledge in a novel, interdisciplinary, conceptual framework, including: instructional science in the USA, for the concept learning models; British cognitive psychology and human memory research, for defining the cognitive style construct; and Australian educational research, for the measurement tools for instructional outcomes.

The experimental design consisted of a screening test to determine cognitive style, a pretest to determine prior domain knowledge of abstract programming knowledge elements, the instruction period, and a post-test to measure improved performance. This research design provides a three-level discovery process to articulate:

1) the fusion of strategic knowledge required by the novice learner for dealing with contexts within instructional strategies;
2) the acquisition of knowledge, using measurable instructional outcomes and learner characteristics;
3) knowledge of the innate environmental factors which influence the instructional outcomes.

This research has successfully identified the interactive effect of instructional strategy, within an individual's cognitive style construct, on the acquisition of complex programming concepts. However, the significance of the three-level discovery process lies in the scope of the methodology to inform the design of a meta-knowledge processing model for instructional science. Firstly, the British cognitive style testing procedure is a low-cost, user-friendly computer application that effectively measures an individual's position on the two cognitive style continua (Riding & Cheema, 1991). Secondly, the QUEST Interactive Test Analysis System (Izard, 1995) allows for a probabilistic determination of an individual's knowledge level, relative to other participants and relative to test-item difficulties. Test items can be related to skill levels and consequently can be used by instructional scientists to measure knowledge acquisition. Finally, an effect size analysis (Cohen, 1977) allows for a direct comparison between treatment groups, giving a statistical measurement of how large an effect the independent variables have on the dependent outcomes. Combined with QUEST's hierarchical positioning of participants, this tool can assist in identifying preferred learning conditions for the evaluation of treatment groups.

By combining these three assessment analysis tools in instructional research, a computerized learning shell customised for individuals' cognitive constructs can be created (McKay & Garner, 1999). While this approach has widespread application, individual researchers/trainers would nonetheless need to validate the interactive effects within their specific learning domain through an extensive pilot study programme (McKay, 1999a; McKay, 1999b). Furthermore, the instructional material need not be limited to a textual/graphical comparison; it could be applied to any two or more instructional treatments of any kind, for instance a structured versus an exploratory strategy. The possibilities and combinations are believed to be endless, provided the focus is maintained on linking the front-end identification of cognitive style with an improved performance outcome. My in-depth analysis provides a better understanding of the interactive effects of the cognitive style construct and instructional format on the acquisition of abstract concepts involving spatial relations and logical reasoning. In providing the basis for a meta-knowledge processing model, this research is expected to be of interest to educators, cognitive psychologists, communications engineers and computer scientists specialising in computer-human interactions.
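
As a brief illustration of the effect size analysis mentioned above, the following sketch computes Cohen's d, the difference of group means scaled by the pooled standard deviation; the function and variable names are illustrative.

```python
import statistics

def cohens_d(group_a, group_b):
    """Effect size (Cohen, 1977): the difference of group means divided by
    the pooled sample standard deviation of the two treatment groups."""
    na, nb = len(group_a), len(group_b)
    pooled = (((na - 1) * statistics.stdev(group_a) ** 2 +
               (nb - 1) * statistics.stdev(group_b) ** 2) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled
```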

Relevance:

40.00%

Publisher:

Abstract:

To tackle the problem of the number of state transition parameters growing as the number of sensors increases, we present a probabilistic model together with several parsimonious representations for sensor fusion. These include context-specific independence (CSI), mixtures of smaller multinomials, and softmax function representations to compactly represent the state transitions of a large number of sensors. The model is evaluated on real-world data acquired through ubiquitous sensors in recognizing daily morning activities. The results show that the combination of CSI and mixtures of smaller multinomials achieves comparable performance with far fewer parameters.
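
The parameter savings can be illustrated with rough counts. The sketch below compares a full conditional multinomial against schematic mixture and softmax parameterisations; the counting formulas are simplifying assumptions made for illustration, not the paper's figures.

```python
def transition_parameter_counts(n_sensors, n_states, n_mix):
    """Schematic free-parameter counts for P(state_t | state_{t-1}, sensors)
    with binary sensors. All three formulas are simplifying assumptions."""
    # one multinomial row per (previous state, sensor context) pair
    full = n_states * 2 ** n_sensors * (n_states - 1)
    # n_mix small multinomial tables plus linear gating weights over sensors
    mixture = n_mix * n_states * (n_states - 1) + n_mix * n_sensors
    # softmax logits linear in previous state, sensor readings and a bias
    softmax = n_states * (n_states + n_sensors + 1)
    return {"full": full, "mixture": mixture, "softmax": softmax}

# e.g. 10 binary sensors, 5 states, 3 mixture components:
print(transition_parameter_counts(10, 5, 3))  # the full count dwarfs the others
```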

Relevance:

40.00%

Publisher:

Abstract:

Recognising the behaviours of multiple people, especially high-level behaviours, is an important task in surveillance systems. When a reliable assignment of people to the set of observations is unavailable, this task becomes complicated. To solve it, we present an approach in which a hierarchical hidden Markov model (HHMM) models the behaviour of each person and the joint probabilistic data association filter (JPDAF) handles data association. The main contributions of this paper lie in the integration of multiple HHMMs for recognising high-level behaviours of multiple people and the construction of Rao-Blackwellised particle filters (RBPF) for approximate inference. Preliminary experimental results in a real environment show the robustness of our integrated method in behaviour recognition and its advantage over the Kalman filter in tracking people.
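
Since the abstract only names the components, here is a schematic sketch of how a Rao-Blackwellised particle filter step might combine sampled data associations with exact HHMM belief updates; the structure and both callables are assumptions, not the paper's implementation.

```python
def rbpf_step(particles, observations, sample_assoc, hhmm_update):
    """One schematic Rao-Blackwellised particle filter step. Each particle
    carries a sampled data association and, per person, an exactly updated
    HHMM belief: only the association is sampled, the rest is marginalised
    analytically. `sample_assoc` and `hhmm_update` are assumed callables."""
    updated = []
    for beliefs, weight in particles:
        assoc = sample_assoc(beliefs, observations)       # who produced what
        new_beliefs, lik = hhmm_update(beliefs, observations, assoc)
        updated.append((new_beliefs, weight * lik))       # reweight by likelihood
    total = sum(w for _, w in updated) or 1.0
    return [(b, w / total) for b, w in updated]           # normalise weights
```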