236 results for graph matching algorithms
Abstract:
In this paper, the train scheduling problem is modelled as a blocking parallel-machine job shop scheduling (BPMJSS) problem. In the model, trains, single-track sections and multiple-track sections correspond to jobs, single machines and parallel machines, respectively, and an operation is regarded as the movement/traversal of a train across a section. Due to the lack of buffer space, the real-life case must consider blocking or hold-while-wait constraints, which means that a track section cannot release a train and must hold it until the next section on its route becomes available. Based on the literature review and our analysis, it is very hard to find a feasible complete schedule directly for BPMJSS problems. Firstly, a parallel-machine job-shop-scheduling (PMJSS) problem is solved by an improved shifting bottleneck procedure (SBP) algorithm without considering blocking conditions. Inspired by the proposed SBP algorithm, a feasibility satisfaction procedure (FSP) algorithm is then developed to solve and analyse the BPMJSS problem, using an alternative graph model that extends the classical disjunctive graph model. The proposed algorithms have been implemented and validated using real-world data from Queensland Rail. Sensitivity analysis has been applied by considering train length, upgrading track sections, increasing train speed and changing bottleneck sections. The outcomes show that the proposed methodology would be a very useful tool for real-life train scheduling problems.
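As a rough illustration of the kind of graph model this line of work builds on (a generic disjunctive-graph sketch, not the paper's BPMJSS or alternative-graph formulation), the snippet below treats operations as nodes, fixes one ordering on a shared section, and reads off the completion time with a longest-path pass. Train and section names are made up.

```python
# Minimal sketch of evaluating a fixed schedule on a disjunctive-graph-style structure
# (illustrative only; not the paper's BPMJSS/alternative-graph model). Names are hypothetical.

# Each operation = (train, section) with a traversal time.
proc = {
    ("T1", "S1"): 4, ("T1", "S2"): 3,
    ("T2", "S2"): 5, ("T2", "S3"): 2,
}

# Conjunctive arcs: route order within each train.
# One fixed disjunctive arc: T1 traverses the shared section S2 before T2 does.
arcs = [
    (("T1", "S1"), ("T1", "S2")),
    (("T2", "S2"), ("T2", "S3")),
    (("T1", "S2"), ("T2", "S2")),
]

def makespan(proc, arcs):
    """Longest (critical) path through the acyclic operation graph."""
    succ, indeg = {}, {op: 0 for op in proc}
    for u, v in arcs:
        succ.setdefault(u, []).append(v)
        indeg[v] += 1
    order, stack = [], [op for op, d in indeg.items() if d == 0]
    while stack:                      # Kahn-style topological ordering
        u = stack.pop()
        order.append(u)
        for v in succ.get(u, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    finish = {}
    for op in order:
        start = max((finish[u] for u, v in arcs if v == op), default=0)
        finish[op] = start + proc[op]
    return max(finish.values())

print(makespan(proc, arcs))  # completion time of the last operation (14 here)
```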
Abstract:
Train scheduling is a complex and time-consuming task of vital importance. To schedule trains more accurately and efficiently than permitted by current techniques, a novel hybrid job shop approach has been proposed and implemented. Unique characteristics of train scheduling are first incorporated into a disjunctive graph model of train operations. A constructive algorithm that utilises this model is then developed. The constructive algorithm is a general procedure that builds a schedule using insertion, backtracking and dynamic route selection mechanisms. It provides a significant search capability and is valid for any objective criterion. Simulated Annealing and Local Search meta-heuristic improvement algorithms are also adapted and extended. An important feature of these approaches is a new compound perturbation operator, consisting of many unitary moves, that allows trains to be shifted feasibly and more easily within the solution. A numerical investigation and case study are provided and demonstrate that high-quality solutions are obtainable on real-sized applications.
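The following is a minimal, hedged sketch of a simulated-annealing loop built around a compound perturbation assembled from unitary moves, in the spirit described above; the cost function and move set are placeholders, not the paper's operators or objective.

```python
import math
import random

# Simulated-annealing skeleton with a "compound" perturbation formed from several unitary
# moves (illustrative only). `schedule`, `cost` and `unitary_moves` are placeholders for the
# sequencing state, the objective, and feasibility-preserving moves, respectively.
def anneal(schedule, cost, unitary_moves, iters=10_000, t0=10.0, alpha=0.999):
    current, best = schedule, schedule
    temp = t0
    for _ in range(iters):
        candidate = current
        # Compound move: apply a random sequence of unitary moves.
        for move in random.sample(unitary_moves, k=random.randint(1, len(unitary_moves))):
            candidate = move(candidate)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if cost(current) < cost(best):
                best = current
        temp *= alpha  # geometric cooling
    return best
```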
Abstract:
An issue on generative music in Contemporary Music Review allows space to explore many of these controversies, and to explore the rich algorithmic scene in contemporary practice, as well as the diverse origins and manifestations of such a culture. A roster of interesting exponents from both academic and arts-practice backgrounds is involved, matching the broad spectrum of current work. Contributed articles range from generative algorithms in live systems (from live coding to interactive music systems to computer games), through algorithmic modelling of longer-term form and evolutionary algorithms, to interfaces between modalities and mediums in algorithmic choreography. A retrospective on the intensive experimentation into algorithmic music and sound synthesis at the Institute of Sonology in the 1960s and 70s creates a complementary strand, as does an open paper on the issues raised by open-source, as opposed to proprietary, software and operating systems, with consequences for the creation and archiving of algorithmic work.
Abstract:
In this paper, we classify, review, and experimentally compare major methods that are exploited in the definition, adoption, and utilization of element similarity measures in the context of XML schema matching. We aim to present a unified view that is useful when developing a new element similarity measure, when implementing an XML schema matching component, when using an XML schema matching system, and when comparing XML schema matching systems.
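As a concrete example of the simplest kind of element similarity measure such comparisons cover (a lexical, name-based one; purely illustrative, not a measure from the paper), the sketch below scores two element names by character-trigram overlap, i.e. a Jaccard coefficient over trigram sets.

```python
# Lexical element-name similarity for XML schema matching (illustrative only).
def trigrams(name):
    s = f"##{name.lower()}##"                      # pad so short names still yield trigrams
    return {s[i:i + 3] for i in range(len(s) - 2)}

def name_similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

print(name_similarity("DeliveryAddress", "delivery_addr"))  # hypothetical element names
```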
Abstract:
Acquiring accurate silhouettes has many applications in computer vision. This is usually done through motion detection, or through simple background subtraction under highly controlled environments (e.g. chroma-key backgrounds). Lighting and contrast issues in typical outdoor or office environments make accurate segmentation very difficult in these scenes. In this paper, gradients are used in conjunction with intensity and colour to provide a robust segmentation of motion, after which graph cuts are utilised to refine the segmentation. The results presented using the ETISEO database demonstrate that an improved segmentation is achieved through the combined use of motion detection and graph cuts, particularly in complex scenes.
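A minimal sketch of the graph-cut refinement step, assuming per-pixel foreground likelihoods stand in for the motion/colour/gradient cues (the energy terms, weights and grid here are hypothetical, not the paper's): each pixel gets terminal links to a source and a sink plus smoothness links to its neighbours, and a minimum s-t cut yields the refined labelling.

```python
import networkx as nx

# Graph-cut segmentation sketch on a tiny 2x3 pixel grid (illustrative only).
fg_likelihood = {  # hypothetical per-pixel foreground probabilities
    (0, 0): 0.9, (0, 1): 0.8, (0, 2): 0.2,
    (1, 0): 0.7, (1, 1): 0.3, (1, 2): 0.1,
}
LAMBDA = 0.5  # smoothness weight between neighbouring pixels

G = nx.DiGraph()
for p, prob in fg_likelihood.items():
    G.add_edge("src", p, capacity=prob)         # cost of labelling p as background
    G.add_edge(p, "sink", capacity=1.0 - prob)  # cost of labelling p as foreground
    y, x = p
    for q in [(y + 1, x), (y, x + 1)]:          # 4-neighbour smoothness links
        if q in fg_likelihood:
            G.add_edge(p, q, capacity=LAMBDA)
            G.add_edge(q, p, capacity=LAMBDA)

cut_value, (src_side, sink_side) = nx.minimum_cut(G, "src", "sink")
foreground = sorted(p for p in src_side if p != "src")
print(foreground)  # pixels kept on the source (foreground) side
```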
Abstract:
The research project described in this paper was designed to explore the potential of a wiki to facilitate collaboration and to reduce the isolation of postgraduate students enrolled in a professional doctoral program at a Queensland university. It was also intended to foster a community of practice for reviewing and commenting on one another’s work despite the small number of students and their disparate topics. The students were interviewed and surveyed at the beginning and during the face-to-face sessions of the course and their wikis were examined over the year to monitor, analyse and evaluate the extent to which the agency of the technology (wiki) mediated their development of scholarly skills. The study showed that students paradoxically eschewed use of the structured wiki and formed their own informal networks. This paper will contend that this paradox arose from a mismatch between the agency of technology and its intended purpose.
Abstract:
There has been developing interest in smart grids and the possibility of significantly enhanced performance from remote measurements and intelligent controls. For transmission, the use of PMU signals from remote sites and direct load-shed controls can give significant enhancement for large system disturbances, rather than relying on local measurements and linear controls. This lecture will emphasize what can be found from remote measurements and the mechanisms to get a smarter response to major disturbances. For distribution systems, there has been a significant history in the area of distribution reconfiguration automation. This lecture will emphasize the incorporation of Distributed Generation into distribution networks and the impact on voltage/frequency control and protection. Overall, the performance of both transmission and distribution will be impacted by demand-side management and the capabilities built into the system. In particular, we consider different time scales of load communication and response and look to the benefits for the system, energy and lines.
Abstract:
High-speed broadband internet access is widely recognised as a catalyst for social and economic development, having a significant impact on the global economy. Rural Australia's inherently dispersed population over a large geographical area makes the delivery of efficient, well-maintained and cost-effective internet a challenging task. The novel and highly efficient Multi-User-Single-Antenna for MIMO (MUSA-MIMO) broadband wireless communication technology can effectively be used to deliver wireless broadband access to rural areas. This research aims to develop, for the first time, an efficient and accurate algorithm for the tracking and prediction of Channel State Information (CSI) at the transmitter, by characterising the time-variation effects of the wireless communication channel on the performance of MUSA-MIMO technology, which is particularly suited to rural communities, improving their quality of life and economic prosperity.
Abstract:
Despite all attempts to prevent fraud, it continues to be a major threat to industry and government. To combat fraud, organizations have traditionally focused on prevention rather than detection. In this paper we present a role-mining-inspired approach to represent user behaviour in Enterprise Resource Planning (ERP) systems, primarily aimed at detecting opportunities to commit fraud or potentially suspicious activities. We have adapted an approach which uses set theory to create transaction profiles based on analysis of user activity records. Based on these transaction profiles, we propose a set of (1) anomaly types to detect potentially suspicious user behaviour, and (2) scenarios to identify inadequate segregation of duties in an ERP environment. In addition, we present two algorithms to construct a directed acyclic graph to represent relationships between transaction profiles. Experiments were conducted using a real dataset obtained from a teaching environment and a demonstration dataset, both using SAP R/3, presently the predominant ERP system. The results of this empirical research demonstrate the effectiveness of the proposed approach.
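One plausible (hypothetical) way to relate transaction profiles in a directed acyclic graph is by proper-subset containment followed by transitive reduction; the sketch below illustrates the idea with made-up users and transaction codes, and is not one of the construction algorithms proposed in the paper.

```python
# DAG of transaction profiles via subset containment (illustrative only).
profiles = {  # hypothetical users -> sets of transaction codes
    "clerk":      {"FB60"},
    "accountant": {"FB60", "F110"},
    "manager":    {"FB60", "F110", "XK01"},
}

# Edge a -> b when a's profile is a proper subset of b's.
edges = {
    (a, b)
    for a, sa in profiles.items()
    for b, sb in profiles.items()
    if a != b and sa < sb
}

# Transitive reduction: keep a -> b only if no intermediate c with a -> c -> b.
reduced = {
    (a, b) for (a, b) in edges
    if not any((a, c) in edges and (c, b) in edges for c in profiles)
}
print(sorted(reduced))  # [('accountant', 'manager'), ('clerk', 'accountant')]
```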
Abstract:
ERP systems generally implement controls to prevent certain common kinds of fraud. However, as evidenced by the legal requirement for company audits and the common incidence of fraud, there is also an imperative need to detect more sophisticated patterns of fraudulent activity. This paper describes the design and implementation of a framework for detecting patterns of fraudulent activity in ERP systems. We include the description of six fraud scenarios and the process of specifying and detecting the occurrence of those scenarios in ERP user log data using the prototype software we have developed. The test results for detecting these scenarios in log data have been verified and confirm the success of our approach, which can be generalized to other ERP systems.
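Purely as an illustration of scenario detection over user log data (this is a made-up segregation-of-duties check, not one of the paper's six scenarios), the sketch below flags any user who both creates a vendor and posts a payment to that same vendor.

```python
# Hypothetical fraud-scenario check over ERP activity records (illustrative only).
log = [
    {"user": "u1", "action": "create_vendor", "vendor": "V42"},
    {"user": "u2", "action": "create_vendor", "vendor": "V43"},
    {"user": "u1", "action": "post_payment",  "vendor": "V42"},
    {"user": "u3", "action": "post_payment",  "vendor": "V43"},
]

created = {(r["user"], r["vendor"]) for r in log if r["action"] == "create_vendor"}
paid    = {(r["user"], r["vendor"]) for r in log if r["action"] == "post_payment"}

violations = created & paid  # same user created the vendor and paid it
print(violations)            # {('u1', 'V42')}
```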
Abstract:
This thesis addresses computational challenges arising from Bayesian analysis of complex real-world problems. Many of the models and algorithms designed for such analysis are 'hybrid' in nature, in that they are a composition of components for which their individual properties may be easily described but the performance of the model or algorithm as a whole is less well understood. The aim of this research project is to offer a better understanding of the performance of hybrid models and algorithms. The goal of this thesis is to analyse the computational aspects of hybrid models and hybrid algorithms in the Bayesian context.

The first objective of the research focuses on computational aspects of hybrid models, notably a continuous finite mixture of t-distributions. In the mixture model, an inference of interest is the number of components, as this may relate to both the quality of model fit to data and the computational workload. The analysis of t-mixtures using Markov chain Monte Carlo (MCMC) is described and the model is compared to the Normal case based on the goodness of fit. Through simulation studies, it is demonstrated that the t-mixture model can be more flexible and more parsimonious in terms of the number of components, particularly for skewed and heavy-tailed data. The study also reveals important computational issues associated with the use of t-mixtures, which have not been adequately considered in the literature.

The second objective of the research focuses on computational aspects of hybrid algorithms for Bayesian analysis. Two approaches are considered: a formal comparison of the performance of a range of hybrid algorithms, and a theoretical investigation of the performance of one of these algorithms in high dimensions. For the first approach, the delayed rejection algorithm, the pinball sampler, the Metropolis adjusted Langevin algorithm, and the hybrid version of the population Monte Carlo (PMC) algorithm are selected as a set of examples of hybrid algorithms. The statistical literature shows how statistical efficiency is often the only criterion for an efficient algorithm. In this thesis the algorithms are also considered and compared from a more practical perspective. This extends to the study of how individual components contribute to the overall efficiency of hybrid algorithms, and highlights weaknesses that may be introduced by the process of combining these components in a single algorithm.

The second approach to considering computational aspects of hybrid algorithms involves an investigation of the performance of the PMC in high dimensions. It is well known that as a model becomes more complex, computation may become increasingly difficult in real time. In particular, importance-sampling-based algorithms, including the PMC, are known to be unstable in high dimensions. This thesis examines the PMC algorithm in a simplified setting, a single step of the general sampler, and explores a fundamental problem that occurs in applying importance sampling to a high-dimensional problem. The precision of the computed estimate from the simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. Additionally, the exponential growth of the asymptotic variance with the dimension is demonstrated, and it is illustrated that the optimal covariance matrix for the importance function can be estimated in a special case.
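The degradation of importance sampling in high dimensions mentioned above is easy to see numerically. The sketch below (a generic illustration, not the thesis' analysis) targets N(0, I_d) with an N(0, sigma^2 I_d) proposal and shows the effective sample size of the normalised weights collapsing as the dimension grows.

```python
import numpy as np

# Importance sampling with target N(0, I_d) and proposal N(0, sigma^2 I_d), sigma > 1
# (illustrative only). The importance weights become increasingly degenerate with d.
rng = np.random.default_rng(0)
sigma, n = 1.5, 100_000

for d in (1, 5, 20, 50):
    x = rng.normal(scale=sigma, size=(n, d))           # draws from the proposal
    # log p(x) - log q(x) for the two Gaussians above:
    log_w = -0.5 * np.sum(x**2, axis=1) * (1 - 1 / sigma**2) + d * np.log(sigma)
    w = np.exp(log_w - log_w.max())                     # stabilised, unnormalised weights
    ess = w.sum() ** 2 / np.sum(w**2)                   # effective sample size
    print(f"d={d:3d}  ESS={ess:10.1f} of {n}")
```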
Abstract:
Wide-angle images exhibit significant distortion for which existing scale-space detectors such as the scale-invariant feature transform (SIFT) are inappropriate. The required scale-space images for feature detection are correctly obtained through the convolution of the image, mapped to the sphere, with the spherical Gaussian. A new visual key-point detector, based on this principle, is developed and several computational approaches to the convolution are investigated in both the spatial and frequency domain. In particular, a close approximation is developed that has comparable computation time to conventional SIFT but with improved matching performance. Results are presented for monocular wide-angle outdoor image sequences obtained using fisheye and equiangular catadioptric cameras. We evaluate the overall matching performance (recall versus 1-precision) of these methods compared to conventional SIFT. We also demonstrate the use of the technique for variable frame-rate visual odometry and its application to place recognition.
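For reference, recall versus 1-precision is simple bookkeeping over match counts; the sketch below uses invented numbers, not the paper's results: recall is the fraction of ground-truth correspondences recovered, and 1-precision is the fraction of reported matches that are false.

```python
# Recall / 1-precision bookkeeping for matching evaluation (illustrative numbers only).
def recall_1_precision(correct_matches, false_matches, total_correspondences):
    recall = correct_matches / total_correspondences
    one_minus_precision = false_matches / (correct_matches + false_matches)
    return recall, one_minus_precision

print(recall_1_precision(correct_matches=180, false_matches=20, total_correspondences=300))
# (0.6, 0.1)
```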