189 results for trunkpacking, recursive enumeration, graph algorithms, graph simplification
Abstract:
The mining environment, being complex, irregular and time varying, presents a challenging prospect for stereo vision. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This paper evaluates a number of matching techniques for possible use in a stereo vision sensor for mining automation applications. Area-based techniques have been investigated because they have the potential to yield dense maps, are amenable to fast hardware implementation, and are suited to textured scenes. In addition, two non-parametric transforms, namely, the rank and census, have been investigated. Matching algorithms using these transforms were found to have a number of clear advantages, including reliability in the presence of radiometric distortion, low computational complexity, and amenability to hardware implementation.
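As an illustration of the area-based, non-parametric matching the abstract refers to, here is a minimal Python sketch of the census transform followed by Hamming-distance matching along a scanline; the window size, disparity range and wrap-around borders are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def census_transform(img, w=7):
    """Census transform: each pixel becomes a bit string recording which
    neighbours in a w x w window are darker than the centre pixel.
    Borders wrap around here purely to keep the sketch short."""
    r = w // 2
    code = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            code = (code << np.uint64(1)) | (neighbour < img).astype(np.uint64)
    return code

def best_disparity(census_left, census_right, y, x, max_disp=64):
    """Choose the disparity whose census codes differ in the fewest bits
    (smallest Hamming distance)."""
    costs = [bin(int(census_left[y, x]) ^ int(census_right[y, x - d])).count("1")
             for d in range(min(max_disp, x) + 1)]
    return int(np.argmin(costs))
```

Because the census code only encodes intensity orderings, the Hamming cost is unaffected by monotonic radiometric distortion, which is the property the abstract highlights.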
Abstract:
A fundamental problem faced by stereo matching algorithms is the matching or correspondence problem. A wide range of algorithms have been proposed for the correspondence problem. For all matching algorithms, it would be useful to be able to compute a measure of the probability of correctness, or reliability, of a match. This paper focuses in particular on one class of matching algorithms, those based on the rank transform. The interest in these algorithms for stereo matching stems from their invariance to radiometric distortion and their amenability to fast hardware implementation. This work differs from previous work in that it derives, from first principles, an expression for the probability of a correct match. This method is based on an enumeration of all possible symbols for matching. The theoretical results for disparity error prediction obtained using this method were found to agree well with experimental results. However, disadvantages of the technique developed here are that it is not easily applicable to real images and that it is too computationally expensive for practical window sizes. Nevertheless, the exercise provides an interesting and novel analysis of match reliability.
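For readers unfamiliar with the transform being analysed, a small sketch of the rank transform itself follows (matching would then compare ranks, e.g. by sum of absolute differences); the window size is an assumed value, and the enumeration-based reliability analysis described above is not reproduced here.

```python
import numpy as np

def rank_transform(img, w=5):
    """Rank transform: replace each pixel by the number of pixels in its
    w x w neighbourhood whose intensity is less than the centre pixel.
    Because only orderings are kept, matching on ranks is invariant to
    monotonic radiometric distortion."""
    r = w // 2
    h, wd = img.shape
    out = np.zeros((h, wd), dtype=np.uint16)
    for y in range(r, h - r):
        for x in range(r, wd - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.count_nonzero(patch < img[y, x])
    return out
```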
Abstract:
Deciding the appropriate population size and number of islands for distributed island-model genetic algorithms is often critical to the algorithm's success. This paper outlines a method that automatically searches for good combinations of island population sizes and the number of islands. The method is based on a race between competing parameter sets, and collaborative seeding of new parameter sets. This method is applicable to any problem, and makes distributed genetic algorithms easier to use by reducing the number of user-set parameters. The experimental results show that the proposed method robustly and reliably finds population and island settings that are comparable to those found with traditional trial-and-error approaches.
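A hedged sketch of the racing idea: candidate (population size, number of islands) settings are run for a short budget and the weaker half is discarded each round. The fitness interface, budgets and elimination rule here are placeholder assumptions, not the paper's actual procedure, and the collaborative seeding step is omitted.

```python
def race_parameter_sets(candidates, run_ga, rounds=5, keep_fraction=0.5):
    """Race competing (population_size, n_islands) settings: after each short
    run the worst-performing configurations are dropped and the survivors
    continue. `run_ga(pop_size, n_islands, budget)` is assumed to return the
    best fitness found within the given evaluation budget (higher is better)."""
    survivors = list(candidates)
    for _ in range(rounds):
        scored = sorted(((run_ga(p, k, budget=1000), (p, k)) for p, k in survivors),
                        reverse=True)
        keep = max(1, int(len(scored) * keep_fraction))
        survivors = [cfg for _, cfg in scored[:keep]]
        if len(survivors) == 1:
            break
    return survivors[0]

# Example seed grid of settings to race against each other.
candidates = [(pop, islands) for pop in (50, 100, 200) for islands in (2, 4, 8)]
```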
Abstract:
Distributed Genetic Algorithms (DGAs) designed for the Internet have to take its high communication cost into consideration. For island model GAs, the migration topology has a major impact on DGA performance. This paper describes and evaluates an adaptive migration topology optimizer that keeps the communication load low while maintaining high solution quality. Experiments on benchmark problems show that the optimized topology outperforms static or random topologies of the same degree of connectivity. The applicability of the method on real-world problems is demonstrated on a hard optimization problem in VLSI design.
Abstract:
Linear adaptive channel equalization using the least mean square (LMS) algorithm and the recursive least-squares (RLS) algorithm for an innovative multi-user (MU) MIMO-OFDM wireless broadband communications system is proposed. The proposed equalization method adaptively compensates for the channel impairments caused by frequency selectivity in the propagation environment. Simulations for the proposed adaptive equalizer are conducted using a training sequence method to determine optimal performance through a comparative analysis. Results show an improvement of 0.15 in BER (at an SNR of 16 dB) when using adaptive equalization with the RLS algorithm compared to the case in which no equalization is employed. In general, adaptive equalization using the LMS and RLS algorithms was shown to be significantly beneficial for MU-MIMO-OFDM systems.
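To make the equalizer training concrete, here is a minimal complex-valued LMS sketch (the RLS variant and the MU-MIMO-OFDM specifics are not shown); the tap count and step size are assumed values.

```python
import numpy as np

def lms_equalize(received, training, n_taps=11, mu=0.01):
    """Train a linear FIR equalizer with the LMS rule: for each known
    training symbol, form the equalizer output from the current taps,
    compute the error, and nudge the taps along the negative gradient."""
    w = np.zeros(n_taps, dtype=complex)
    for n in range(n_taps, len(training)):
        x = received[n - n_taps:n][::-1]   # most recent sample first
        y = np.vdot(w, x)                  # w^H x, the equalized output
        e = training[n] - y                # error against the known symbol
        w += mu * np.conj(e) * x           # LMS tap update
    return w
```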
Abstract:
In the title salt, C12H11N2O2+·C7H5O6S-, the dihedral angle between the benzene and pyridine rings in the 4-(4-nitrobenzyl)pyridinium cation is 82.7 (2)°. Within the anion there is an intramolecular hydroxy O-H⋯O(carboxylic acid) hydrogen bond. In the crystal, the cation forms a single N+-H⋯O(sulfonate) hydrogen bond with the anion. These cation-anion pairs interact through duplex anion carboxylic acid O-H⋯O(sulfonate) hydrogen bonds, giving a centrosymmetric cyclic association [graph set R22(16)]. The crystals studied were non-merohedrally twinned.
Abstract:
In the asymmetric unit of the title co-crystal, C12H14N4O2S·C7H5NO4, the sulfamethazine and 2-nitrobenzoic acid molecules form a heterodimer through intermolecular amide-carboxylic acid N-H⋯O and carboxylic acid-pyrimidine O-H⋯N hydrogen-bond pairs, giving a cyclic motif [graph set R22(8)]. The dihedral angle between the two aromatic ring systems in the sulfamethazine molecule is 88.96 (18)° and the nitro group of the acid is 50% rotationally disordered. Secondary aniline N-H⋯O(sulfone) hydrogen-bonding associations give a two-dimensional structure lying parallel to the ab plane.
Abstract:
Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation through repeated sampling of data from the model and comparing observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics to use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments, that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way. If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design to accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to lose the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs the motor unit effectively 'dies'. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually only surviving a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists.
Motor unit number estimation (MUNE) is an attempt to directly assess underlying motor unit loss, rather than relying on indirect techniques such as muscle strength assessment, which are generally unable to detect progression due to the body's natural attempts at compensation. Part III of this thesis builds upon a previous Bayesian technique, which develops a sophisticated statistical model that takes into account physiological information about motor unit activation and various sources of uncertainty. More specifically, we develop a more reliable MUNE method by applying marginalisation over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We make other subtle changes to the model and algorithm to improve the robustness of the approach.
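As background for the likelihood-free methods in Part I, the following is a minimal ABC rejection sampler, the simplest precursor of the SMC-based ABC algorithms developed in the thesis; the prior, simulator, summary statistic and tolerance below are illustrative assumptions only.

```python
import numpy as np

def abc_rejection(observed_summary, sample_prior, simulate, summarise,
                  tolerance, n_accept=1000):
    """Likelihood-free rejection sampling: draw a parameter from the prior,
    simulate data from the model, and keep the parameter only if the
    simulated summary statistic is within `tolerance` of the observed one."""
    accepted = []
    while len(accepted) < n_accept:
        theta = sample_prior()
        sim_data = simulate(theta)
        if abs(summarise(sim_data) - observed_summary) <= tolerance:
            accepted.append(theta)
    return np.array(accepted)

# Toy usage: infer the mean of a normal model with known variance.
rng = np.random.default_rng(0)
obs = rng.normal(2.0, 1.0, size=100)
posterior = abc_rejection(
    observed_summary=obs.mean(),
    sample_prior=lambda: rng.normal(0.0, 5.0),
    simulate=lambda th: rng.normal(th, 1.0, size=100),
    summarise=lambda d: d.mean(),
    tolerance=0.1,
)
```

SMC-ABC improves on this sketch by propagating a population of accepted parameters through a sequence of decreasing tolerances, which is why far fewer model simulations are wasted.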
Abstract:
Due to the increased complexity, scale, and functionality of information and telecommunication (IT) infrastructures, new exploits and vulnerabilities are discovered every day. These vulnerabilities are most often exploited by malicious actors to penetrate IT infrastructures, mainly to disrupt business or steal intellectual property. Recent incidents show that it is no longer sufficient to perform manual security tests of the IT infrastructure based on sporadic security audits. Instead, networks should be tested continuously against possible attacks. In this paper we present current results and challenges towards realizing automated and scalable solutions to identify possible attack scenarios in an IT infrastructure. Namely, we define an extensible framework which uses public vulnerability databases to identify probable multi-step attacks in an IT infrastructure, and provides recommendations in the form of patching strategies, topology changes, and configuration updates.
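To illustrate the kind of multi-step attack enumeration such a framework performs, here is a schematic sketch over a toy data model (hosts with reachable neighbours and exploitable vulnerabilities); the schema, field names and vulnerability identifiers are illustrative assumptions, not the framework's actual interface.

```python
from collections import deque

def multi_step_attack_paths(reachable, exploitable, start, target):
    """Breadth-first enumeration of attack paths: from a compromised start
    host an attacker can move to any reachable neighbour that exposes an
    exploitable vulnerability. `reachable` maps host -> set of neighbours,
    `exploitable` maps host -> set of vulnerability identifiers."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        host = path[-1]
        if host == target:
            paths.append(path)
            continue
        for nxt in reachable.get(host, ()):
            if nxt not in path and exploitable.get(nxt):
                queue.append(path + [nxt])
    return paths

# Toy topology: a web server reachable from the internet, a database behind it.
reachable = {"internet": {"web"}, "web": {"db"}, "db": set()}
exploitable = {"web": {"CVE-A"}, "db": {"CVE-B"}}
print(multi_step_attack_paths(reachable, exploitable, "internet", "db"))
# -> [['internet', 'web', 'db']]
```

Patching recommendations then amount to choosing vulnerabilities whose removal cuts all enumerated paths to the critical targets.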
Abstract:
Modelling video sequences by subspaces has recently shown promise for recognising human actions. Subspaces are able to accommodate the effects of various image variations and can capture the dynamic properties of actions. Subspaces form a non-Euclidean, curved Riemannian manifold known as a Grassmann manifold. Inference on manifolds is usually achieved by embedding them in higher-dimensional Euclidean spaces. In this paper, we instead propose to embed the Grassmann manifolds into reproducing kernel Hilbert spaces and then tackle the problem of discriminant analysis on such manifolds. To achieve efficient machinery, we propose graph-based local discriminant analysis that utilises within-class and between-class similarity graphs to characterise intra-class compactness and inter-class separability, respectively. Experiments on the KTH, UCF Sports, and Ballet datasets show that the proposed approach obtains marked improvements in discrimination accuracy in comparison to several state-of-the-art methods, such as the kernel version of the affine hull image-set distance, tensor canonical correlation analysis, spatial-temporal words and the hierarchy of discriminative space-time neighbourhood features.
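A compact sketch of one standard way to carry out the embedding described above: represent each image set by an orthonormal subspace basis (a point on the Grassmann manifold) and compare points with the projection kernel, which is positive definite and therefore induces an RKHS. Whether this is the exact kernel used in the paper is an assumption.

```python
import numpy as np

def subspace_basis(frames, dim=5):
    """Represent an image set / video clip by an orthonormal basis of a
    low-dimensional subspace (a Grassmann point), obtained from the thin
    SVD of the stacked, vectorised frames."""
    X = np.stack([f.ravel() for f in frames], axis=1).astype(float)
    X -= X.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :dim]

def projection_kernel(A, B):
    """Projection kernel between subspaces with orthonormal bases A, B:
    k(A, B) = ||A^T B||_F^2. Kernel discriminant analysis (here, the
    graph-based local variant) can then operate on the resulting Gram matrix."""
    return np.linalg.norm(A.T @ B, "fro") ** 2
```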
Abstract:
We consider a Cooperative Intrusion Detection System (CIDS), a distributed AIS-based (Artificial Immune System) IDS where nodes collaborate over a peer-to-peer overlay network. The AIS uses the negative selection algorithm for the selection of detectors (e.g., vectors of features such as CPU utilization, memory usage and network activity). For better detection performance, selecting all possible detectors for a node is desirable, but it may not be feasible due to storage and computational overheads. Limiting the number of detectors, on the other hand, comes with the danger of missing attacks. We present a scheme for the controlled and decentralized division of detector sets where each IDS is assigned to a region of the feature space. We investigate the trade-off between scalability and robustness of detector sets. We address the problem of self-organization in CIDS so that each node generates a distinct set of detectors to maximize the coverage of the feature space, while pairs of nodes exchange their detector sets to provide a controlled level of redundancy. Our contribution is twofold. First, we use deterministic techniques from combinatorial design theory and graph theory, based on Symmetric Balanced Incomplete Block Designs, Generalized Quadrangles and Ramanujan Expander Graphs, to decide how many and which detectors are exchanged between which pairs of IDS nodes. Second, we use a classical epidemic model (the SIR model) to show how properties of these deterministic techniques can help us reduce the attack spread rate.
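A toy sketch of the negative selection step mentioned above: random candidate detectors are kept only if they do not match normal "self" behaviour. The feature dimension, matching radius and detector count are placeholder assumptions, and the combinatorial-design exchange scheme is not shown.

```python
import numpy as np

def negative_selection(self_samples, n_detectors=100, radius=0.1, dim=3, seed=0):
    """Generate candidate detectors uniformly in the normalised feature space
    (e.g. CPU utilization, memory usage, network activity) and keep only those
    farther than `radius` from every self sample, so they fire on anomalies only.
    `self_samples` is an (n, dim) array of normal behaviour vectors."""
    rng = np.random.default_rng(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = rng.uniform(0.0, 1.0, size=dim)
        if np.min(np.linalg.norm(self_samples - cand, axis=1)) > radius:
            detectors.append(cand)
    return np.array(detectors)

def is_anomalous(sample, detectors, radius=0.1):
    """Flag a feature vector if it lies within `radius` of any detector."""
    return bool(np.any(np.linalg.norm(detectors - sample, axis=1) <= radius))
```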
Abstract:
We consider the problem of how to maximize secure connectivity of multi-hop wireless ad hoc networks after deployment. Two approaches, based on graph augmentation problems with nonlinear edge costs, are formulated. The first one is based on establishing a secret key using only the links that are already secured by secret keys. This problem is NP-hard and does not admit a polynomial-time approximation scheme (PTAS), since the minimum cutsets to be augmented do not admit constant costs. The second one is based on increasing the power level between a pair of nodes that share a secret key to enable them to connect physically. This problem can be formulated as the optimal key establishment problem with interference constraints and two objectives: (i) maximizing the concurrent key establishment flow, and (ii) minimizing the cost. We show that both problems are NP-hard and MAX-SNP-hard (i.e., it is NP-hard to approximate them within a factor of 1 + ε for some ε > 0) via a reduction to the MAX3SAT problem. Thus, we design and implement a fully distributed algorithm for authenticated key establishment in wireless sensor networks where each sensor knows only its one-hop neighborhood. Our witness-based approaches find witnesses in the multi-hop neighborhood to authenticate the key establishment between two sensor nodes which do not share a key and which are not connected through a secure path.
Abstract:
Extracting and aggregating the relevant event records relating to an identified security incident from the multitude of heterogeneous logs in an enterprise network is a difficult challenge. Presenting the information in a meaningful way is an additional challenge. This paper looks at solutions to this problem by first identifying three main transforms: log collection, correlation, and visual transformation. Having identified that the CEE project will address the first transform, this paper focuses on the second, while the third is left for future work. To aggregate by correlating event records we demonstrate the use of two correlation methods, simple and composite. These make use of a defined mapping schema and confidence values to dynamically query the normalised dataset and to constrain result events to within a time window. Doing so improves the quality of results, which is required for the iterative re-querying process being undertaken. Final results of the process are output as nodes and edges suitable for presentation as a network graph.
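A simplified sketch of the simple correlation method as described: normalised event records sharing a mapped attribute are grouped when they fall within a time window, and the groups can then be emitted as nodes and edges. The field names, window length and grouping rule are assumptions, not the paper's schema; confidence values and composite correlation are omitted.

```python
from collections import defaultdict
from datetime import timedelta

def correlate(events, key="src_ip", window=timedelta(minutes=5)):
    """Simple correlation: bucket normalised event records (dicts with a
    datetime 'timestamp' and the mapped `key` field) by the shared attribute,
    then split each bucket wherever the gap between consecutive records
    exceeds `window`. Each resulting group can be output as nodes joined by
    edges for the network-graph presentation."""
    buckets = defaultdict(list)
    for event in events:
        buckets[event[key]].append(event)
    groups = []
    for records in buckets.values():
        records.sort(key=lambda e: e["timestamp"])
        current = [records[0]]
        for event in records[1:]:
            if event["timestamp"] - current[-1]["timestamp"] > window:
                groups.append(current)
                current = []
            current.append(event)
        groups.append(current)
    return groups
```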
Abstract:
The main aim of this paper is to describe an adaptive re-planning algorithm, based on an RRT and game theory, that produces an efficient, collision-free, obstacle-adaptive Mission Path Planner for Search and Rescue (SAR) missions. This will provide UAV autopilots and flight computers with the capability to autonomously avoid static obstacles and No Fly Zones (NFZs) through dynamic adaptive path re-planning. The methods and algorithms produce optimal collision-free paths and can be integrated into a decision aid tool and UAV autopilots.
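A bare-bones 2-D RRT sketch of the path-planning core (the game-theoretic adaptive re-planning layer is not shown); the workspace bounds, step size, goal tolerance and circular no-fly-zone test are assumed values for illustration.

```python
import math
import random

def rrt(start, goal, is_free, step=5.0, goal_tol=5.0, max_iters=5000,
        bounds=((0.0, 100.0), (0.0, 100.0))):
    """Grow a Rapidly-exploring Random Tree from `start`: sample a point,
    extend the nearest tree node one `step` towards it if the new node is
    collision free (`is_free`), and stop once the goal is within reach."""
    nodes = [start]
    parent = {start: None}
    for _ in range(max_iters):
        sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        dist = math.dist(near, sample)
        if dist == 0.0:
            continue
        if dist <= step:
            new = sample
        else:
            new = (near[0] + step * (sample[0] - near[0]) / dist,
                   near[1] + step * (sample[1] - near[1]) / dist)
        if not is_free(new):
            continue                      # inside an obstacle or no-fly zone
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) <= goal_tol:
            path, node = [], new
            while node is not None:       # walk back to the start
                path.append(node)
                node = parent[node]
            return path[::-1]
    return None                           # no path found within the budget

# Example: plan around a single circular no-fly zone centred at (50, 50).
def no_fly_free(p):
    return math.dist(p, (50.0, 50.0)) > 10.0

path = rrt((5.0, 5.0), (95.0, 95.0), no_fly_free)
```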
Abstract:
We consider the problem of maximizing secure connectivity in wireless ad hoc networks, and analyze the complexity of the post-deployment key establishment process constrained by physical layer properties such as connectivity, energy consumption and interference. Two approaches, based on graph augmentation problems with nonlinear edge costs, are formulated. The first one is based on establishing a secret key using only the links that are already secured by shared keys. This problem is NP-hard and does not admit a polynomial-time approximation scheme (PTAS), since the minimum cutsets to be augmented do not admit constant costs. The second one extends the first problem by increasing the power level between a pair of nodes that share a secret key to enable them to connect physically. This problem can be formulated as the optimal key establishment problem with interference constraints and two objectives: (i) maximizing the concurrent key establishment flow, and (ii) minimizing the cost. We prove that both problems are NP-hard and MAX-SNP-hard via a reduction to the MAX3SAT problem.