935 results for Computer algorithms.


Relevance: 20.00%

Abstract:

Brain decoding of functional Magnetic Resonance Imaging data is a pattern analysis task that links brain activity patterns to the experimental conditions. Classifiers predict neural states from the spatial and temporal patterns of brain activity extracted from multiple voxels in the functional images over a given period of time. The prediction results offer insight into the nature of neural representations and cognitive mechanisms, and the classification accuracy determines our confidence in understanding the relationship between brain activity and stimuli. In this paper, we compared the efficacy of three machine learning algorithms, namely neural networks, support vector machines, and conditional random fields, in decoding visual stimuli or neural cognitive states from functional Magnetic Resonance Imaging data. Leave-one-out cross-validation was performed to quantify the generalization accuracy of each algorithm on unseen data. The results indicated that the support vector machine and the conditional random field have comparable performance, and that the potential of the latter is worthy of further investigation.
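
For illustration, a leave-one-out decoding comparison of the kind described can be set up in a few lines; the sketch below is not the authors' pipeline, and the feature matrix, labels and classifier settings are synthetic placeholders.

```python
# Hypothetical sketch: leave-one-out cross-validation of an SVM decoder on
# voxel features. X, y, and all settings are placeholders, not the authors'
# data or configuration.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 500))   # 40 trials x 500 voxel features (synthetic)
y = rng.integers(0, 2, size=40)      # binary stimulus labels (synthetic)

decoder = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(decoder, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```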

Relevance: 20.00%

Abstract:

We consider the problem of increasing the threshold parameter of a secret-sharing scheme after the setup (share distribution) phase, without further communication between the dealer and the shareholders. Previous solutions to this problem require one to start off with a non-standard scheme designed specifically for this purpose, or to have communication between shareholders. In contrast, we show how to increase the threshold parameter of the standard Shamir secret-sharing scheme without communication between the shareholders. Our technique can thus be applied to existing Shamir schemes even if they were set up without consideration to future threshold increases. Our method is a new positive cryptographic application for lattice reduction algorithms, inspired by recent work on lattice-based list decoding of Reed-Solomon codes with noise bounded in the Lee norm. We use fundamental results from the theory of lattices (Geometry of Numbers) to prove quantitative statements about the information-theoretic security of our construction. These lattice-based security proof techniques may be of independent interest.
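
For background, standard (t, n) Shamir sharing evaluates a random polynomial of degree t − 1 whose constant term is the secret; the paper's lattice-based technique for raising t after shares are issued is not reproduced here. A minimal sketch with illustrative parameters:

```python
# Minimal (t, n) Shamir secret sharing over a prime field -- background only;
# the lattice-based threshold-increase technique from the paper is not shown.
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus (illustrative)

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```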

Relevance: 20.00%

Abstract:

Algebraic immunity AI(f), defined for a Boolean function f, measures the resistance of the function against algebraic attacks. Currently known algorithms for computing the optimal annihilator of f and AI(f) are inefficient. This work consists of two parts. In the first part, we extend the concept of algebraic immunity. In particular, we argue that a function f may be replaced by another Boolean function f^c, called the algebraic complement of f. This motivates us to examine AI(f^c). We define the extended algebraic immunity of f as AI*(f) = min{AI(f), AI(f^c)}. We prove that 0 ≤ AI(f) − AI*(f) ≤ 1. Since AI(f) − AI*(f) = 1 holds for a large number of cases, the difference between AI(f) and AI*(f) cannot be ignored in algebraic attacks. In the second part, we link Boolean functions to hypergraphs so that we can apply known results in hypergraph theory to Boolean functions. This not only allows us to find annihilators in a fast and simple way but also provides a good estimate of the upper bound on AI*(f).
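
As background on the standard notion (the extended immunity AI*(f) and the algebraic complement f^c are not implemented), algebraic immunity can be computed for small n by searching for a non-zero low-degree annihilator of f or f ⊕ 1 with linear algebra over GF(2). A brute-force sketch:

```python
# Brute-force algebraic immunity for a small Boolean function given as a
# truth table. Background illustration only; the paper's extended immunity
# AI*(f) and the algebraic complement f^c are not implemented here.
from itertools import combinations

def monomials(n, d):
    """All monomials (as tuples of variable indices) of degree <= d."""
    return [s for k in range(d + 1) for s in combinations(range(n), k)]

def gf2_rank(rows):
    """Rank over GF(2) of a list of 0/1 rows, via elimination on bitmasks."""
    rows = [int("".join("1" if b else "0" for b in r), 2) for r in rows if any(r)]
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        top = pivot.bit_length() - 1
        rows = [r ^ pivot if (r >> top) & 1 else r for r in rows]
    return rank

def has_annihilator(truth, n, d):
    """True if some non-zero g with deg(g) <= d satisfies f*g = 0,
    i.e. g vanishes on the support of f."""
    mons = monomials(n, d)
    rows = []
    for x in range(2 ** n):
        if truth[x]:
            bits = [(x >> i) & 1 for i in range(n)]
            rows.append([all(bits[i] for i in m) for m in mons])
    # g exists iff the GF(2) system has a non-trivial kernel (rank < #monomials).
    return gf2_rank(rows) < len(mons)

def algebraic_immunity(truth, n):
    comp = [1 - b for b in truth]  # truth table of f + 1
    for d in range(n + 1):
        if has_annihilator(truth, n, d) or has_annihilator(comp, n, d):
            return d
    return n

# Majority function on 3 variables: algebraic immunity 2.
maj3 = [1 if bin(x).count("1") >= 2 else 0 for x in range(8)]
print(algebraic_immunity(maj3, 3))
```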

Relevance: 20.00%

Abstract:

Motivated by privacy issues associated with dissemination of signed digital certificates, we define a new type of signature scheme called a ‘Universal Designated-Verifier Signature’ (UDVS). A UDVS scheme can function as a standard publicly-verifiable digital signature but has additional functionality which allows any holder of a signature (not necessarily the signer) to designate the signature to any desired designated-verifier (using the verifier’s public key). Given the designated-signature, the designated-verifier can verify that the message was signed by the signer, but is unable to convince anyone else of this fact. We propose an efficient deterministic UDVS scheme constructed using any bilinear group-pair. Our UDVS scheme functions as a standard Boneh-Lynn-Shacham (BLS) signature when no verifier-designation is performed, and is therefore compatible with the key-generation, signing and verifying algorithms of the BLS scheme. We prove that our UDVS scheme is secure in the sense of our unforgeability and privacy notions for UDVS schemes, under the Bilinear Diffie-Hellman (BDH) assumption for the underlying group-pair, in the random-oracle model. We also demonstrate a general constructive equivalence between a class of unforgeable and unconditionally-private UDVS schemes having unique signatures (which includes the deterministic UDVS schemes) and a class of ID-Based Encryption (IBE) schemes which contains the Boneh-Franklin IBE scheme but not the Cocks IBE scheme.

Relevance: 20.00%

Abstract:

The term “human error” can simply be defined as an error made by a human. Human error is a common explanation for malfunctions and unintended consequences of operating a system, and many factors can lead a person to commit such errors. The aim of this paper is to investigate the relationship of human error, as one contributing factor, to computer-related abuses. The paper begins by relating computer abuses to human errors, and then discusses mechanisms for mitigating these errors from social and technical perspectives. We present the 25 techniques of computer crime prevention as a heuristic device to assist in this task. A final section discusses ways of improving the adoption of security measures, followed by the conclusion.

Relevance: 20.00%

Abstract:

Outdoor robots such as planetary rovers must be able to navigate safely and reliably in order to successfully perform missions in remote or hostile environments. Mobility prediction is critical to achieving this goal due to the inherent control uncertainty faced by robots traversing natural terrain. We propose a novel algorithm for stochastic mobility prediction based on multi-output Gaussian process regression. Our algorithm considers the correlation between heading and distance uncertainty and provides a predictive model that can easily be exploited by motion planning algorithms. We evaluate our method experimentally and report results from over 30 trials in a Mars-analogue environment that demonstrate the effectiveness of our method and illustrate the importance of mobility prediction in navigating challenging terrain.
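
A much-simplified sketch of the idea, assuming synthetic data and scikit-learn's GaussianProcessRegressor, which fits the two outputs independently rather than modelling the heading-distance correlation central to the paper's method:

```python
# Simplified sketch: Gaussian process regression mapping a commanded action
# to the achieved (distance, heading). Synthetic data; scikit-learn treats
# the two outputs independently, unlike the correlated multi-output model
# described in the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(50, 2))            # commanded (distance, turn rate)
y = X + 0.05 * rng.standard_normal((50, 2))        # achieved (distance, heading) with slip noise

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(X, y)
mean, std = gp.predict(np.array([[0.5, 0.2]]), return_std=True)
print("predicted outcome:", mean, "uncertainty:", std)
```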

Relevance: 20.00%

Abstract:

This paper presents a full system demonstration of dynamic sensor-based reconfiguration of a networked robot team. Robots sense obstacles in their environment locally and dynamically adapt their global geometric configuration to conform to an abstract goal shape. We present a novel two-layer planning and control algorithm for team reconfiguration that is decentralised and assumes local (neighbour-to-neighbour) communication only. The approach is designed to be resource-efficient and we show experiments using a team of nine mobile robots with modest computation, communication, and sensing. The robots use acoustic beacons for localisation and can sense obstacles in their local neighbourhood using IR sensors. Our results demonstrate globally-specified reconfiguration from local information in a real robot network, and highlight limitations of standard mesh networks in implementing decentralised algorithms.
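
As a generic illustration of neighbour-only shape control (a standard consensus rule, not the authors' two-layer algorithm), each robot can repeatedly adjust its position so that its position minus its assigned offset in the goal shape agrees with that of its neighbours:

```python
# Generic illustration of decentralised, neighbour-only shape control using
# a standard consensus rule. This is NOT the two-layer algorithm from the
# paper; it only shows convergence to a goal shape from local information.
import numpy as np

rng = np.random.default_rng(2)
n = 9
positions = rng.uniform(0, 10, size=(n, 2))                   # scattered initial poses
angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
offsets = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # goal shape: a circle
neighbours = [[(i - 1) % n, (i + 1) % n] for i in range(n)]   # ring communication graph

for _ in range(200):
    new = positions.copy()
    for i in range(n):
        # Consensus on (position - offset): the team converges to the shape
        # up to a common translation, using neighbour information only.
        err = sum((positions[j] - offsets[j]) - (positions[i] - offsets[i])
                  for j in neighbours[i])
        new[i] = positions[i] + 0.2 * err
    positions = new

print(np.round(positions - offsets, 2))  # rows become (nearly) identical
```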

Relevance: 20.00%

Abstract:

We have developed a Hierarchical Look-Ahead Trajectory Model (HiLAM) that incorporates the firing pattern of medial entorhinal grid cells in a planning circuit that includes interactions with hippocampus and prefrontal cortex. We show the model’s flexibility in representing large real-world environments using odometry information obtained from challenging video sequences. We acquire the visual data from a camera mounted on a small tele-operated vehicle. The camera has a panoramic field of view with its focal point approximately 5 cm above the ground level, similar to what would be expected from a rat’s point of view. Using established algorithms for calculating perceptual speed from the apparent rate of visual change over time, we generate raw dead reckoning information which loses spatial fidelity over time due to error accumulation. We rectify the loss of fidelity by exploiting the loop-closure detection ability of a biologically inspired robot navigation model termed RatSLAM. The rectified motion information serves as a velocity input to HiLAM to encode the environment in the form of grid cell and place cell maps. Finally, we show goal-directed path planning results of HiLAM in two different environments: an indoor square maze used in rodent experiments and an outdoor arena more than two orders of magnitude larger than the indoor maze. Together these results bridge for the first time the gap between higher fidelity bio-inspired navigation models (HiLAM) and more abstracted but highly functional bio-inspired robotic mapping systems (RatSLAM), and move from simulated environments into real-world studies in rodent-sized arenas and beyond.
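
A generic sketch of the dead-reckoning step and its drift (not the HiLAM or RatSLAM implementation), showing why loop closure is needed to rectify the trajectory:

```python
# Generic dead-reckoning illustration: integrating noisy speed and turn-rate
# estimates accumulates position error over time, which is why loop-closure
# detection (as in RatSLAM) is needed to rectify the trajectory.
# Not the authors' code; noise levels are arbitrary.
import math, random

random.seed(0)
true_x = true_y = est_x = est_y = 0.0
true_th = est_th = 0.0

for step in range(1000):
    v, w = 0.05, 0.01                           # true speed (m) and turn (rad) per step
    v_meas = v * (1 + random.gauss(0, 0.05))    # 5% speed noise (e.g. visual odometry)
    w_meas = w + random.gauss(0, 0.002)         # small heading noise
    true_th += w
    est_th += w_meas
    true_x += v * math.cos(true_th)
    true_y += v * math.sin(true_th)
    est_x += v_meas * math.cos(est_th)
    est_y += v_meas * math.sin(est_th)

drift = math.hypot(est_x - true_x, est_y - true_y)
print(f"position error after 1000 steps: {drift:.2f} m")
```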

Relevance: 20.00%

Abstract:

Computer modelling has been used extensively in some processes in the sugar industry to achieve significant gains. This paper reviews the investigations carried out over approximately the last twenty-five years, including the successes but also areas where problems and delays have been encountered. In that time the capabilities of both hardware and software have increased dramatically. For some processes, such as cane cleaning, cane billet preparation, and sugar drying, the application of computer modelling towards improved equipment design and operation has been quite limited. A particular problem has been the large number of particles and particle interactions in these applications, which, if modelled individually, are computationally very intensive. Despite the problems, some attempts have already been made and knowledge gained on tackling these issues. Even if the detailed modelling is wanting, a model can provide some useful insights into the processes. Some options for attacking these more intensive problems include the use of commercial software packages, which are usually very robust and allow the addition of user-supplied subroutines to adapt the software to particular problems. Suppliers of such software usually charge a fee per CPU licence, which is often problematic for large problems that require the use of many CPUs. Another option to consider is using open source software that has been developed with the capability to access large parallel resources. Such software has the added advantage of access to the full internal coding. This paper identifies and discusses in detail the software options with the potential capability to achieve improvements in the sugar industry.

Relevance: 20.00%

Abstract:

The generation of a correlation matrix for a set of genomic sequences is a common requirement in many bioinformatics problems such as phylogenetic analysis. Each sequence may be millions of bases long and there may be thousands of such sequences that we wish to compare, so not all sequences may fit into main memory at the same time. Each sequence needs to be compared with every other sequence, so we will generally need to page some sequences in and out more than once. In order to minimize execution time we need to minimize this I/O. This paper develops an approach for faster and scalable computing of large-size correlation matrices through the maximal exploitation of available memory and reducing the number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on different computing platforms with different amounts of memory and can be applied to different bioinformatics problems with different correlation matrix sizes. The significant performance improvement of the approach over previous work is demonstrated through benchmark examples.
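
One generic way to see the I/O trade-off: if only a limited block of sequences fits in memory, a tiled schedule still compares every pair while loading each sequence far fewer times than a naive pair-by-pair loop. This sketch is not the paper's algorithm, and load() and correlate() are placeholders:

```python
# Blocked (tiled) schedule for an all-pairs correlation matrix when only a
# small number of sequences fits in memory at once. Illustrative only; the
# paper's memory-exploitation algorithm is not reproduced, and load() and
# correlate() are placeholder functions.
import numpy as np

def load(index):
    """Placeholder: read sequence `index` from disk."""
    return np.random.default_rng(index).integers(0, 4, size=1000)

def correlate(a, b):
    """Placeholder pairwise score between two sequences."""
    return float(np.mean(a == b))

def correlation_matrix(n, block):
    C = np.eye(n)
    for i0 in range(0, n, block):
        rows = {i: load(i) for i in range(i0, min(i0 + block, n))}  # one read per row block
        for j0 in range(i0, n, block):
            cols = rows if j0 == i0 else {j: load(j) for j in range(j0, min(j0 + block, n))}
            for i, a in rows.items():
                for j, b in cols.items():
                    if i < j:
                        C[i, j] = C[j, i] = correlate(a, b)
    return C

print(correlation_matrix(n=8, block=3).round(2))
```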

Relevance: 20.00%

Abstract:

The mining industry faces concurrent pressures of reducing water use, energy consumption and greenhouse gas (GHG) emissions in coming years. However, the interactions between water and energy use, as well as GHG emissions have largely been neglected in modelling studies to date. In addition, investigations tend to focus on the unit operation scale, with little consideration of whole-of-site or regional scale effects. This paper presents an application of a hierarchical systems model (HSM) developed to represent water, energy and GHG emissions fluxes at scales ranging from the unit operation, to the site level, to the regional level. The model allows for the linkages between water use, energy use and GHG emissions to be examined in a flexible and intuitive way, so that mine sites can predict energy and emissions impacts of water use reduction schemes and vice versa. This paper examines whether this approach can also be applied to the regional scale with multiple mine sites. The model is used to conduct a case study of several coal mines in the Bowen Basin, Australia, to compare the utility of centralised and decentralised mine water treatment schemes. The case study takes into account geographical factors (such as water pumping distances and elevations), economic factors (such as capital and operating cost curves for desalination treatment plants) and regional factors (such as regionally varying climates and associated variance in mine water volumes and quality). The case study results indicate that treatment of saline mine water incurs a trade-off between water and energy use in all cases. However, significant cost differences between centralised and decentralised schemes can be observed in a simple economic analysis. Further research will examine the possibility for deriving model up-scaling algorithms to reduce computational requirements.

Relevance: 20.00%

Abstract:

Bayesian experimental design is a fast growing area of research with many real-world applications. As computational power has increased over the years, so has the development of simulation-based design methods, which involve a number of algorithms, such as Markov chain Monte Carlo, sequential Monte Carlo and approximate Bayes methods, facilitating more complex design problems to be solved. The Bayesian framework provides a unified approach for incorporating prior information and/or uncertainties regarding the statistical model with a utility function which describes the experimental aims. In this paper, we provide a general overview on the concepts involved in Bayesian experimental design, and focus on describing some of the more commonly used Bayesian utility functions and methods for their estimation, as well as a number of algorithms that are used to search over the design space to find the Bayesian optimal design. We also discuss other computational strategies for further research in Bayesian optimal design.
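
A toy example of the simulation-based approach (not taken from the paper): choosing the sampling time for an exponential-decay experiment by maximising a Monte Carlo estimate of the expected information gain over a grid of candidate designs.

```python
# Toy illustration of simulation-based Bayesian design: pick the sampling
# time t for an experiment y ~ Normal(exp(-theta * t), sigma^2) so as to
# maximise the expected information gain about theta, estimated by a simple
# (biased but common) nested Monte Carlo scheme. Model, prior and settings
# are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.05
M = 2000  # Monte Carlo sample size

def log_lik(y, theta, t):
    mu = np.exp(-theta * t)
    return -0.5 * ((y - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def expected_information_gain(t):
    theta = rng.uniform(0.5, 2.0, size=M)                       # prior draws
    y = np.exp(-theta * t) + sigma * rng.standard_normal(M)     # simulated data
    inner = log_lik(y[:, None], theta[None, :], t)              # M x M likelihoods
    log_evidence = np.log(np.mean(np.exp(inner), axis=1))       # MC estimate of p(y | t)
    return np.mean(log_lik(y, theta, t) - log_evidence)

designs = np.linspace(0.1, 3.0, 15)
utilities = [expected_information_gain(t) for t in designs]
best = designs[int(np.argmax(utilities))]
print(f"best sampling time on this grid: t = {best:.2f}")
```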

Relevance: 20.00%

Abstract:

Background Detection of outbreaks is an important part of disease surveillance. Although many algorithms have been designed for detecting outbreaks, few have been specifically assessed against diseases that have distinct seasonal incidence patterns, such as those caused by vector-borne pathogens. Methods We applied five previously reported outbreak detection algorithms to Ross River virus (RRV) disease data (1991-2007) for the four local government areas (LGAs) of Brisbane, Emerald, Redland and Townsville in Queensland, Australia. The methods used were the Early Aberration Reporting System (EARS) C1, C2 and C3 methods, negative binomial cusum (NBC), historical limits method (HLM), Poisson outbreak detection (POD) method and the purely temporal SaTScan analysis. Seasonally-adjusted variants of the NBC and SaTScan methods were developed. Some of the algorithms were applied using a range of parameter values, resulting in 17 variants of the five algorithms. Results The 9,188 RRV disease notifications that occurred in the four selected regions over the study period showed marked seasonality, which adversely affected the performance of some of the outbreak detection algorithms. Most of the methods examined were able to detect the same major events. The exception was the seasonally-adjusted NBC methods that detected an excess of short signals. The NBC, POD and temporal SaTScan algorithms were the only methods that consistently had high true positive rates and low false positive and false negative rates across the four study areas. The timeliness of outbreak signals generated by each method was also compared but there was no consistency across outbreaks and LGAs. Conclusions This study has highlighted several issues associated with applying outbreak detection algorithms to seasonal disease data. In lieu of a true gold standard, a quantitative comparison is difficult and caution should be taken when interpreting the true positives, false positives, sensitivity and specificity.
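
To ground one of the simpler detectors, the EARS C1 statistic compares each day's count with the mean and standard deviation of the immediately preceding seven days and flags days exceeding (commonly) three standard deviations; the sketch below uses those widely cited defaults on synthetic data and omits the refinements of operational implementations.

```python
# Simplified illustration of the EARS C1 statistic: flag a day whose count
# exceeds the mean of the preceding 7 days by more than 3 of their standard
# deviations. The 7-day baseline and 3-sigma threshold are the commonly
# cited defaults; operational implementations add refinements not shown here.
import numpy as np

def ears_c1(counts, baseline=7, threshold=3.0):
    counts = np.asarray(counts, dtype=float)
    alarms = []
    for t in range(baseline, len(counts)):
        window = counts[t - baseline:t]
        mu, sd = window.mean(), window.std(ddof=1)
        c1 = (counts[t] - mu) / sd if sd > 0 else 0.0  # crude guard for flat baselines
        if c1 > threshold:
            alarms.append(t)
    return alarms

# Synthetic seasonal series with an injected outbreak around day 100.
days = np.arange(365)
counts = np.random.default_rng(4).poisson(5 + 4 * np.sin(2 * np.pi * days / 365))
counts[100:105] += 25
print(ears_c1(counts))
```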

Relevance: 20.00%

Abstract:

With the increasing importance of Application Domain Specific Processor (ADSP) design, a significant challenge is to identify special-purpose operations for implementation as a customized instruction. While many methodologies have been proposed for this purpose, they all work for a single algorithm chosen from the target application domain. Such algorithm-specific approaches are not suitable for designing instruction sets applicable to a whole family of related algorithms. For an entire range of related algorithms, this paper develops a methodology for identifying compound operations, as a basis for designing “domain-specific” Instruction Set Architectures (ISAs) that can efficiently run most of the algorithms in a given domain. Our methodology combines three different static analysis techniques to identify instruction sequences common to several related algorithms: identification of (non-branching) instruction sequences that occur commonly across the algorithms; identification of instruction sequences nested within iterative constructs that are thus executed frequently; and identification of commonly-occurring instruction sequences that span basic blocks. Choosing different combinations of these results enables us to design domain-specific special operations with different desired characteristics, such as performance or suitability as a library function. To demonstrate our approach, case studies are carried out for a family of thirteen string matching algorithms. Finally, the validity of our static analysis results is confirmed through independent dynamic analysis experiments and performance improvement measurements.
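
A rough illustration of the first of the three analyses, finding non-branching instruction sequences shared across algorithms, using synthetic instruction traces rather than the paper's static-analysis toolchain:

```python
# Toy illustration of one of the three analyses: find fixed-length,
# non-branching instruction sequences (n-grams) that occur in most
# algorithms of a family. Instruction traces are synthetic; the paper's
# static-analysis toolchain and the other two analyses are not reproduced.
from collections import Counter

def ngrams(seq, n):
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

algorithms = {
    "naive": ["load", "load", "cmp", "branch", "add", "store"],
    "kmp":   ["load", "cmp", "branch", "add", "load", "cmp", "add", "store"],
    "bm":    ["load", "sub", "cmp", "branch", "add", "load", "store"],
}

n = 2
counts = Counter()
for trace in algorithms.values():
    counts.update(ngrams(trace, n))  # count each sequence at most once per algorithm

# Keep sequences shared by at least two algorithms, excluding branches.
common = [seq for seq, c in counts.items() if c >= 2 and "branch" not in seq]
print("candidate compound operations:", common)
```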

Relevance: 20.00%

Abstract:

For robots operating in outdoor environments, a number of factors, including weather, time of day, rough terrain, high speeds, and hardware limitations, make performing vision-based simultaneous localization and mapping with current techniques infeasible due to factors such as image blur and/or underexposure, especially on smaller platforms and low-cost hardware. In this paper, we present novel visual place-recognition and odometry techniques that address the challenges posed by low lighting, perceptual change, and low-cost cameras. Our primary contribution is a novel two-step algorithm that combines fast low-resolution whole image matching with a higher-resolution patch-verification step, as well as image saliency methods that simultaneously improve performance and decrease computing time. The algorithms are demonstrated using consumer cameras mounted on a small vehicle in a mixed urban and vegetated environment and a car traversing highway and suburban streets, at different times of day and night and in various weather conditions. The algorithms achieve reliable mapping over the course of a day, both when incrementally incorporating new visual scenes from different times of day into an existing map, and when using a static map comprising visual scenes captured at only one point in time. Using the two-step place-recognition process, we demonstrate for the first time single-image, error-free place recognition at recall rates above 50% across a day-night dataset without prior training or utilization of image sequences. This place-recognition performance enables topologically correct mapping across day-night cycles.
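
A simplified sketch of the first step, low-resolution, patch-normalised whole-image matching by sum of absolute differences, on synthetic images; the higher-resolution patch-verification and saliency steps are only indicated by comments:

```python
# Simplified sketch of low-resolution whole-image place matching: downsample,
# patch-normalise, and rank database images by sum of absolute differences.
# The paper's higher-resolution patch-verification step and saliency methods
# are not implemented here; images are synthetic arrays.
import numpy as np

def preprocess(img, size=(32, 24), patch=4):
    """Downsample by block averaging, then normalise small patches to reduce
    the effect of global illumination change (e.g. day vs night)."""
    h, w = size[1], size[0]
    ys = np.linspace(0, img.shape[0], h + 1, dtype=int)
    xs = np.linspace(0, img.shape[1], w + 1, dtype=int)
    small = np.array([[img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                       for j in range(w)] for i in range(h)])
    out = np.empty_like(small)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = small[i:i + patch, j:j + patch]
            out[i:i + patch, j:j + patch] = (block - block.mean()) / (block.std() + 1e-6)
    return out

def best_match(query, database):
    """Index of the database image with the smallest SAD score; a higher-
    resolution patch-verification step would then confirm the match."""
    q = preprocess(query)
    scores = [np.abs(q - d).sum() for d in database]
    return int(np.argmin(scores))

rng = np.random.default_rng(5)
database_raw = [rng.random((240, 320)) for _ in range(10)]
database = [preprocess(d) for d in database_raw]
query = database_raw[3] + 0.1 * rng.random((240, 320))  # revisit of place 3, changed lighting
print(best_match(query, database))
```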