954 results for computer algorithm
Abstract:
Given the serious nature of computer crime, and its global nature and implications, it is clear that there is a crucial need for a common understanding of such criminal activity internationally in order to deal with it effectively. Research into the extent to which legislation, international initiatives, and policies and procedures to combat and investigate computer crime are consistent globally is therefore of enormous importance. The challenge is to study, analyse, and compare the policies and practices for combating computer crime under different jurisdictions in order to identify the extent to which they are consistent with each other and with international guidelines, and the extent of their successes and limitations. The ultimate purpose is to identify areas where improvements are needed and what those improvements should be. This thesis examines approaches used for combating computer crime, including money laundering, in Australia, the UAE, the UK and the USA, four countries which represent a spectrum of economic development and culture. It does so in the context of the guidelines of international organizations such as the Council of Europe (CoE) and the Financial Action Task Force (FATF). In the case of the UAE, we also examine the cultural influences which differentiate it from the other three countries and which have necessarily been a factor in shaping its approaches for countering money laundering in particular. The thesis concludes that, because of the transnational nature of computer crime, there is a need internationally for further harmonisation of approaches for combating computer crime. The specific contributions of the thesis are as follows:
- Developing a new unified, comprehensive taxonomy of computer crime based upon the dual characteristics of the role of the computer and the contextual nature of the crime
- Revealing differences in computer crime legislation in Australia, the UAE, the UK and the USA, and how they correspond to the CoE Convention on Cybercrime, and identifying a new framework for developing harmonised computer crime or cybercrime legislation globally
- Identifying some important issues that continue to create problems for law enforcement agencies, such as insufficient resources, coping internationally with computer crime legislation that differs between countries, having comprehensive documented procedures and guidelines for combating computer crime, and reporting and recording of computer crime offences as distinct from other forms of crime
- Completing the most comprehensive study currently available regarding the extent of money laundered in four such developed or fast-developing countries
- Identifying that the UK and the USA are the most advanced with regard to anti-money laundering and combating the financing of terrorism (AML/CFT) systems among the four countries, based on compliance with the FATF recommendations.
In addition, the thesis has identified that local factors have affected how the UAE has implemented its financial and AML/CFT systems, and reveals that such local and cultural factors should be taken into account when implementing or evaluating any country's AML/CFT system.
Abstract:
Circuit breaker restrikes are unwanted occurrences which can ultimately lead to breaker failure. Before 2008, there was little evidence in the literature of monitoring techniques based on the measurement and interpretation of restrikes produced during switching of capacitor banks and shunt reactor banks. In 2008 a non-intrusive radiometric restrike measurement method, as well as a hardware restrike detection algorithm, was developed. The limitations of the radiometric measurement method are a band-limited frequency response and limited accuracy in amplitude determination. Existing detection methods and algorithms require the use of wide-bandwidth current transformers and voltage dividers. A novel non-intrusive restrike diagnostic algorithm using ATP (Alternative Transient Program) and wavelet transforms is proposed. Wavelet transforms are widely used in signal processing; here the diagnostic is divided into two tests, i.e. restrike detection and an energy-level test based on deteriorated waveforms for different types of restrike. A 'db5' wavelet was selected, as it gave a 97% correct diagnostic rate when evaluated against a database of diagnostic signatures. The algorithm was also tested using restrike waveforms simulated under different network parameters, which gave 92% correct diagnostic responses. The diagnostic technique and methodology developed in this research can be applied to any power monitoring system, with slight modification, for restrike detection.
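The abstract does not spell out the wavelet tests themselves, so the following is only a rough sketch of the kind of 'db5' wavelet-energy screening it describes: a decomposition of a sampled breaker waveform whose detail-band energy is compared against a threshold. It assumes the PyWavelets package, and the sampling rate, decomposition level and threshold are illustrative values, not taken from the thesis.

```python
# Illustrative sketch only: a generic 'db5' wavelet energy screen for
# transient (restrike-like) activity. The level, threshold and test
# signal are hypothetical; the thesis's actual detection and energy
# tests are not reproduced here.
import numpy as np
import pywt

def detail_energy(signal, wavelet="db5", level=4):
    """Return the energy in each detail band of a wavelet decomposition."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # coeffs[0] is the approximation; coeffs[1:] are detail bands (coarse -> fine)
    return [float(np.sum(c ** 2)) for c in coeffs[1:]]

def looks_like_restrike(signal, threshold=1e-3):
    """Flag a waveform window whose detail-band (high-frequency) energy is large."""
    return sum(detail_energy(signal)) > threshold

# Example: a 50 Hz waveform with an injected high-frequency burst
fs = 100_000
t = np.arange(0, 0.04, 1 / fs)
clean = np.sin(2 * np.pi * 50 * t)
burst = clean.copy()
burst[1500:1550] += 0.2 * np.sin(2 * np.pi * 20_000 * t[1500:1550])
print(looks_like_restrike(clean), looks_like_restrike(burst))  # False, True
```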
Abstract:
Most web service discovery systems use keyword-based search algorithms and, although partially successful, sometimes fail to satisfy some users' information needs. This has given rise to several semantics-based approaches that seek to go beyond simple attribute matching and try to capture the semantics of services. However, the results reported in the literature vary, and in many cases are worse than the results obtained by keyword-based systems. We believe the accuracy of the mechanisms used to extract tokens from the non-natural-language sections of WSDL files directly affects the performance of these techniques, because some of them can be more sensitive to noise. In this paper, three existing tokenization algorithms are evaluated and a new algorithm that outperforms all the algorithms found in the literature is introduced.
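The abstract does not describe the evaluated or proposed tokenizers, so the sketch below is only a generic baseline of the kind of identifier splitting applied to the non-natural-language parts of WSDL files (operation, message and parameter names): breaking camelCase, underscores and digit boundaries into lower-cased tokens. The function name and regular expression are illustrative, not the paper's method.

```python
# Illustrative sketch only: a baseline tokenizer for WSDL-style
# identifiers, splitting on underscores/hyphens, camelCase boundaries
# and letter-digit boundaries. Not the algorithm proposed in the paper.
import re

def tokenize_identifier(identifier: str) -> list[str]:
    """Split e.g. 'getCustomerAddressV2' into ['get', 'customer', 'address', 'v', '2']."""
    parts = re.split(r"[_\-\s]+", identifier)
    tokens = []
    for part in parts:
        # Acronyms ([A-Z]+ not followed by lowercase), capitalized words,
        # lowercase runs, and digit runs.
        tokens.extend(re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|\d+", part))
    return [t.lower() for t in tokens if t]

print(tokenize_identifier("GetStockQuoteHTTPRequest_v2"))
# ['get', 'stock', 'quote', 'http', 'request', 'v', '2']
```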
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
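For readability, the bound quoted above can be restated as a display. Here \widehat{E}_m stands for the abstract's "error estimate related to squared error on the training set" and c is an unspecified constant absorbing the omitted log A and log m factors; both symbols are our labels, not necessarily the paper's notation.

```latex
% Schematic form of the quoted bound: misclassification probability is
% bounded by the training error estimate plus a term that depends on the
% weight bound A, the input dimension n and the sample size m.
\Pr[\text{misclassification}] \;\le\; \widehat{E}_m \;+\; c\, A^{3} \sqrt{\frac{\log n}{m}}
```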
Abstract:
The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous-time Markov chain. When the solution is needed at a long time horizon, or when convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, overall efficiency in solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems are given to show that significant computational savings are achieved in practical applications.
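The abstract's relaxation strategy is not detailed here; as background, a minimal sketch of the standard exact-product uniformization it builds on might look like the following. The inexact-product variant studied in the paper would replace the exact v @ P step with a cheaper approximate product. The 3-state generator, time point and tolerance are made-up example values, and no scaling is applied for very large Λt.

```python
# Illustrative sketch only: basic uniformization for a small CTMC,
# computing p(t) = p0 * exp(Q t) as a Poisson-weighted sum of DTMC steps.
import numpy as np

def uniformization(Q, p0, t, tol=1e-10):
    """Transient distribution of a CTMC with generator Q, initial distribution p0."""
    Lam = max(-np.diag(Q))                # uniformization rate Lambda >= max |q_ii|
    P = np.eye(Q.shape[0]) + Q / Lam      # DTMC transition matrix P = I + Q/Lambda
    weight = np.exp(-Lam * t)             # Poisson(Lambda*t) probability, k = 0
    v = p0.copy()                         # v = p0 @ P^k
    result = weight * v
    k, acc = 0, weight
    while 1.0 - acc > tol:                # stop when the Poisson tail is below tol
        k += 1
        v = v @ P                         # one vector-matrix product per term
        weight *= Lam * t / k
        result += weight * v
        acc += weight
    return result

# Tiny made-up 3-state example
Q = np.array([[-2.0,  2.0, 0.0],
              [ 1.0, -3.0, 2.0],
              [ 0.0,  1.0, -1.0]])
p0 = np.array([1.0, 0.0, 0.0])
print(uniformization(Q, p0, t=1.5))
```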