969 results for Common Scrambling Algorithm Stream Cipher
Abstract:
Object tracking systems require accurate segmentation of the objects from the background for effective tracking. Motion segmentation or optical flow can be used to segment incoming images. Whilst optical flow allows multiple moving targets to be separated based on their individual velocities, optical flow techniques are prone to errors caused by changing lighting and occlusions, both common in a surveillance environment. Motion segmentation techniques are more robust to fluctuating lighting and occlusions, but do not provide information on the direction of the motion. In this paper we propose a combined motion segmentation/optical flow algorithm for use in object tracking. The proposed algorithm uses the motion segmentation results to inform the optical flow calculations, ensuring that optical flow is only calculated in regions of motion and improving the performance of the optical flow around the edges of moving objects. Optical flow is calculated at pixel resolution and tracking of flow vectors is employed to improve performance and detect discontinuities, which can indicate the location of overlaps between objects. The algorithm is evaluated by attempting to extract a moving target within the flow images, given expected horizontal and vertical movement (i.e. the algorithm's intended use for object tracking). Results show that the proposed algorithm outperforms other widely used optical flow techniques for this surveillance application.
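The masking idea described in this abstract, computing flow only where the segmentation flags movement, can be sketched in a few lines. The brute-force patch matching below is a toy stand-in for a real optical flow method, and all names and parameters are illustrative, not the paper's algorithm:

```python
def masked_block_flow(prev, curr, mask, radius=1):
    """For each pixel the motion segmentation flags as moving, find the integer
    displacement (dy, dx) whose 3x3 neighbourhood in the next frame best matches
    the pixel's neighbourhood in the previous frame (sum of absolute differences)."""
    rows, cols = len(prev), len(prev[0])
    flow = {}
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if not mask[r][c]:
                continue  # segmentation says static: skip the flow search here
            best, best_cost = (0, 0), float("inf")
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    if not (1 <= r + dy < rows - 1 and 1 <= c + dx < cols - 1):
                        continue
                    cost = sum(abs(prev[r + i][c + j] - curr[r + dy + i][c + dx + j])
                               for i in (-1, 0, 1) for j in (-1, 0, 1))
                    if cost < best_cost:
                        best, best_cost = (dy, dx), cost
            flow[(r, c)] = best
    return flow
```

Restricting the search to masked pixels is what makes the combination cheap: the expensive matching loop runs only over regions the segmentation already believes are moving.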
Abstract:
Hamilton (2001) makes a number of comments on our paper (Harding and Pagan, 2002b). The objectives of this rejoinder are, firstly, to note the areas in which we agree; secondly, to define with greater clarity the areas in which we disagree; and, thirdly, to point to other papers, including a longer version of this response, where we have dealt with some of the issues that he raises. The core of our debate with him is whether one should use an algorithm with a specified set of rules for determining the turning points in economic activity or whether one should use a parametric model that features latent states. Hamilton begins his criticism by stating that there is a philosophical distinction between the two methods for dating cycles and concludes that the method we use “leaves vague and intuitive exactly what this algorithm is intended to measure”. Nothing could be further from the truth. When seeking ways to decide on whether a turning point has occurred it is always useful to ask the question, what is a recession? Common usage suggests that it is a decline in the level of economic activity that lasts for some time. For this reason it has become standard to describe a recession as a decline in GDP that lasts for more than two quarters. Finding periods in which quarterly GDP declined for two periods is exactly what our approach does. What is vague about this?
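The rule the authors defend is concrete enough to state directly in code. This is a minimal sketch of the two-consecutive-declines rule only, not the authors' full dating algorithm; the function names are illustrative.

```python
def recession_quarters(gdp):
    """Given a list of quarterly GDP levels, return the indices of quarters
    that belong to a run of two or more consecutive quarterly declines."""
    declines = [gdp[i] < gdp[i - 1] for i in range(1, len(gdp))]
    recession = set()
    run = 0
    for i, fell in enumerate(declines):
        run = run + 1 if fell else 0
        if run >= 2:
            # declines[i] compares gdp[i+1] with gdp[i], so the run of
            # declining quarters ends at gdp index i + 1
            recession.update(range(i + 2 - run, i + 2))
    return sorted(recession)
```

Note that a single-quarter dip never qualifies: the rule only fires once two successive declines have been observed, which is exactly the "lasts for some time" requirement made operational.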
Abstract:
There is not a single, coherent, jurisprudence for civil society organisations. Pressure for a clearly enunciated body of law applying to the whole of this sector of society continues to increase. The rise of third sector scholarship, the retreat of the welfare state, the rediscovery of the concept of civil society and pressures to strengthen social capital have all contributed to an ongoing stream of inquiry into the laws that regulate and favour civil society organisations. There have been almost thirty inquiries over the last sixty years into the doctrine of charitable purpose in common law countries. Those inquiries have established that problems with the law applying to civil society organisations are rooted in the common law adopting a ‘technical’ definition of charitable purpose and the failure of this body of law to develop in response to societal changes. Even though it is now well recognised that problems with law reform stem from problems inherent in the doctrine of charitable purpose, statutory reforms have merely ‘bolted on’ additions to the flawed ‘technical’ definition. In this way the scope of operation of the law has been incrementally expanded to include a larger number of civil society organisations. This piecemeal approach continues the exclusion of most civil society organisations from the law of charities discourse, and fails to address the underlying jurisprudential problems. Comprehensive reform requires revisiting the foundational problems embedded in the doctrine of charitable purpose, being informed by recent scholarship, and a paradigm shift that extends the doctrine to include all civil society organisations. Scholarly inquiry into civil society organisations, particularly from within the discipline of neoclassical economics, has elucidated insights that can inform legal theory development.
This theory development requires decoupling the two distinct functions performed by the doctrine of charitable purpose, which are: setting the scope of regulation, and determining entitlement to favours, such as tax exemption. If the two different functions of the doctrine are considered separately in the light of theoretical insights from other disciplines, the architecture for a jurisprudence emerges that facilitates regulation, but does not necessarily favour all civil society organisations. Informed by that broader discourse it is argued that when determining the scope of regulation, civil society organisations are identified by reference to charitable purposes that are not technically defined. These charitable purposes are in essence purposes which are: Altruistic, for public Benefit, pursued without Coercion. These charitable purposes differentiate civil society organisations from organisations in the three other sectors, namely: Business, which is manifest in lack of altruism; Government, which is characterised by coercion; and Family, which is characterised by benefits being private not public. When determining entitlement to favour, it is theorised that it is the extent or nature of the public benefit evident in the pursuit of a charitable purpose that justifies entitlement to favour. Entitlement to favour based on the extent of public benefit is theoretically simpler: the greater the public benefit, the greater the justification for favour. To be entitled to favour based on the nature of a purpose being charitable, the purpose must fall within one of three categories developed from the first three heads of Pemsel’s case (the landmark categorisation case on taxation favour). The three categories proposed are: Dealing with Disadvantage; Encouraging Edification; and Facilitating Freedom.
In this alternative paradigm a recast doctrine of charitable purpose underpins a jurisprudence for civil society in a way similar to the way contract underpins the jurisprudence for the business sector, the way that freedom from arbitrary coercion underpins the jurisprudence of the government sector and the way that equity within families underpins succession and family law jurisprudence for the family sector. This alternative architecture for the common law, developed from the doctrine of charitable purpose but inclusive of all civil society purposes, is argued to cover the field of the law applying to civil society organisations and warrants its own third space as a body of law between public law and private law in jurisprudence.
Abstract:
This paper analyzes the common factor structure of US, German, and Japanese Government bond returns. Unlike previous studies, we formally take into account the presence of country-specific factors when estimating common factors. We show that the classical approach of running a principal component analysis on a multi-country dataset of bond returns captures both local and common influences and therefore tends to pick too many factors. We conclude that US bond returns share only one common factor with German and Japanese bond returns. This single common factor is associated most notably with changes in the level of domestic term structures. We show that accounting for country-specific factors improves the performance of domestic and international hedging strategies.
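The classical approach the authors critique, running a principal component analysis directly on a pooled multi-country panel of returns, can be sketched as follows. The eigendecomposition route and all names here are illustrative, not the paper's estimator:

```python
import numpy as np

def principal_components(returns):
    """PCA via eigendecomposition of the sample covariance matrix.
    `returns` is T x N (observations x series). Returns the eigenvalues
    sorted in descending order and the matching eigenvectors (loadings)."""
    x = returns - returns.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

def variance_explained(eigenvalues):
    """Share of total variance captured by each component."""
    return eigenvalues / eigenvalues.sum()
```

The paper's point is that when the panel mixes countries, the leading components of this decomposition blend genuinely common influences with purely local ones, which is why a naive variance-explained cutoff tends to retain too many "common" factors.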
Abstract:
Automatic Speech Recognition (ASR) has matured into a technology which is becoming more common in our everyday lives, and is emerging as a necessity to minimise driver distraction when operating in-car systems such as navigation and infotainment. In “noise-free” environments, word recognition performance of these systems has been shown to approach 100%, however this performance degrades rapidly as the level of background noise is increased. Speech enhancement is a popular method for making ASR systems more robust. Single-channel spectral subtraction was originally designed to improve human speech intelligibility and many attempts have been made to optimise this algorithm in terms of signal-based metrics such as maximised Signal-to-Noise Ratio (SNR) or minimised speech distortion. Such metrics are used to assess enhancement performance for intelligibility, not speech recognition, therefore making them sub-optimal for ASR applications. This research investigates two methods for closely coupling subtractive-type enhancement algorithms with ASR: (a) a computationally-efficient Mel-filterbank noise subtraction technique based on likelihood-maximisation (LIMA), and (b) introducing phase spectrum information to enable spectral subtraction in the complex frequency domain. Likelihood-maximisation uses gradient-descent to optimise parameters of the enhancement algorithm to best fit the acoustic speech model given a word sequence known a priori. Whilst this technique is shown to improve the ASR word accuracy performance, it is also identified to be particularly sensitive to non-noise mismatches between the training and testing data. Phase information has long been ignored in spectral subtraction as it is deemed to have little effect on human intelligibility. In this work it is shown that phase information is important in obtaining highly accurate estimates of clean speech magnitudes which are typically used in ASR feature extraction.
Phase Estimation via Delay Projection is proposed based on the stationarity of sinusoidal signals, and demonstrates the potential to produce improvements in ASR word accuracy in a wide range of SNR. Throughout the dissertation, consideration is given to practical implementation in vehicular environments which resulted in two novel contributions – a LIMA framework which takes advantage of the grounding procedure common to speech dialogue systems, and a resource-saving formulation of frequency-domain spectral subtraction for realisation in field-programmable gate array hardware. The techniques proposed in this dissertation were evaluated using the Australian English In-Car Speech Corpus which was collected as part of this work. This database is the first of its kind within Australia and captures real in-car speech of 50 native Australian speakers in seven driving conditions common to Australian environments.
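The baseline both contributions build on, classical single-channel magnitude spectral subtraction with the noisy phase reused, fits in a few lines. This is a generic textbook sketch, not the LIMA or delay-projection method, and the parameter names are illustrative:

```python
import numpy as np

def spectral_subtract(noisy_frame, noise_mag, alpha=1.0, floor=0.01):
    """Classical magnitude spectral subtraction for one windowed frame.
    Subtracts a noise magnitude estimate from the noisy magnitude and
    resynthesises with the unmodified noisy phase, the simplification
    that the phase-based work above revisits."""
    spectrum = np.fft.rfft(noisy_frame)
    mag = np.abs(spectrum)
    phase = np.angle(spectrum)
    # spectral floor keeps magnitudes non-negative after over-subtraction
    clean_mag = np.maximum(mag - alpha * noise_mag, floor * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(noisy_frame))
```

Because only the magnitude is modified, any error in the noisy phase propagates untouched into the resynthesised frame, which is precisely why phase is irrelevant for intelligibility-oriented tuning but matters when the output feeds ASR feature extraction.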
Abstract:
China is now seen as arguably the next economic giant of the 21st century. From a country formerly closed to the external world, the Chinese market now presents as one of the most lucrative in the world economy. One area that has drawn increasing international interest is education: it has been estimated that by 2020 there will be excess demand for 25 million higher education places that the Chinese tertiary education system cannot meet. Many overseas institutions have developed programs to cater for this immense potential market. In 2000 the Law Faculty of the University of Technology, Sydney (UTS) introduced a new postgraduate program specifically targeting the Chinese market. This paper is a brief assessment of the program: it examines general issues in the pedagogical delivery of programs in LOTE (Language Other Than English) and the use of 'proxies' in the delivery of LOTE programs. The paper concludes that while the UTS program demonstrates that it is feasible to use proxy lecturers or interpreters in the delivery of programs in LOTE, the exercise entails significant problems that can undermine the integrity of such programs.
Abstract:
In this paper we describe the recent development of a low-bandwidth wireless camera sensor network. We propose a simple, yet effective, network architecture which allows multiple cameras to be connected to the network and synchronize their communication schedules. Image compression of greater than 90% is performed at each node running on a local DSP coprocessor, resulting in nodes using 1/8th the energy compared to streaming uncompressed images. We briefly introduce the Fleck wireless node and the DSP/camera sensor, and then outline the network architecture and compression algorithm. The system is able to stream color QVGA images over the network to a base station at up to 2 frames per second. © 2007 IEEE.
Abstract:
This paper describes experiments conducted in order to simultaneously tune 15 joints of a humanoid robot. Two Genetic Algorithm (GA) based tuning methods were developed and compared against a hand-tuned solution. The system was tuned in order to minimise tracking error while at the same time achieving smooth joint motion. Joint smoothness is crucial for the accurate calculation of online ZMP estimation, a prerequisite for a closed-loop dynamically stable humanoid walking gait. Results in both simulation and on a real robot are presented, demonstrating the superior smoothness performance of the GA based methods.
Abstract:
Cloud computing is a new computing paradigm in which applications, data and IT services are provided over the Internet. It has become a major medium for Software as a Service (SaaS) providers to host their SaaS, as it can provide the scalability a SaaS requires. The challenges in the composite SaaS placement process stem from several factors, including the large size of the Cloud network, the SaaS's competing resource requirements, the interactions between SaaS components and the interactions between a SaaS and its data components. However, existing application placement methods for data centres do not consider the placement of a component's data. In addition, a Cloud network is much larger than the data centre networks discussed in existing studies. This paper proposes a penalty-based genetic algorithm (GA) for the composite SaaS placement problem in the Cloud. We believe this is the first attempt to place a SaaS together with its data on a Cloud provider's servers. Experimental results demonstrate the feasibility and the scalability of the GA.
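The penalty idea is simple to state: infeasible placements stay in the GA population but pay a cost proportional to how badly they violate server capacities, so the search can pass through infeasible regions. A minimal sketch of such a fitness function follows; the data structures and the penalty weight are invented for illustration and are not the paper's formulation:

```python
def penalised_cost(placement, capacity, demand, comm_cost, penalty=1000.0):
    """Penalty-based fitness for a composite SaaS placement.
    placement[i] is the server hosting component i. Capacity-violating
    placements are penalised rather than discarded, which is the core
    of a penalty-based GA. Lower is better."""
    # communication cost between components placed on different servers
    cost = sum(comm_cost[i][j]
               for i in range(len(placement))
               for j in range(i + 1, len(placement))
               if placement[i] != placement[j])
    # total resource demand placed on each server
    load = {}
    for comp, srv in enumerate(placement):
        load[srv] = load.get(srv, 0) + demand[comp]
    # penalise capacity overruns in proportion to their size
    violation = sum(max(0, used - capacity[srv]) for srv, used in load.items())
    return cost + penalty * violation
```

A GA minimising this value naturally drifts toward feasible placements, because any capacity overrun dwarfs the communication cost it might save.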
Abstract:
Web service composition is an important problem in web service based systems. It concerns how to build a new value-added web service using existing web services. A web service may have many implementations, all of which have the same functionality but may have different QoS values. Thus, a significant research problem in web service composition is how to select an implementation for each web service such that the composite web service gives the best overall performance. This is the so-called optimal web service selection problem. There may be mutual constraints between some web service implementations. Sometimes when an implementation is selected for one web service, a particular implementation for another web service must also be selected. This is the so-called dependency constraint. Sometimes when an implementation is selected for one web service, a set of implementations for another web service must be excluded from the composition. This is the so-called conflict constraint. Thus, optimal web service selection is a typical constrained combinatorial optimization problem from the computational point of view. This paper proposes a new hybrid genetic algorithm for the optimal web service selection problem. The hybrid genetic algorithm has been implemented and evaluated. The evaluation results show that the hybrid genetic algorithm outperforms two other existing genetic algorithms when the number of web services and the number of constraints are large.
Abstract:
This technical report is concerned with one aspect of environmental monitoring—the detection and analysis of acoustic events in sound recordings of the environment. Sound recordings offer ecologists the potential advantages of cheaper and increased sampling. An acoustic event detection algorithm is introduced that outputs a compact rectangular marquee description of each event. It can disentangle superimposed events, which are a common occurrence during morning and evening choruses. Next, three uses to which acoustic event detection can be put are illustrated. These tasks have been selected because they illustrate quite different modes of analysis: (1) the detection of diffuse events caused by wind and rain, which are a frequent contaminant of recordings of the terrestrial environment; (2) the detection of bird calls using the spatial distribution of their component events; and (3) the preparation of acoustic maps for whole ecosystem analysis. This last task utilises the temporal distribution of events over a daily, monthly or yearly cycle.
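The marquee output described above amounts to drawing one bounding box around each connected region of above-threshold cells in a time-frequency grid. The sketch below is a minimal 4-connected flood fill, not the report's detector, which additionally handles noise removal and the disentangling of superimposed events:

```python
def event_marquees(spectrogram, threshold):
    """Threshold a (time x frequency) grid and return one rectangular
    marquee (t0, t1, f0, f1) per connected region of above-threshold cells."""
    rows, cols = len(spectrogram), len(spectrogram[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if spectrogram[r][c] >= threshold and not seen[r][c]:
                # flood-fill this region, tracking its bounding rectangle
                stack, t0, t1, f0, f1 = [(r, c)], r, r, c, c
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    t0, t1 = min(t0, y), max(t1, y)
                    f0, f1 = min(f0, x), max(f1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and not seen[ny][nx]
                                and spectrogram[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((t0, t1, f0, f1))
    return boxes
```

The compact rectangle per event is what makes the downstream tasks cheap: wind and rain show up as diffuse, wide marquees, bird calls as characteristic spatial clusters of small ones.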
Abstract:
A common scenario in many pairing-based cryptographic protocols is that one argument in the pairing is fixed as a long term secret key or a constant parameter in the system. In these situations, the runtime of Miller's algorithm can be significantly reduced by storing precomputed values that depend on the fixed argument, prior to the input or existence of the second argument. In light of recent developments in pairing computation, we show that the computation of the Miller loop can be sped up by up to 37% if precomputation is employed, with our method being up to 19.5% faster than the previous precomputation techniques.
Abstract:
Silhouettes are common features used by many applications in computer vision. For many of these algorithms to perform optimally, accurately segmenting the objects of interest from the background to extract the silhouettes is essential. Motion segmentation is a popular technique to segment moving objects from the background, however such algorithms can be prone to poor segmentation, particularly in noisy or low contrast conditions. In this paper, the work of [3], which combines motion detection with graph cuts, is extended into two novel implementations that aim to allow greater uncertainty in the output of the motion segmentation, providing a less restricted input to the graph cut algorithm. The proposed algorithms are evaluated on a portion of the ETISEO dataset using hand segmented ground truth data, and an improvement in performance over the motion segmentation alone and the baseline system of [3] is shown.
Abstract:
Nonlinear filter generators are common components used in the keystream generators for stream ciphers and more recently for authentication mechanisms. They consist of a Linear Feedback Shift Register (LFSR) and a nonlinear Boolean function to mask the linearity of the LFSR output. Properties of the output of a nonlinear filter are not well studied. Anderson noted that the m-tuple output of a nonlinear filter with consecutive taps to the filter function is unevenly distributed. Current designs use taps which are not consecutive. We examine m-tuple outputs from nonlinear filter generators constructed using various LFSRs and Boolean functions for both consecutive and uneven (full positive difference sets where possible) tap positions. The investigation reveals that in both cases, the m-tuple output is not uniform. However, consecutive tap positions result in a more biased distribution than uneven tap positions, with some m-tuples not occurring at all. These biased distributions indicate a potential flaw that could be exploited for cryptanalysis.
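The experiment described, counting m-tuples of keystream output for a given LFSR and filter function, is easy to reproduce at toy scale. Everything below (register size, tap positions, the AND filter) is an illustrative choice, not the paper's actual parameters:

```python
from collections import Counter

def mtuple_distribution(init_state, feedback_taps, filter_taps, boolfn, m, length):
    """Run a nonlinear filter generator (Fibonacci LFSR plus a Boolean
    filter function applied to selected stages) for `length` steps and
    count the m-tuples of consecutive keystream bits."""
    state = list(init_state)
    keystream = []
    for _ in range(length):
        keystream.append(boolfn(*(state[i] for i in filter_taps)))
        fb = 0
        for t in feedback_taps:
            fb ^= state[t]          # linear feedback over GF(2)
        state = [fb] + state[:-1]   # shift the register
    return Counter(tuple(keystream[i:i + m]) for i in range(length - m + 1))
```

Even this toy instance shows the effect the paper investigates: an unbalanced filter such as AND skews the m-tuple counts far from uniform, and comparing consecutive against spread-out `filter_taps` reproduces the consecutive-versus-uneven comparison at small scale.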