932 results for Optimized using


Relevance: 20.00%

Abstract:

In cloud computing, resource allocation and scheduling of multiple composite web services is an important challenge. This is especially so in a hybrid cloud, where some free resources may be available from private clouds alongside fee-paying resources from public clouds. Meeting this challenge involves two classical computational problems. One is assigning resources to each of the tasks in the composite web service. The other is scheduling the allocated resources when each resource may be used by more than one task and may be needed at different points in time. In addition, we must consider Quality-of-Service issues, such as execution time and running costs. Existing approaches to resource allocation and scheduling in public clouds and grid computing are not applicable to this new problem. This paper presents a random-key genetic algorithm that solves this new resource allocation and scheduling problem. Experimental results demonstrate the effectiveness and scalability of the algorithm.
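To illustrate the random-key encoding, here is a minimal sketch in Python: chromosomes are vectors of keys in [0, 1) that decode, by sorting, into a task priority order, and each task is then greedily assigned to the resource that finishes it earliest. The task count, resource durations and costs, and the weighted time-plus-cost objective are invented for the example; the paper's actual QoS model and genetic operators may differ.

import random

NUM_TASKS = 8
# (duration, cost per unit time) per resource; free private resources have
# zero cost, fee-paying public resources do not (illustrative values).
RESOURCES = [(1.0, 0.0), (0.6, 2.0), (0.8, 1.0)]

def decode(keys):
    """Sort tasks by their random keys to obtain a scheduling priority order."""
    return sorted(range(NUM_TASKS), key=lambda t: keys[t])

def fitness(keys):
    """Greedy list scheduling: each task goes to the resource that finishes it
    earliest; the objective blends makespan and total running cost."""
    free_at = [0.0] * len(RESOURCES)
    total_cost = 0.0
    for task in decode(keys):
        r = min(range(len(RESOURCES)), key=lambda i: free_at[i] + RESOURCES[i][0])
        duration, rate = RESOURCES[r]
        free_at[r] += duration
        total_cost += duration * rate
    return max(free_at) + 0.5 * total_cost  # assumed QoS weighting

def evolve(pop_size=30, generations=100, elite=0.2, mutant=0.1):
    pop = [[random.random() for _ in range(NUM_TASKS)] for _ in range(pop_size)]
    n_elite, n_mutant = int(elite * pop_size), int(mutant * pop_size)
    for _ in range(generations):
        pop.sort(key=fitness)
        nxt = pop[:n_elite]                                  # keep the elites
        nxt += [[random.random() for _ in range(NUM_TASKS)]  # fresh random mutants
                for _ in range(n_mutant)]
        while len(nxt) < pop_size:                           # biased crossover
            e, o = random.choice(pop[:n_elite]), random.choice(pop[n_elite:])
            nxt.append([e[i] if random.random() < 0.7 else o[i]
                        for i in range(NUM_TASKS)])
        pop = nxt
    return min(pop, key=fitness)

best = evolve()
print("task order:", decode(best), "fitness:", round(fitness(best), 3))

The appeal of the random-key encoding is that every chromosome decodes to a feasible schedule, so crossover and mutation can never produce invalid solutions.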

Relevance: 20.00%

Abstract:

While close-talking microphones give the best signal quality and produce the highest accuracy from current Automatic Speech Recognition (ASR) systems, speech enhanced by a microphone array has been shown to be an effective alternative in a noisy environment. In contrast to close-talking microphones, microphone arrays spare the user discomfort and distraction. For this reason, microphone arrays are popular and have been used in a wide range of applications such as teleconferencing, hearing aids, speaker tracking, and as the front-end to speech recognition systems. With advances in sensor and sensor network technology, there is considerable potential for applications that employ ad-hoc networks of microphone-equipped devices collaboratively as a virtual microphone array. By allowing such devices to be distributed throughout the users' environment, the microphone positions are no longer constrained to traditional fixed geometrical arrangements. This flexibility in data acquisition allows different audio scenes to be captured to give a complete picture of the working environment. In such ad-hoc deployments of microphone sensors, however, the lack of information about the location of devices and active speakers poses technical challenges for array signal processing algorithms, which must be addressed before deployment in real-world applications. While not an ad-hoc sensor network, conditions approaching this have in effect been imposed in recent National Institute of Standards and Technology (NIST) ASR evaluations on distant microphone recordings of meetings: the NIST evaluation data come from multiple sites, each with different and often loosely specified distant microphone configurations. This research investigates how microphone array methods can be applied to ad-hoc microphone arrays. A particular focus is on devising methods that are robust to unknown microphone placements, in order to improve the overall speech quality and recognition performance provided by the beamforming algorithms. In ad-hoc situations, microphone positions and likely source locations are not known, and beamforming must be achieved blindly. There are two general approaches to blindly estimating the steering vector for beamforming. The first is direct estimation without regard to the microphone and source locations. The alternative is to first determine the unknown microphone positions through array calibration methods and then use the traditional geometrical formulation for the steering vector. Building on these two approaches, this thesis proposes a novel clustered approach that groups the microphones and selects clusters based on their proximity to the speaker. Novel experiments demonstrate that the proposed method of automatically selecting a cluster of microphones (i.e., a subarray) located close both to each other and to the desired speech source can in fact provide more robust speech enhancement and recognition than the full array.
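A minimal sketch of the clustered subarray idea follows, with several assumed simplifications: signal energy stands in for proximity to the speaker, cross-correlation is used both to group microphones that are close to each other and to estimate delays blindly, and enhancement is plain delay-and-sum within the selected cluster. The thesis's actual clustering and selection criteria may differ.

import numpy as np

def tdoa(ref, sig):
    """Delay of `sig` relative to `ref`, in samples, via cross-correlation."""
    corr = np.correlate(sig, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

def select_cluster(signals, sim_threshold=0.5):
    """Group microphones whose signals correlate strongly (close to each
    other), then pick the group with the highest energy (closest speaker)."""
    n = len(signals)
    unit = [s / (np.linalg.norm(s) + 1e-12) for s in signals]
    sim = np.array([[np.max(np.abs(np.correlate(a, b, mode="full")))
                     for b in unit] for a in unit])
    unassigned, clusters = set(range(n)), []
    while unassigned:
        seed = unassigned.pop()
        group = {seed} | {j for j in unassigned if sim[seed, j] > sim_threshold}
        unassigned -= group
        clusters.append(sorted(group))
    return max(clusters, key=lambda g: sum(np.sum(signals[i] ** 2) for i in g))

def delay_and_sum(signals, cluster):
    """Blind delay-and-sum beamforming within the selected subarray."""
    ref = signals[cluster[0]].astype(float)
    out = np.zeros_like(ref)
    for i in cluster:
        # Circular shift is adequate for a sketch; real code would zero-pad.
        out += np.roll(signals[i].astype(float), -tdoa(ref, signals[i]))
    return out / len(cluster)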

Relevance: 20.00%

Abstract:

In public venues, crowd size is a key indicator of crowd safety and stability. In this paper we propose a crowd counting algorithm that uses tracking and local features to count the number of people in each group, as represented by a foreground blob segment, so that the total crowd estimate is the sum of the group sizes. Tracking is employed to improve the robustness of the estimate by analysing the history of each group, including splitting and merging events. A simplified ground-truth annotation strategy yields a highly accurate approach with minimal setup requirements.
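The counting-by-groups idea can be sketched as below. The feature set, the linear weights mapping blob features to a group size, and the median smoothing over a track's history are illustrative assumptions rather than the paper's trained model.

FEATURE_WEIGHTS = {"area": 0.004, "perimeter": 0.01, "edges": 0.002}

def group_size(features):
    """Estimate how many people one foreground blob contains."""
    estimate = sum(FEATURE_WEIGHTS[k] * features[k] for k in FEATURE_WEIGHTS)
    return max(1, round(estimate))

class GroupTrack:
    """One tracked group; its history smooths noisy per-frame estimates and
    persists across the splitting/merging events handled by the (omitted)
    tracker."""
    def __init__(self):
        self.history = []

    def update(self, features):
        self.history.append(group_size(features))
        return sorted(self.history)[len(self.history) // 2]  # robust median

def crowd_estimate(frame):
    """Total crowd size for a frame = sum of the tracked group sizes."""
    return sum(track.update(features) for track, features in frame)

track = GroupTrack()
print(crowd_estimate([(track, {"area": 450, "perimeter": 120, "edges": 300})]))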

Relevance: 20.00%

Abstract:

This paper presents an overview of technical solutions for regional-area precise GNSS positioning services, such as in Queensland. The research focuses on the technical and business issues that currently prevent GPS-based local-area Real Time Kinematic (RTK) precise positioning services from operating across larger regional areas in the future and thereby supporting services in agriculture, mining, utilities, surveying, construction, and other sectors. The paper first outlines an overall technical framework proposed to transition the current RTK services to larger-scale coverage. The framework enables mixed use of different reference GNSS receiver types, dual- or triple-frequency, single or multiple systems, to provide RTK correction services to users equipped with any type of GNSS receiver. Next, data processing algorithms appropriate for triple-frequency GNSS signals are reviewed, and some key performance benefits of using triple-carrier signals for reliable RTK positioning over long distances are demonstrated. A server-based RTK software platform is being developed to allow user positioning computations at server nodes instead of on the user's device. An optimal deployment scheme for reference stations across a larger-scale network has been suggested, given restrictions such as inter-station distances, candidate reference locations, and operational modes. For instance, inter-station distances between triple-frequency receivers can be extended to 150 km, double the spacing of dual-frequency receivers in existing RTK network designs.
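The practical payoff of the wider spacing can be seen with a back-of-the-envelope coverage calculation. The sketch below assumes each reference station serves a circular cell of radius equal to half the inter-station distance, uses an approximate figure for the area of Queensland, and infers a 75 km dual-frequency spacing from the statement that 150 km doubles it; the paper's deployment scheme optimises under more realistic constraints.

import math

def stations_needed(area_km2, spacing_km):
    """Stations to tile an area with circular cells of radius spacing/2."""
    cell_area = math.pi * (spacing_km / 2.0) ** 2
    return math.ceil(area_km2 / cell_area)

QUEENSLAND_KM2 = 1_853_000  # approximate
print(stations_needed(QUEENSLAND_KM2, 75))   # dual-frequency:   ~420 stations
print(stations_needed(QUEENSLAND_KM2, 150))  # triple-frequency: ~105 stations

Doubling the inter-station distance roughly quarters the number of reference stations needed, which is the economic case for triple-frequency receivers in a state-wide network.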

Relevance: 20.00%

Abstract:

An Asset Management (AM) life-cycle constitutes a set of processes that align with the development, operation, and maintenance of assets in order to meet the requirements and objectives of the stakeholders of the business. The scope of AM is often broad within an organisation due to the interactions between its internal elements, such as human resources, finance, technology, engineering operation, information technology and management, as well as external elements such as governance and environment. Due to the complexity of AM processes, it has been proposed that process modelling initiatives should be adopted in order to optimise asset management activities. Although organisations adopt AM principles and carry out AM initiatives, most do not document or model their AM processes, let alone enact them (semi-)automatically using a computer-supported system. There is currently a lack of knowledge describing how to model AM processes in a methodical and suitable manner so that the processes are streamlined, optimised, and ready for deployment in a computerised way. This research aims to overcome this deficiency by developing an approach that helps organisations construct AM process models quickly and systematically using the most appropriate techniques, such as workflow technology.

Currently, there is a wealth of information within the individual domains of AM and workflow. Both fields are gaining significant popularity in many industries, fuelling the need for research exploring the possible benefits of their cross-disciplinary application. This research therefore investigates these two domains to exploit the application of workflow to the modelling and execution of AM processes. Specifically, it investigates appropriate methodologies for applying workflow techniques to AM frameworks. One benefit of applying workflow models to AM processes is to adapt to and enable both ad-hoc and evolutionary changes over time. In addition, this can automate an AM process as well as support the coordination and collaboration of the people involved in carrying out the process. A workflow management system (WFMS) can be used to support the design and enactment (i.e. execution) of processes and to cope with changes that occur to a process during its enactment.

To date, little literature documents a systematic approach to modelling the characteristics of AM processes. To obtain a workflow model for AM processes, commonalities and differences between different AM processes need to be identified; this is the fundamental step in developing a sound workflow model for AM processes. Therefore, the first stage of this research focuses on identifying the characteristics of AM processes, especially AM decision-making processes. The second stage reviews a number of contemporary workflow techniques and chooses a suitable technique for application to AM decision-making processes. The third stage develops an intermediate, ameliorated AM decision process definition that improves the current process description and is ready for modelling using the workflow language selected in the previous stage. These lead to the fourth stage, in which a workflow model for an AM decision-making process is developed. The process model is then deployed (semi-)automatically in a state-of-the-art WFMS, demonstrating the benefits of applying workflow technology to the domain of AM.
Given that the information in the AM decision-making process is captured at an abstract level within the scope of this work, the deployed process model can be used as an executable guideline for carrying out an AM decision process in practice. Moreover, it can be used as a vanilla system that, once enriched with detailed information from a specific AM decision-making process (e.g. a building construction or a power plant maintenance), can support the automation of such a process in a more elaborate way.
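To make the enactment idea concrete, the toy engine below fires the tasks of a hypothetical AM decision process once all of their incoming branches have completed. The task names and the AND-split/AND-join routing are invented for illustration; a real deployment would rely on a full WFMS rather than a hand-rolled engine.

from collections import deque

WORKFLOW = {  # task -> tasks enabled on completion
    "identify_need":     ["gather_asset_data"],
    "gather_asset_data": ["analyse_options"],
    "analyse_options":   ["evaluate_cost", "evaluate_risk"],  # AND-split
    "evaluate_cost":     ["make_decision"],
    "evaluate_risk":     ["make_decision"],                   # AND-join below
    "make_decision":     ["review_outcome"],
    "review_outcome":    [],
}
JOIN_INPUTS = {"make_decision": 2}  # tasks that wait for all incoming branches

def enact(start="identify_need"):
    """Fire each task once all of its inputs have arrived."""
    ready, arrived, trace = deque([start]), {}, []
    while ready:
        task = ready.popleft()
        trace.append(task)  # a real WFMS would dispatch a work item here
        for nxt in WORKFLOW[task]:
            arrived[nxt] = arrived.get(nxt, 0) + 1
            if arrived[nxt] == JOIN_INPUTS.get(nxt, 1):
                ready.append(nxt)
    return trace

print(enact())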

Relevance: 20.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image, according to the mean-square criterion as well as the sensitivities of human vision.

To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed, based on the generalized Gaussian distribution. A least-squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. To take advantage of the properties of LVQ and its fast implementation, while accommodating the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
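As one concrete step, the shape parameter of the generalized Gaussian model can be estimated from subband coefficients. The thesis formulates a least-squares fit on a nonlinear function of the shape parameter; shown below is the simpler, widely used moment-matching alternative, which inverts the monotone ratio (E|x|)^2 / E[x^2] by bisection.

import numpy as np
from scipy.special import gamma

def ggd_ratio(beta):
    """(E|x|)^2 / E[x^2] for a generalized Gaussian with shape beta."""
    return gamma(2.0 / beta) ** 2 / (gamma(1.0 / beta) * gamma(3.0 / beta))

def estimate_shape(coeffs, lo=0.1, hi=5.0, iters=60):
    """Bisection on the monotone ratio; beta=2 is Gaussian, beta=1 Laplacian."""
    target = np.mean(np.abs(coeffs)) ** 2 / np.mean(coeffs ** 2)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(estimate_shape(np.random.randn(100_000)))  # Gaussian data -> ~2.0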
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented.

The quality of the reconstructed images is important for reliable identification, so enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
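As one concrete piece of the feature extraction stage, the sketch below estimates local ridge dominant directions with the standard gradient double-angle method. This is a generic orientation-field estimator, not the thesis's specific enhancement and ridge extraction algorithm.

import numpy as np

def orientation_field(img, block=16):
    """Dominant ridge direction per block, in radians."""
    gy, gx = np.gradient(img.astype(float))  # row (y) and column (x) gradients
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            bx = gx[i:i + block, j:j + block]
            by = gy[i:i + block, j:j + block]
            # Averaging the doubled angle stops opposite gradients cancelling.
            num = 2.0 * np.sum(bx * by)
            den = np.sum(bx ** 2 - by ** 2)
            # Ridges run perpendicular to the average gradient direction.
            theta[i // block, j // block] = 0.5 * np.arctan2(num, den) + np.pi / 2
    return theta

print(orientation_field(np.random.rand(64, 64)).shape)  # (4, 4) blocks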