333 results for feature selection
Abstract:
Train scheduling is a complex and time-consuming task of vital importance. To schedule trains more accurately and efficiently than current techniques permit, a novel hybrid job shop approach has been proposed and implemented. Unique characteristics of train scheduling are first incorporated into a disjunctive graph model of train operations. A constructive algorithm that utilises this model is then developed. The constructive algorithm is a general procedure that builds a schedule using insertion, backtracking and dynamic route selection mechanisms. It provides significant search capability and is valid for any objective criterion. Simulated Annealing and Local Search meta-heuristic improvement algorithms are also adapted and extended. An important feature of these approaches is a new compound perturbation operator, composed of many unitary moves, that allows trains to be shifted feasibly and more easily within the solution. A numerical investigation and case study are provided and demonstrate that high quality solutions are obtainable for real-sized applications.
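The abstract gives no implementation detail, so the following is only a minimal Python sketch of the general idea: a simulated-annealing loop whose perturbation is a compound move built from several unitary shifts. The schedule representation (a list of departure times) and the `cost` function are hypothetical stand-ins, not the authors' disjunctive graph model.

```python
import math
import random

def compound_perturb(schedule, n_moves=3, max_shift=5):
    """Apply several unitary moves (small departure-time shifts) as one
    compound perturbation, so a train can move further in a single step."""
    candidate = list(schedule)
    for _ in range(n_moves):
        i = random.randrange(len(candidate))
        candidate[i] += random.randint(-max_shift, max_shift)  # one unitary move
    return candidate

def anneal(schedule, cost, t0=100.0, alpha=0.999, iters=10_000):
    """Generic simulated annealing; `cost` may be any objective criterion."""
    best = current = schedule
    t = t0
    for _ in range(iters):
        candidate = compound_perturb(current)
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= alpha  # geometric cooling
    return best

# Toy usage: minimise total absolute deviation from the original timetable.
print(anneal([0, 10, 20, 30], cost=lambda s: sum(abs(x) for x in s)))
```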
Abstract:
Acoustically, vehicles are extremely noisy environments, and as a consequence audio-only in-car voice recognition systems perform very poorly. Since the visual modality is immune to acoustic noise, using the visual lip information from the driver is a viable strategy for circumventing this problem. However, implementing such an approach requires a system able to accurately locate and track the driver's face and facial features in real time. In this paper we present such an approach using the Viola-Jones algorithm. Our results show that the Viola-Jones approach is a suitable method for locating and tracking the driver's lips despite variability in illumination and head pose.
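The paper's own pipeline is not reproduced in the abstract. As a hedged illustration only, OpenCV ships Viola-Jones cascade classifiers, and a face-then-mouth search might look like the sketch below; the cascade files, search region and parameters here are generic stand-ins, not the authors' trained detectors.

```python
import cv2

# Viola-Jones detectors bundled with OpenCV (stand-ins for the paper's cascades).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
mouth_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def locate_lips(frame):
    """Detect the driver's face, then search its lower third for the mouth."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        lower_face = gray[y + 2 * h // 3 : y + h, x : x + w]
        mouths = mouth_cascade.detectMultiScale(lower_face, 1.1, 10)
        for (mx, my, mw, mh) in mouths:
            # Return the lip region in full-frame coordinates.
            return (x + mx, y + 2 * h // 3 + my, mw, mh)
    return None

frame = cv2.imread("driver.jpg")  # hypothetical input frame
if frame is not None:
    print(locate_lips(frame))
```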
Abstract:
Biased estimation has the advantage of reducing the mean squared error (MSE) of an estimator. The question of interest is how biased estimation affects model selection. In this paper, we introduce biased estimation to a range of model selection criteria. Specifically, we analyze the performance of the minimum description length (MDL) criterion based on biased and unbiased estimation and compare it against modern model selection criteria such as Kay's conditional model order estimator (CME), the bootstrap and the more recently proposed hook-and-loop resampling-based model selection. The advantages and limitations of the considered techniques are discussed. The results indicate that, in some cases, biased estimators can slightly improve the selection of the correct model. We also give an example for which the CME with an unbiased estimator fails but regains its power when a biased estimator is used.
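For readers unfamiliar with the baseline criterion, here is a minimal sketch of MDL model order selection for a polynomial-in-Gaussian-noise example. The two-part form n/2 · log(RSS/n) + k/2 · log(n) is the standard one; the paper's actual signal model and biased estimators are not specified in the abstract.

```python
import numpy as np

def mdl_order(y, x, max_order=8):
    """Select the number of parameters k minimising the two-part MDL:
    n/2 * log(RSS/n) + k/2 * log(n), with least-squares (ML) fits."""
    n = len(y)
    scores = []
    for k in range(1, max_order + 1):
        coeffs = np.polyfit(x, y, k - 1)               # polynomial of degree k-1
        rss = np.sum((y - np.polyval(coeffs, x)) ** 2)  # residual sum of squares
        scores.append(0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n))
    return int(np.argmin(scores)) + 1

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.3, x.size)  # true k = 3
print(mdl_order(y, x))  # typically prints 3
```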
Abstract:
The recently proposed data-driven background dataset refinement technique provides a means of selecting an informative background for support vector machine (SVM)-based speaker verification systems. This paper investigates the characteristics of the impostor examples in such highly informative background datasets. Data-driven dataset refinement individually evaluates the suitability of candidate impostor examples for the SVM background prior to selecting the highest-ranking examples as a refined background dataset. Further, the characteristics of the refined dataset were analysed to investigate the desired traits of an informative SVM background. The most informative examples of the refined dataset were found to consist of large amounts of active speech and distinctive language characteristics. The data-driven refinement technique was shown to filter the set of candidate impostor examples to produce a more disperse representation of the impostor population in the SVM kernel space, thereby reducing the number of redundant and less informative examples in the background dataset. Furthermore, data-driven refinement was shown to provide performance gains when applied to the difficult task of refining a small candidate dataset that was mismatched to the evaluation conditions.
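The abstract does not define the suitability ranking, so the sketch below uses one plausible heuristic — counting how often each candidate becomes a support vector across development speaker models — implemented with scikit-learn's SVC. Every name, representation and threshold here is an assumption, not the published method.

```python
import numpy as np
from sklearn.svm import SVC

def refine_background(candidates, dev_speakers, keep=50):
    """Rank candidate impostor examples by how often they are chosen as
    support vectors across development speaker models, then keep the top N.
    candidates: (n, d) array of utterance vectors; dev_speakers: list of
    (m, d) arrays, one per development speaker."""
    votes = np.zeros(len(candidates))
    for speaker in dev_speakers:
        X = np.vstack([speaker, candidates])
        y = np.concatenate([np.ones(len(speaker)), -np.ones(len(candidates))])
        svm = SVC(kernel="linear").fit(X, y)
        # Indices of support vectors that came from the candidate pool.
        sv = svm.support_[svm.support_ >= len(speaker)] - len(speaker)
        votes[sv] += 1
    order = np.argsort(-votes)
    return candidates[order[:keep]]

rng = np.random.default_rng(0)
refined = refine_background(
    rng.normal(size=(200, 20)),
    [rng.normal(loc=1.0, size=(30, 20)) for _ in range(5)])
print(refined.shape)  # (50, 20)
```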
Abstract:
We investigate whether characteristics of the home country capital environment, such as information disclosure and investor rights protection, continue to affect ADRs cross-listed in the U.S. Using microstructure measures as proxies for adverse selection, we find that characteristics of the home markets continue to be relevant, especially for emerging market firms. Less transparent disclosure, poorer protection of investor rights and weaker legal institutions are associated with higher levels of information asymmetry. Developed market firms appear to be affected by whether home business laws are of common law or civil law origin. Our findings contribute to the bonding literature. They suggest that cross-listing in the U.S. should not be viewed as a substitute for improvement in the quality of local institutions, and that attention must be paid to improving investor protection in order to achieve the full benefits of improved disclosure. Improvement in the domestic capital market environment can attract more investors even for U.S. cross-listed firms.
Abstract:
The programming and retasking of sensor nodes could benefit greatly from the use of a virtual machine (VM), since byte code is compact, can be loaded on demand, and can be interpreted on a heterogeneous set of devices. The challenge is to provide good programming tools while keeping the footprint of the virtual machine small enough to meet the memory constraints of typical WSN platforms. To this end we propose Darjeeling, a virtual machine modelled after the Java VM and capable of executing a substantial subset of the Java language, but designed specifically to run on 8- and 16-bit microcontrollers with 2-10 KB of RAM. The Darjeeling VM uses a 16- rather than a 32-bit architecture, which is more efficient on the targeted platforms. Darjeeling features a novel memory organisation with strict separation of reference from non-reference types, which eliminates the need for run-time type inspection in the underlying compacting garbage collector. Darjeeling uses a linked stack model that provides lightweight threads and supports synchronisation. The VM has been implemented on three different platforms and was evaluated with micro benchmarks and a real-world application. The latter includes a pure Java implementation of the collection tree routing protocol, conveniently programmed as a set of cooperating threads, and a reimplementation of an existing environmental monitoring application. The results show that Darjeeling is a viable solution for deploying large-scale heterogeneous sensor networks.
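Darjeeling itself is implemented in C on microcontrollers; purely to illustrate the split reference/non-reference design described above, here is a hedged Python sketch of an operand layout in which the garbage collector scans reference slots only, so values carry no run-time type tags. All names are illustrative, not Darjeeling's actual structures.

```python
class Frame:
    """Operand state split in two: the GC walks ref_stack only, so
    neither stack needs per-value run-time type tags."""
    def __init__(self):
        self.ref_stack = []   # object references only (GC roots)
        self.int_stack = []   # raw integer values only

    def push_ref(self, obj):
        self.ref_stack.append(obj)

    def push_int(self, value):
        self.int_stack.append(value & 0xFFFF)  # 16-bit arithmetic width

def gc_roots(frames):
    """Collect roots by walking reference stacks alone; the integer
    stacks never need inspection by the collector."""
    for frame in frames:
        yield from frame.ref_stack

# Toy usage: one frame holding a reference and an integer.
f = Frame()
f.push_ref(object())
f.push_int(70_000)            # truncated to 16 bits
print(list(gc_roots([f])), f.int_stack)
```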
Abstract:
Purpose: As part of a comprehensive research study looking at implementing PPPs, interviews with experienced researchers were conducted to canvass their views on private sector involvement in public works projects. Design/methodology/approach: Amongst these interviews, five were conducted with academics from Hong Kong and Australia, and two with Legislative Councillors of the Hong Kong Special Administrative Region (HKSAR) government. Findings: The interview findings show that both the Hong Kong and Australian interviewees had previously conducted some kind of research in the field of PPP. The interviewees highlighted that "Different risk profiles" and "Private sector more innovative/efficient" were the main differences between projects procured by PPP and those procured traditionally. Another difference is risk transfer: in a PPP arrangement the public sector passes a substantial amount of the project risks to the private sector, whereas in a traditional case the public sector takes the largest responsibility for bearing these risks. The private sector also tends to be more efficient and innovative than the public sector, and this expertise is often reflected in PPP projects. The interviewees agreed that the key performance indicators for PPP projects were unique to the individual project. The critical success factors mentioned by both groups of interviewees included "Transparent process", "Project dependent" and "Market need". Because PPP projects tend to be large-scale, costly projects, adequate transparency in the process is necessary to demonstrate that a fair selection and tendering process has been conducted. A market need for the project is also important to ensure that the project will be financially secure and that the private sector can make a reasonable profit to cover its project expenditure. Originality/value: The findings from this study have enabled a comparative analysis of the views of researchers in two quite different jurisdictions. With the growing popularity of PPP projects, it is believed that the results presented in this paper will be of interest to the industry at large.
Abstract:
Various piezoelectric polymers based on polyvinylidene fluoride (PVDF) are of interest for large-aperture space-based telescopes. Dimensional adjustments of adaptive polymer films depend on charge deposition and require a detailed understanding of the piezoelectric material responses, which are expected to deteriorate owing to exposure to strong vacuum UV, γ-rays, X-rays, energetic particles and atomic oxygen. We have investigated the degradation of PVDF and its copolymers under various stress environments detrimental to reliable operation in space. Initial radiation aging studies have shown complex material changes with lowered Curie temperatures and melting points, morphological transformations and significant crosslinking, but little influence on the piezoelectric d33 constants. Complex aging processes have also been observed in accelerated temperature environments inducing annealing phenomena and cyclic stresses. The results suggest that poling and chain orientation are negatively affected by radiation and temperature exposure. A framework for dealing with these complex material qualification issues and overall system survivability predictions in low earth orbit conditions has been established. It allows for improved material selection, feedback for manufacturing and processing, and material optimization/stabilization strategies, and it provides guidance on alternative materials.
Abstract:
The Queensland University of Technology (QUT) allows the presentation of theses for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of ten published/submitted papers and book chapters, of which nine have been published and one is under review. The project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of investigating multilevel topologies for high quality and high power applications, with specific emphasis on renewable energy systems. The rapid evolution of renewable energy within the last several years has resulted in the design of efficient power converters suitable for medium and high-power applications such as wind turbine and photovoltaic (PV) systems. Today, the industrial trend is moving away from heavy and bulky passive components towards power converter systems that use more and more semiconductor elements controlled by powerful processor systems. However, it is hard to connect traditional converters to high and medium voltage grids, as a single power switch cannot withstand such high voltages. For these reasons, a new family of multilevel inverters has appeared as a solution for working with higher voltage levels. Besides this important feature, multilevel converters have the capability to generate stepped waveforms. Consequently, in comparison with conventional two-level inverters, they present lower switching losses, lower voltage stress across loads, lower electromagnetic interference (EMI) and higher quality output waveforms. These properties enable the connection of renewable energy sources directly to the grid without using expensive, bulky, heavy line transformers. Additionally, they minimise the size of the passive filter and increase the durability of electrical devices. However, multilevel converters have only been utilised in very particular applications, mainly due to structural limitations and the high cost and complexity of the multilevel converter system and its control. New developments in the fields of power semiconductor switches and processors will favour multilevel converters for many other fields of application. The main application for the multilevel converter presented in this work is the front-end power converter in renewable energy systems. Diode-clamped and cascade converters are the most common types of multilevel converters, widely used in different renewable energy system applications. However, some drawbacks – such as capacitor voltage imbalance, component count, and complexity of the control system – still exist, and these are investigated in the framework of this thesis. Various simulations using software simulation tools are undertaken and used to study different cases. The feasibility of the developments is underlined by a series of experimental results. This thesis is divided into two main sections. The first section focuses on solving capacitor voltage imbalance for a wide range of applications, and on decreasing the complexity of the control strategy on the inverter side. The idea of using sharing switches at the output structure of the DC-DC front-end converter is proposed to balance the series DC link capacitors. A new family of multi-output DC-DC converters is proposed for renewable energy systems connected to the DC link voltage of diode-clamped converters.
The main objective of this type of converter is to share the total output voltage into several series voltage levels using sharing switches. This solves the problems associated with capacitor voltage imbalance in diode-clamped multilevel converters. These converters adjust the variable and unregulated DC voltage generated by renewable energy systems (such as PV) to the desired series multiple voltage levels at the inverter DC side. A multi-output boost (MOB) converter, with one inductor and series output voltages, is presented. This converter is suitable for renewable energy systems based on diode-clamped converters because it boosts the low output voltage and provides the series capacitors at the output side. A simple control strategy using cross voltage control with an internal current loop is presented to obtain the desired voltage levels at the output. The proposed topology and control strategy are validated by simulation and hardware results. Using the idea of voltage sharing switches, circuit structures for different topologies of multi-output DC-DC converters – or multi-output voltage sharing (MOVS) converters – have been proposed. In order to verify the feasibility of this topology and its application, steady-state and dynamic analyses have been carried out. Simulations and experiments using the proposed control strategy have verified the mathematical analysis. The second part of this thesis addresses the second problem of multilevel converters: the need to improve their quality with minimum cost and complexity. This involves utilising asymmetrical multilevel topologies instead of conventional multilevel converters, which can increase the quality of output waveforms with a minimum number of components. It also allows for a reduction in the cost and complexity of the system while maintaining the same output quality, or for an increase in quality while maintaining the same cost and complexity. Therefore, the asymmetrical configuration of two common types of multilevel converters – diode-clamped and cascade converters – is investigated. As well as addressing the maximisation of output voltage resolution, some technical issues – such as adjacent switching vectors – should be taken into account in asymmetrical multilevel configurations to keep the total harmonic distortion (THD) and switching losses to a minimum. Thus, the asymmetrical diode-clamped converter is proposed. An appropriate asymmetrical DC link arrangement is presented for four-level diode-clamped converters that keeps adjacent switching vectors. In this way, five-level inverter performance is achieved with the same level of complexity as a four-level inverter. Dealing with the capacitor voltage imbalance problem in asymmetrical diode-clamped converters has inspired the proposal of two different DC-DC topologies with a suitable control strategy. A Triple-Output Boost (TOB) converter and a Boost 3-Output Voltage Sharing (Boost-3OVS) converter connected to the four-level diode-clamped converter are proposed to arrange the proposed asymmetrical DC link for high modulation indices and unity power factor. Cascade converters have shown their abilities and strengths in medium and high power applications. Using asymmetrical H-bridge inverters, more voltage levels can be generated in the output voltage with the same number of components as symmetrical converters.
The concept of cascading multilevel H-bridge cells is used to propose a fifteen-level cascade inverter using a four-level H-bridge symmetrical diode-clamped converter cascaded with classical two-level H-bridge inverters. A DC voltage ratio between cells is presented to obtain the maximum number of voltage levels in the output voltage, with adjacent switching vectors between all possible voltage levels; this can minimise the switching losses. This structure saves five isolated DC sources and twelve switches in comparison with conventional cascade converters built from series two-level H-bridge inverters. To further increase the quality of the presented hybrid topology with a minimum number of components, a new cascade inverter is verified by cascading an asymmetrical four-level H-bridge diode-clamped inverter. An inverter with nineteen-level performance was achieved, synthesising more voltage levels with lower voltage and current THD than a symmetrical diode-clamped inverter with the same configuration and an equivalent number of power components. Two different predictive current control methods for switching state selection are proposed to minimise either the losses or the voltage THD of hybrid converters. High voltage spikes at switching instants in the experimental results, together with investigation of the diode-clamped inverter structure, raised another problem associated with high-level, high-voltage multilevel converters. Power switching components with fast switching, combined with hard-switched converters, produce high di/dt during turn-off. Thus, the stray inductance of interconnections becomes an important issue and raises overvoltage and EMI problems that scale with the number of components. A planar busbar is a good candidate for reducing interconnection inductance in high power inverters compared with cables. The effect of different transient current loops on the busbar physical structure of high-voltage, high-level diode-clamped converters is highlighted. Design considerations for a proper planar busbar are also presented to optimise the overall design of diode-clamped converters.
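As a hedged numerical illustration of the abstract's central claim — that more voltage levels yield higher quality (lower THD) stepped waveforms — the following Python sketch synthesises idealised stepped waveforms by nearest-level rounding of a sine reference and compares their voltage THD. This toy modulation is not any of the thesis' control schemes.

```python
import numpy as np

def stepped_wave(levels, n=4096):
    """Approximate one sine period by the nearest of `levels` equally
    spaced voltage steps, as an idealised multilevel inverter output."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    ref = np.sin(t)
    steps = np.linspace(-1, 1, levels)
    return steps[np.argmin(np.abs(ref[:, None] - steps[None, :]), axis=1)]

def thd(signal):
    """Voltage THD: RMS of all harmonics divided by the fundamental."""
    spectrum = np.abs(np.fft.rfft(signal))
    return np.sqrt(np.sum(spectrum[2:] ** 2)) / spectrum[1]

for m in (2, 4, 5, 15, 19):
    print(m, "levels:", round(100 * thd(stepped_wave(m)), 2), "% THD")
```

Running this shows THD falling steadily as the level count rises from two toward the fifteen- and nineteen-level cases discussed above.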
Abstract:
Web service composition is an important problem in web service based systems. It concerns how to build a new value-added web service using existing web services. A web service may have many implementations, all of which have the same functionality but may have different QoS values. Thus, a significant research problem in web service composition is how to select an implementation for each of the web services such that the composite web service gives the best overall performance. This is the so-called optimal web service selection problem. There may be mutual constraints between some web service implementations. Sometimes when an implementation is selected for one web service, a particular implementation for another web service must also be selected; this is the so-called dependency constraint. Sometimes when an implementation for one web service is selected, a set of implementations for another web service must be excluded from the composition; this is the so-called conflict constraint. Thus, from the computational point of view, optimal web service selection is a typical constrained combinatorial optimization problem. This paper proposes a new hybrid genetic algorithm for the optimal web service selection problem. The hybrid genetic algorithm has been implemented and evaluated. The evaluation results show that the hybrid genetic algorithm outperforms two other existing genetic algorithms when the number of web services and the number of constraints are large.
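The abstract does not detail the hybrid algorithm, so the sketch below is only a plain genetic algorithm with penalty-based handling of dependency and conflict constraints; the problem data, penalty weight and operators are all hypothetical.

```python
import random

random.seed(1)
N_SERVICES, N_IMPLS = 8, 5
# Hypothetical QoS score for implementation i of service s.
QOS = [[random.random() for _ in range(N_IMPLS)] for _ in range(N_SERVICES)]
# ((a, i), (b, j)): choosing impl i of service a forces impl j of service b.
DEPENDS = [((0, 1), (2, 3))]
# ((a, i), (b, j)): choosing impl i of service a excludes impl j of service b.
CONFLICTS = [((1, 0), (3, 2))]

def fitness(chrom):
    """Aggregate QoS minus penalties for violated constraints."""
    score = sum(QOS[s][i] for s, i in enumerate(chrom))
    for (a, i), (b, j) in DEPENDS:
        if chrom[a] == i and chrom[b] != j:
            score -= 10.0
    for (a, i), (b, j) in CONFLICTS:
        if chrom[a] == i and chrom[b] == j:
            score -= 10.0
    return score

def evolve(pop_size=40, gens=200, pm=0.1):
    pop = [[random.randrange(N_IMPLS) for _ in range(N_SERVICES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SERVICES)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pm:               # point mutation
                child[random.randrange(N_SERVICES)] = random.randrange(N_IMPLS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```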
Abstract:
Wide-angle images exhibit significant distortion, for which existing scale-space detectors such as the scale-invariant feature transform (SIFT) are inappropriate. The scale-space images required for feature detection are correctly obtained through the convolution of the image, mapped to the sphere, with the spherical Gaussian. A new visual key-point detector based on this principle is developed, and several computational approaches to the convolution are investigated in both the spatial and frequency domains. In particular, a close approximation is developed that has computation time comparable to conventional SIFT but with improved matching performance. Results are presented for monocular wide-angle outdoor image sequences obtained using fisheye and equiangular catadioptric cameras. We evaluate the overall matching performance (recall versus 1-precision) of these methods compared to conventional SIFT. We also demonstrate the use of the technique for variable frame-rate visual odometry and its application to place recognition.
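The evaluation metric mentioned above is easy to state precisely. Here is a small, self-contained sketch of a recall versus 1-precision curve computed from confidence-ordered match labels; all inputs are hypothetical.

```python
def recall_vs_1_precision(matches, n_correspondences):
    """matches: correctness flags for accepted matches, ordered by
    descending match confidence; n_correspondences: number of ground-truth
    correspondences between the image pair. Returns (1-precision, recall)
    points, one per accepted match."""
    curve = []
    tp = fp = 0
    for is_correct in matches:
        tp += is_correct
        fp += not is_correct
        recall = tp / n_correspondences
        one_minus_precision = fp / (tp + fp)
        curve.append((one_minus_precision, recall))
    return curve

# Toy usage: five accepted matches against ten ground-truth correspondences.
print(recall_vs_1_precision([True, True, False, True, False], 10))
```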
Abstract:
Background: Some dialysis patients fail to comply with their fluid restriction, causing problems due to volume overload. These patients sometimes blame excessive thirst. There has been little work in this area and no work documenting polydipsia among peritoneal dialysis (PD) patients. Methods: We measured motivation to drink and fluid consumption in 46 haemodialysis (HD) patients, 39 PD patients and 42 healthy controls (HC) using a modified palmtop computer to collect visual analogue scores at hourly intervals. Results: Mean thirst scores were markedly depressed on the dialysis day (day 1) for HD (P < 0.0001). The profile for day 2 was similar to that of HC. PD generated consistently higher scores than HD day 1 and HC (P = 0.01 vs. HC and P < 0.0001 vs. HD day 1). Reported mean daily water consumption was similar for HD and PD, with both significantly less than HC (P < 0.001 for both). However, measured fluid losses were similar for PD and HC whilst those for HD were lower (P < 0.001 for both), suggesting that the PD group may have underestimated their fluid intake. Conclusion: Our results indicate that HD causes a protracted period of reduced thirst, but that this population's thirst perception is similar to HC on the interdialytic day despite a reduced fluid intake. In contrast, the PD group recorded high thirst scores throughout the day and were apparently less compliant with their fluid restriction. This is potentially important because the volume status of PD patients influences their survival.
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made, and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
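As a hedged illustration of the coefficient-modelling step described above, the sketch below estimates a generalized Gaussian shape parameter for subband coefficients by the standard moment-matching ratio E|x|/sqrt(E[x^2]) = Γ(2/β)/sqrt(Γ(1/β)Γ(3/β)), rather than the thesis' least-squares formulation, which the abstract does not spell out (assumes NumPy and SciPy).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def gg_shape(coeffs):
    """Estimate the generalized Gaussian shape parameter beta by matching
    the sample ratio E|x| / sqrt(E[x^2]) to its closed-form value
    Gamma(2/b) / sqrt(Gamma(1/b) * Gamma(3/b))."""
    x = np.asarray(coeffs, dtype=float)
    r = np.mean(np.abs(x)) / np.sqrt(np.mean(x ** 2))
    f = lambda b: gamma(2 / b) / np.sqrt(gamma(1 / b) * gamma(3 / b)) - r
    return brentq(f, 0.05, 10.0)   # beta = 2 -> Gaussian, beta = 1 -> Laplacian

rng = np.random.default_rng(0)
print(round(gg_shape(rng.normal(size=100_000)), 2))   # ~2.0
print(round(gg_shape(rng.laplace(size=100_000)), 2))  # ~1.0
```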