363 results for Selection index

at Queensland University of Technology - ePrints Archive


Relevance: 70.00%

Publisher:

Abstract:

Cane fibre content has increased over the past ten years, and some of that increase can be attributed to new varieties selected for release. This paper reviews existing methods for quantifying the fibre characteristics of a variety, including fibre content and fibre quality measurements: shear strength, impact resistance and short fibre content. The variety selection process is presented, and it is noted that fibre content has zero weighting in the current selection index. An updated variety selection approach is proposed, potentially replacing the existing fibre-related selection process. This alternative approach uses a more complex mill-area-level model that accounts for harvesting, transport and processing equipment, taking into account capacity, efficiency and operational impacts, along with the end use for the bagasse. The approach ultimately determines a net economic value for the variety. The methodology lends itself to identifying the fibre properties that have a significant impact on economic value, so that variety tests can better target the critical properties. A low-pressure compression test is proposed for assessing the impact of a variety on milling capacity. NIR methodology is proposed as a technology for more rapid assessment of fibre properties, and hence the opportunity to test more comprehensively for fibre impacts at an earlier stage of variety development.
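The weighted-index idea in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' mill-area economic model: the trait names, weights and values are all hypothetical, and only the notion of a linear selection index (in which fibre content currently carries zero weight) is shown.

```python
# Illustrative sketch only: a variety selection index as a weighted sum of
# trait values. Trait names and economic weights below are hypothetical.

def selection_index(traits, weights):
    """Weighted linear selection index over measured variety traits."""
    return sum(weights[name] * value for name, value in traits.items())

# Hypothetical weights; fibre content carries zero weight in the current
# index, as the paper reports, versus a nonzero weight in a revised index.
current_weights = {"sugar_content": 1.0, "yield": 0.6, "fibre_content": 0.0}
proposed_weights = {"sugar_content": 1.0, "yield": 0.6, "fibre_content": -0.3}

variety = {"sugar_content": 14.0, "yield": 85.0, "fibre_content": 16.0}

print(selection_index(variety, current_weights))   # fibre ignored: 65.0
print(selection_index(variety, proposed_weights))  # fibre penalised
```

A net-economic-value approach, as proposed in the paper, would replace these fixed weights with values derived from a mill-area model of capacity, efficiency and bagasse end use.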

Relevance: 30.00%

Publisher:

Abstract:

Aims: Dietary glycemic index (GI) and glycemic load (GL) have been associated with risk of chronic diseases, yet limited research exists on patterns of consumption in Australia. Our aims were to investigate glycemic carbohydrate in a population of older women, identify the major contributing food sources, and determine low, moderate and high ranges. Methods: Subjects were 459 Brisbane women aged 42-81 years participating in the Longitudinal Assessment of Ageing in Women. Diet history interviews were used to assess usual diet, and results were analysed for energy and macronutrients using the FoodWorks dietary analysis program combined with a customised GI database. Results: Mean±SD dietary GI was 55.6±4.4% and mean dietary GL was 115±25. A low GI in this population was ≤52.0, corresponding to the lowest quintile of dietary GI, and a low GL was ≤95. GI showed a quadratic relationship with age (P=0.01), with a slight decrease observed in women in their 60s relative to younger or older women. GL decreased linearly with age (P<0.001). Bread was the main contributor to carbohydrate and dietary GL (17.1% and 20.8%, respectively), followed by fruit (15.5% and 14.2%), and dairy for carbohydrate (9.0%) or breakfast cereals for GL (8.9%). Conclusions: In this population, dietary GL decreased with increasing age; however, this was likely a result of higher energy intakes in younger women. A focus on careful selection of lower-GI items within the bread and breakfast cereal food groups would be an effective strategy for decreasing dietary GL in this population of older women.
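The glycemic-load calculation that this kind of study relies on is standard: for each food, GL = GI/100 × available carbohydrate in grams, summed over the day's intake. A minimal sketch, with illustrative foods and values (not from the study):

```python
# Standard glycemic-load calculation: GL = GI/100 x carbohydrate (g) per
# food, summed over all foods eaten. Values below are illustrative only.

def glycemic_load(items):
    """items: list of (gi_percent, carb_grams) tuples for foods eaten."""
    return sum(gi / 100.0 * carbs for gi, carbs in items)

day = [
    (70, 30.0),  # bread: GI 70, 30 g available carbohydrate
    (40, 25.0),  # fruit
    (80, 20.0),  # breakfast cereal
]
print(glycemic_load(day))  # 21 + 10 + 16 = 47.0
```

Swapping the bread for a lower-GI loaf (say GI 50) drops the day's GL by 6 units, which is the kind of substitution strategy the conclusion recommends.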

Relevance: 30.00%

Publisher:

Abstract:

The traditional search method for model-order selection in linear regression is a nested, full-parameter-set search over the candidate orders, which we call full-model order selection. In contrast, a model-selection method searches for the best sub-model within each order. In this paper, we propose using the model-selection search method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed search method achieves better accuracy than the traditional one, especially at low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). We also show that for some models the performance of the bootstrap-based criterion improves significantly under the proposed partial-model search.

Index Terms: model-order estimation, model selection, information-theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike's information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, as within the same model at a given order one can find examples for which a given method performs well or fails [6, 8]. Our aim is to improve the performance of model-order selection criteria at low SNR by considering a search procedure that performs not only the full-model order search but also a partial-model search within each order. Understandably, the improvement in model-order estimation performance comes at the expense of additional computational complexity.
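The partial-model search described above can be sketched as follows: instead of fitting only the full nested parameter set at each order, fit every regressor subset of each size and score it with a criterion. This is an illustrative sketch, not the paper's procedure: AIC is used in the common form n·log(RSS/n) + 2k, the data are a noiseless toy example, and a small floor on RSS avoids log(0).

```python
import itertools
import math
import numpy as np

# Sketch of partial-model order selection: score every regressor subset of
# every size with AIC and return the best subset. Illustrative only; the
# paper also studies bootstrap-based criteria, which are omitted here.

def aic(rss, n, k):
    # Common AIC form for least squares; RSS floored to avoid log(0)
    # in this noiseless toy example.
    return n * math.log(max(rss, 1e-12) / n) + 2 * k

def partial_model_order_selection(X, y, max_order):
    n, p = X.shape
    best = None  # (aic score, subset)
    for k in range(1, max_order + 1):
        for subset in itertools.combinations(range(p), k):
            cols = list(subset)
            coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = float(np.sum((y - X[:, cols] @ coef) ** 2))
            score = aic(rss, n, k)
            if best is None or score < best[0]:
                best = (score, subset)
    return best[1]

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
y = 2.0 * X[:, 0] + 3.0 * X[:, 2]   # true model uses regressors 0 and 2 only
print(partial_model_order_selection(X, y, max_order=4))  # (0, 2)
```

A full-model search over this data could only choose among the nested sets {0}, {0,1}, {0,1,2}, {0,1,2,3}; the partial search also considers {0,2}, the true sub-model, which is why it can be more accurate at a given order.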

Relevance: 30.00%

Publisher:

Abstract:

A classical condition for fast learning rates is the margin condition, first introduced by Mammen and Tsybakov. We tackle in this paper the problem of adaptivity to this condition in the context of model selection, in a general learning framework. Actually, we consider a weaker version of this condition that allows one to take into account that learning within a small model can be much easier than within a large one. Requiring this “strong margin adaptivity” makes the model selection problem more challenging. We first prove, in a general framework, that some penalization procedures (including local Rademacher complexities) exhibit this adaptivity when the models are nested. Contrary to previous results, this holds with penalties that only depend on the data. Our second main result is that strong margin adaptivity is not always possible when the models are not nested: for every model selection procedure (even a randomized one), there is a problem for which it does not demonstrate strong margin adaptivity.
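The margin condition referenced above is commonly stated (following Mammen and Tsybakov) as follows, where η(x) = P(Y = 1 | X = x) is the regression function of the binary classification problem; this statement is standard background, not a formula from the paper itself:

```latex
% Mammen--Tsybakov margin condition with exponent \alpha:
% the probability mass near the decision boundary decays polynomially.
\exists\, C > 0,\ \alpha \ge 0 :\quad
P\bigl(\,\lvert \eta(X) - \tfrac{1}{2} \rvert \le t \,\bigr)
\;\le\; C\, t^{\alpha}
\qquad \text{for all } t > 0 .
```

Larger α means less probability mass near the boundary η(x) = 1/2, which is what permits learning rates faster than n^{-1/2}.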

Relevance: 30.00%

Publisher:

Abstract:

Under pressure from both ever-increasing market competition and the global financial crisis, clients in the consumer electronics (CE) industry are keen to understand how to choose the most appropriate procurement method and hence improve their competitiveness. Four rounds of Delphi questionnaire surveys were conducted with 12 experts to identify the most appropriate procurement method in the Hong Kong CE industry. Five key selection criteria in the CE industry are highlighted: product quality, capability, price competition, flexibility and speed. The study also revealed that product quality was the most important criterion for “First type used commercially” and “Major functional improvements” projects. For “Minor functional improvements” projects, price competition was the most crucial factor to consider during PP selection. These findings provide owners with useful insights for selecting procurement strategies.

Relevance: 30.00%

Publisher:

Abstract:

Project selection is a complex decision-making process that is not merely influenced by the technical aspects of the project. Selection of road infrastructure projects in the Indonesian public sector is generally conducted at an organisational level and involves multiple objectives, constraints and stakeholders. Hence, a deeper understanding of the organisational drivers that bear on such decisions, in particular organisational culture, is needed to improve decision-making processes, as some researchers have posited that organisational culture can be either an enabler of, or a barrier to, the process. One part of the cultural assessment undertaken in this research identifies and analyses the cultural types of the organisations involved in the decision-making process. The organisational culture assessment instrument (OCAI) of Cameron and Quinn (2011) was utilised, with data taken from three selected provinces in Indonesia. The results can help the surveyed (and similar) organisations improve their performance by moving towards a cultural typology arguably better suited to their operations, and by aligning their organisational processes more closely with their vision, mission and objectives.

Relevance: 30.00%

Publisher:

Abstract:

Quality-based frame selection is a crucial task in video face recognition, both to improve the recognition rate and to reduce the computational cost. In this paper we present a framework that uses a variety of cues (face symmetry, sharpness, contrast, closeness of the mouth, brightness and openness of the eyes) to select the highest-quality facial images available in a video sequence for recognition. Normalised feature scores are fused using a neural network, and frames with high quality scores are used in a Local Gabor Binary Pattern Histogram Sequence based face recognition system. Experiments on the Honda/UCSD database show that the proposed method selects the best-quality face images in the video sequence, resulting in improved recognition performance.
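The selection pipeline above (per-cue scores, normalisation, fusion, top-k selection) can be sketched as follows. This is a simplified illustration: the paper fuses scores with a neural network, for which a fixed weighted sum stands in here, and the cue names and weights are assumptions.

```python
# Sketch of quality-based frame selection: normalise per-cue quality scores
# across the sequence, fuse them (weighted sum in place of the paper's
# neural network), and keep the top-k frames for recognition.

def normalise(scores):
    """Min-max normalise a list of raw scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def select_frames(frames, weights, k):
    """frames: list of dicts mapping cue name -> raw score.
    Returns indices of the k frames with the highest fused quality."""
    cues = list(weights)
    norm = {c: normalise([f[c] for f in frames]) for c in cues}
    fused = [sum(weights[c] * norm[c][i] for c in cues)
             for i in range(len(frames))]
    return sorted(range(len(frames)), key=lambda i: fused[i], reverse=True)[:k]

frames = [
    {"sharpness": 0.2, "symmetry": 0.5, "eye_openness": 0.1},  # blurry, eyes shut
    {"sharpness": 0.9, "symmetry": 0.8, "eye_openness": 0.9},  # good frame
    {"sharpness": 0.6, "symmetry": 0.9, "eye_openness": 0.7},  # decent frame
]
weights = {"sharpness": 0.4, "symmetry": 0.3, "eye_openness": 0.3}
print(select_frames(frames, weights, k=2))  # [1, 2]
```

Only the selected frames would then be passed to the recognition stage, which is where the computational saving comes from.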

Relevance: 30.00%

Publisher:

Abstract:

Index tracking is an investment approach whose primary objective is to keep the portfolio return as close as possible to a target index without purchasing all index components. The main purpose is to minimize the tracking error between the returns of the selected portfolio and the benchmark. In this paper, quadratic as well as linear models are presented for minimizing the tracking error. Uncertainty in the input data is handled using a tractable robust framework that controls the level of conservatism while maintaining linearity. The linearity of the proposed robust optimization models allows an ordinary optimization software package to find the optimal robust solution with a simple implementation. The proposed model employs the Morgan Stanley Capital International Index as the target index, and results are reported for six national indices: Japan, the USA, the UK, Germany, Switzerland and France. The performance of the proposed models is evaluated using several financial criteria, e.g. the information ratio, market ratio, Sharpe ratio and Treynor ratio. The preliminary results demonstrate that the proposed model lowers the tracking error while raising the values of the portfolio performance measures.
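The basic quadratic tracking model underlying this line of work can be sketched as follows: choose weights w minimising ||Rw − r||² (the squared tracking error against index returns r) subject to the budget constraint Σw = 1, solved here via the KKT linear system. The returns are synthetic, and the paper's robust, conservatism-controlled variant is not reproduced.

```python
import numpy as np

# Sketch of quadratic index tracking: minimise ||R w - r||^2 subject to
# sum(w) = 1 by solving the KKT system of the equality-constrained
# least-squares problem. Synthetic data; robust variant omitted.

def tracking_weights(R, r):
    """R: periods x assets return matrix; r: index returns per period."""
    n = R.shape[1]
    ones = np.ones(n)
    # KKT system: [2 R^T R, 1; 1^T, 0] [w; lam] = [2 R^T r; 1]
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = 2.0 * R.T @ R
    K[:n, n] = ones
    K[n, :n] = ones
    rhs = np.concatenate([2.0 * R.T @ r, [1.0]])
    return np.linalg.solve(K, rhs)[:n]

rng = np.random.default_rng(1)
R = rng.normal(0.0, 0.02, size=(120, 5))       # 120 periods, 5 assets
r = R @ np.array([0.3, 0.3, 0.2, 0.1, 0.1])    # index built from the assets
w = tracking_weights(R, r)
print(np.round(w, 3))   # recovers the index composition exactly
```

The linear variants in the paper replace the quadratic objective with, e.g., absolute deviations, which keeps the robust counterpart solvable by ordinary linear programming software.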

Relevance: 30.00%

Publisher:

Abstract:

There is evidence across several species for genetic control of phenotypic variation of complex traits [1-4], such that the variance among phenotypes is genotype dependent. Understanding genetic control of variability is important in evolutionary biology, agricultural selection programmes and human medicine, yet for complex traits no individual genetic variants associated with the variance, as opposed to the mean, have been identified. Here we perform a meta-analysis of genome-wide association studies of phenotypic variation using ~170,000 samples on height and body mass index (BMI) in human populations. We report evidence that the single nucleotide polymorphism (SNP) rs7202116 at the FTO gene locus, which is known to be associated with obesity (as measured by mean BMI for each rs7202116 genotype) [5-7], is also associated with phenotypic variability. We show that the results are not due to scale effects or other artefacts, and find no other experiment-wise significant evidence for effects on variability, either at loci other than FTO for BMI or at any locus for height. The difference in variance for BMI among individuals with opposite homozygous genotypes at the FTO locus is approximately 7%, corresponding to a difference of ~0.5 kilograms in the standard deviation of weight. Our results indicate that genetic variants associated with variability can be discovered, and that between-person variability in obesity can partly be explained by the genotype at the FTO locus. The results are consistent with reported FTO-by-environment interactions for BMI [8], possibly mediated by DNA methylation [9, 10]. Our BMI results for other SNPs and our height results for all SNPs suggest that most genetic variants, including those that influence mean height or mean BMI, are not associated with phenotypic variance, or that their effects on variability are too small to detect even with sample sizes greater than 100,000.
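A variance-heterogeneity association like the one reported above is typically tested by comparing trait spread across genotype groups at a SNP. The sketch below uses a Levene-type (Brown-Forsythe) statistic on absolute deviations from each group's median; this is an illustration of the general technique, not the paper's exact test, and the BMI values are synthetic.

```python
import statistics

# Sketch of a variance-heterogeneity test: a Brown-Forsythe (Levene-type)
# F-like statistic comparing the spread of |x - group median| between
# genotype groups. Synthetic data, not from the study.

def levene_statistic(groups):
    """Ratio of between-group to within-group variability of the
    absolute deviations from each group's median (> 1 suggests the
    group variances differ)."""
    z = [[abs(x - statistics.median(g)) for x in g] for g in groups]
    all_z = [v for g in z for v in g]
    grand = statistics.mean(all_z)
    k, n = len(z), len(all_z)
    between = sum(len(g) * (statistics.mean(g) - grand) ** 2
                  for g in z) / (k - 1)
    within = sum((v - statistics.mean(g)) ** 2
                 for g in z for v in g) / (n - k)
    return between / within

low_var  = [24.1, 24.9, 25.0, 25.3, 25.8, 24.6]   # BMI, one homozygote
high_var = [21.0, 23.5, 25.1, 27.2, 29.4, 24.0]   # BMI, opposite homozygote
print(levene_statistic([low_var, high_var]))   # well above 1
```

In a real analysis the statistic would be referred to an F distribution for a p-value, and a much larger sample (the paper uses ~170,000) is needed to detect the ~7% variance difference reported.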

Relevance: 30.00%

Publisher:

Abstract:

The most difficult operation in flood inundation mapping from optical flood images is separating fully inundated areas from 'wet' areas where trees and houses are partly covered by water. This is a typical instance of the mixed-pixel problem. A number of automatic image classification algorithms have been developed over the years for flood mapping with optical remote sensing images. Most classification algorithms assign each pixel to the single class label with the greatest likelihood. However, these hard classification methods often fail to generate reliable flood inundation maps because of the presence of mixed pixels in the images. To address the mixed-pixel problem, advanced image processing techniques are adopted; linear spectral unmixing is one of the most popular soft classification techniques for mixed-pixel analysis. The performance of linear spectral unmixing depends on two important issues: the method of selecting endmembers and the method of modelling the endmembers for unmixing. This paper presents an improvement to the spectral unmixing method, in which the endmember subset is selected adaptively for each pixel, for reliable flood mapping. Using a fixed set of endmembers to unmix all pixels in an entire image can overestimate the endmember spectra residing in a mixed pixel and hence reduce the performance of spectral unmixing. By contrast, applying an adaptively estimated subset of endmembers to each pixel can decrease the residual error in the unmixing results and provide reliable output. It is also shown that the proposed method improves the accuracy of conventional linear unmixing methods and is easy to apply. Three different linear spectral unmixing methods were applied to test the improvement in unmixing results. Experiments were conducted on three sets of Landsat-5 TM images of three different flood events in Australia to examine the method under different flooding conditions, achieving satisfactory flood mapping outcomes.
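The per-pixel adaptive endmember idea can be sketched as follows: unmix each pixel against every endmember subset of a fixed size and keep the subset with the smallest residual. This is a minimal illustration with synthetic spectra; the non-negativity and sum-to-one constraints of full linear unmixing, and the paper's particular selection scheme, are omitted.

```python
import itertools
import numpy as np

# Sketch of linear spectral unmixing with adaptive per-pixel endmember
# subset selection: try every subset of endmembers of a given size and
# keep the one with the lowest least-squares residual. Synthetic spectra;
# abundance constraints omitted for brevity.

def unmix(pixel, E):
    """Least-squares abundances for endmember matrix E (bands x members)."""
    a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    return a, float(np.linalg.norm(pixel - E @ a))

def adaptive_unmix(pixel, E, subset_size):
    best = None  # (residual, subset, abundances)
    for subset in itertools.combinations(range(E.shape[1]), subset_size):
        a, resid = unmix(pixel, E[:, list(subset)])
        if best is None or resid < best[0]:
            best = (resid, subset, a)
    return best[1], best[2]

# Four synthetic 6-band endmember spectra: water, vegetation, soil, urban.
E = np.array([
    [0.02, 0.45, 0.30, 0.40],
    [0.03, 0.50, 0.32, 0.42],
    [0.04, 0.55, 0.35, 0.41],
    [0.02, 0.30, 0.40, 0.43],
    [0.01, 0.25, 0.45, 0.44],
    [0.01, 0.20, 0.50, 0.45],
])
pixel = 0.7 * E[:, 0] + 0.3 * E[:, 1]   # 'wet' mixed pixel: water + vegetation
subset, a = adaptive_unmix(pixel, E, subset_size=2)
print(subset, np.round(a, 3))   # (0, 1) [0.7 0.3]
```

A fixed-endmember approach would force the soil and urban spectra into this pixel's model as well, which is exactly the overestimation the adaptive subset avoids.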

Relevance: 20.00%

Publisher: