258 results for attribute subset selection


Relevance:

20.00%

Publisher:

Abstract:

The notion of routines as mechanisms for achieving stability and change in organisations is well established in the organisational theory literature (Becker, 2004). However, the relationship between the dynamics of selection, adaptation and retention and the increase or decrease in the varieties of routines that result from these processes is not as well established, theoretically or empirically. This paper investigates the processes associated with the evolution of an inter-organisational routine over time. The paper contributes to theory by advancing a conceptual clarification between the dynamics of organisational routines that produce variation and the varieties of routines generated as a result of such processes, and by offering an explanation of the relationship between selection, adaptation and retention dynamics and the creation of variety. The research is supported by analysis of empirical data pertaining to the procurement of engineering assets in a large asset-intensive organisation.

Relevance:

20.00%

Publisher:

Abstract:

The selection of projects and programs of work is a key function of both public and private sector organisations. Ideally, the projects and programs selected are consistent with the organisation's strategic objectives; provide value for money and return on investment; are adequately resourced and prioritised; do not compete with general operations for resources or restrict the ability of operations to provide income to the organisation; match the capacity and capability of the organisation to deliver; and produce outputs that are willingly accepted by end users and customers. Unfortunately, this is not always the case. Possible inhibitors to optimal project portfolio selection include: processes that are inconsistent with the needs of the organisation; reluctance to use an approach that may not produce predetermined preferences; loss of control and perceived decision-making power; reliance on quantitative methods rather than qualitative methods for justification; ineffective project and program sponsorship; unclear project governance, processes and linkage to business strategies; ignorance, taboos and perceived effectiveness; and inadequate education and training about the processes and their importance.

Relevance:

20.00%

Publisher:

Abstract:

Life Cycle Cost Analysis (LCCA) provides a synopsis of the initial and consequential costs of building-related decisions. These cost figures may be used to justify higher investments, for example in the quality or flexibility of building solutions, through long-term cost reduction. The emerging discipline of asset management is a promising approach to this problem because it can do things that techniques such as balanced scorecards and total quality management cannot. Decisions must be made about operating and maintaining infrastructure assets. A misleading intuition in life cycle costing is that the longer something lasts, the less it costs over time. Life cycle cost analysis is used here as an economic evaluation tool in combination with a number of other analyses. LCCA quantifies costs that are commonly overlooked by property and asset managers and designers, such as replacement and maintenance costs. The purpose of this research is to apply Life Cycle Cost Analysis to building floor materials. By applying the analysis, the true cost of each material is computed over a projected 60-year building service life at a 5.4% inflation rate, in order to classify and appreciate the differences among the materials. The results show the high impact of floor material selection on potential service life cycle cost.
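As a hedged illustration of the mechanics, the sketch below computes a net present life cycle cost over the 60-year service life and 5.4% inflation rate mentioned above; the discount rate, cost figures, material names and replacement intervals are invented for illustration and are not values from the study.

```python
# Minimal LCCA sketch: net present cost of a floor material over a 60-year
# service life. The cost figures and the discount/inflation treatment are
# illustrative assumptions, not data from the study.

def life_cycle_cost(initial_cost, annual_maintenance, replacement_cost,
                    replacement_interval, years=60, inflation=0.054,
                    discount=0.07):
    """Sum discounted initial, maintenance and replacement costs."""
    total = initial_cost
    for year in range(1, years + 1):
        # Escalate future costs with inflation, then discount to present value.
        factor = ((1 + inflation) ** year) / ((1 + discount) ** year)
        total += annual_maintenance * factor
        if year % replacement_interval == 0 and year < years:
            total += replacement_cost * factor
    return total


if __name__ == "__main__":
    # Hypothetical floor materials (costs per square metre).
    materials = {
        "vinyl":    dict(initial_cost=40,  annual_maintenance=6, replacement_cost=40,  replacement_interval=15),
        "ceramic":  dict(initial_cost=90,  annual_maintenance=3, replacement_cost=90,  replacement_interval=30),
        "terrazzo": dict(initial_cost=130, annual_maintenance=2, replacement_cost=130, replacement_interval=60),
    }
    for name, params in materials.items():
        print(f"{name:>9}: {life_cycle_cost(**params):8.0f} per m^2 (present cost)")
```

The point of the sketch is simply that a cheaper material with frequent replacement and high maintenance can end up costlier over the full service life than a more expensive, durable one.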

Relevance:

20.00%

Publisher:

Abstract:

The traditional searching method for model-order selection in linear regression is a nested full-parameter-set search over the desired orders, which we call full-model order selection. A model-selection method, on the other hand, searches for the best sub-model within each order. In this paper, we propose using the model-selection search for model-order selection, which we call partial-model order selection. We show by simulations that the proposed search gives better accuracy than the traditional one, especially at low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). We also show that for some models the performance of the bootstrap-based criterion improves significantly when the proposed partial-model selection search is used.

Index Terms: model order estimation, model selection, information-theoretic criteria, bootstrap.

1. INTRODUCTION. Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike's information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high; the discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, as within the same model of a given order one can find examples for which a method performs well or fails [6, 8]. Our aim is to improve the performance of model-order selection criteria when the SNR is low by considering a search procedure that covers not only the full-model order but also partial models within each order. Understandably, the improvement in model-order estimation performance comes at the expense of additional computational complexity.
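A minimal sketch of the partial-model idea, assuming a linear-regression setting with a dictionary of candidate regressors: for each order k, an exhaustive search over k-column sub-models keeps the best fit, and a standard MDL score then compares orders. The exhaustive search and the MDL form are illustrative choices, not necessarily the paper's exact procedure.

```python
# Sketch of "partial-model" order selection: for each order k, search over
# k-subsets of a candidate regressor dictionary for the best-fitting sub-model,
# then compare orders with an information criterion (MDL here).
from itertools import combinations
import numpy as np


def mdl(rss, n, k):
    # Standard MDL score for linear regression with Gaussian noise.
    return n * np.log(rss / n) + k * np.log(n)


def partial_model_order(y, X):
    """Return (selected order, selected column subset) using MDL over best sub-models."""
    n, p = X.shape
    best = (np.inf, 0, ())
    for k in range(1, p + 1):
        for cols in combinations(range(p), k):
            beta, res, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = res[0] if res.size else np.sum((y - X[:, cols] @ beta) ** 2)
            score = mdl(rss, n, k)
            if score < best[0]:
                best = (score, k, cols)
    return best[1], best[2]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 100, 6
    X = rng.standard_normal((n, p))
    y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.5 * rng.standard_normal(n)  # two active regressors
    print(partial_model_order(y, X))
```

The exhaustive sub-model loop illustrates the extra computational cost the abstract mentions: the full-model search fits one model per order, whereas the partial-model search fits every k-subset at each order.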

Relevance:

20.00%

Publisher:

Abstract:

The aim of this paper is to aid researchers in selecting appropriate qualitative methods in order to develop and improve future studies in the field of emotional design. These methods include observations, think-aloud protocols, questionnaires, diaries and interviews. Based on the authors' experiences, it is proposed that the methods under review can be successfully used for collecting data on emotional responses to evaluate user-product relationships. The paper reviews the methods and discusses their suitability, advantages and challenges in relation to design and emotion studies. Furthermore, it outlines the potential impact of technology on the application of these methods, discusses their implications for emotion research and concludes with recommendations for future work in this area.

Relevance:

20.00%

Publisher:

Abstract:

Corneal-height data are typically measured with videokeratoscopes and modeled using a set of orthogonal Zernike polynomials. We address the estimation of the number of Zernike polynomials, which is formalized as a model-order selection problem in linear regression. Classical information-theoretic criteria tend to overestimate the order of the corneal surface model because their penalty functions are too weak, while bootstrap-based techniques tend to underestimate it or require extensive processing. In this paper, we propose the efficient detection criterion (EDC), which has the same general form as information-theoretic criteria, as an alternative for estimating the optimal number of Zernike polynomials. We first show, via simulations, that the EDC outperforms a large number of information-theoretic criteria and resampling-based techniques. We then illustrate that using the EDC for real corneas results in models that are in closer agreement with clinical expectations and provides a means of distinguishing normal corneal surfaces from astigmatic and keratoconic surfaces.
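As a hedged illustration of EDC-style order selection, the sketch below uses a plain polynomial basis as a stand-in for the Zernike expansion and a penalty C_n = sqrt(n log n), which satisfies the usual EDC admissibility conditions but may differ from the penalty chosen in the paper.

```python
# Sketch of model-order selection with an EDC-style criterion for a fitted
# surface profile. A plain polynomial basis stands in for the Zernike
# expansion; the data are synthetic.
import numpy as np


def edc(rss, n, k):
    # EDC-style score: n*log(RSS/n) + k*C_n, where C_n/n -> 0 and
    # C_n/loglog(n) -> infinity; C_n = sqrt(n*log n) satisfies both.
    return n * np.log(rss / n) + k * np.sqrt(n * np.log(n))


def select_order(y, basis_columns, max_terms):
    """Pick the number of leading basis terms minimising the EDC score."""
    n = len(y)
    scores = []
    for k in range(1, max_terms + 1):
        X = basis_columns[:, :k]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        scores.append(edc(rss, n, k))
    return int(np.argmin(scores)) + 1


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, max_terms = 400, 12
    r = rng.uniform(0, 1, n)                       # radial coordinate stand-in
    basis = np.vander(r, max_terms, increasing=True)
    y = 1.0 - 0.8 * r**2 + 0.3 * r**4 + 0.02 * rng.standard_normal(n)  # smooth profile + noise
    print("selected number of terms:", select_order(y, basis, max_terms))
```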

Relevance:

20.00%

Publisher:

Abstract:

A method of selecting land in any region of Queensland for offsetting purposes is devised, employing uniform standards. The procedure first requires that any core natural asset lands, Crown environmental lands, prime urban and agricultural lands, and highly contentious sites in the region be eliminated from consideration. Other land is then sought that is located between existing large reservations and the centre of greatest potential regional development/disturbance. Using the criteria of rehabilitation (rather than preservation) plus proximity to those officially defined Regional Ecosystems that are most threatened, adjacent sites that are described as 'Cleared' are identified in terms of agricultural land capability. Class IV lands – defined as those 'which may be safely used for occasional cultivation with careful management' [2], 'where it is favourably located for special usage' [3], and where it is 'helpful to those who are interested in industry or regional planning or in reconstruction' [4] – are examined for their appropriate area, for current tenure and for any conditions, such as Mining Leases, that may exist. The positive impacts from offsets on adjoining lands can then be designed to be significant; examples are also offered in respect of riparian areas and of Marine Parks. Criteria against which to measure performance for trading purposes include functional lift, with other case studies on this matter reported separately in this issue. The procedure takes no account of demand-side economics (financial additionality), which requires commercial rather than environmental analysis.

Relevance:

20.00%

Publisher:

Abstract:

We examine the impact of individual-specific information processing strategies (IPSs) regarding the inclusion or exclusion of attributes on the parameter estimates and behavioural outputs of models of discrete choice. Current practice assumes that individuals employ a homogeneous IPS with regard to how they process the attributes of stated choice (SC) experiments. We show how information collected outside the SC experiment on whether respondents ignored or considered each attribute may be used in the estimation process, and how such information provides outputs that are specific to each IPS segment. We contend that accounting for the inclusion or exclusion of attributes will result in behaviourally richer population parameter estimates.
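The sketch below shows one way such exogenous attribute-processing information could enter estimation, assuming a simple multinomial logit on simulated data: attributes a respondent reports ignoring are masked to zero in that respondent's utilities. It illustrates the masking idea only and is not the authors' estimator or segmentation scheme.

```python
# Illustrative multinomial logit in which attributes a respondent reported
# ignoring are masked (set to zero) in that respondent's utility, so ignored
# attributes do not enter their choice probabilities. All data are simulated.
import numpy as np
from scipy.optimize import minimize


def neg_log_likelihood(beta, X, mask, choice):
    # X: (respondents, alternatives, attributes); mask: (respondents, attributes)
    # with 1 = attribute attended to, 0 = attribute ignored; choice: chosen alternative.
    V = np.einsum("nja,na,a->nj", X, mask, beta)          # masked utilities
    V -= V.max(axis=1, keepdims=True)                      # numerical stability
    logp = V - np.log(np.exp(V).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(choice)), choice].sum()


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n, J, A = 500, 3, 4
    X = rng.standard_normal((n, J, A))
    mask = (rng.random((n, A)) > 0.3).astype(float)        # roughly 30% of attributes ignored
    true_beta = np.array([1.0, -0.5, 0.8, 0.0])
    U = np.einsum("nja,na,a->nj", X, mask, true_beta) + rng.gumbel(size=(n, J))
    choice = U.argmax(axis=1)
    fit = minimize(neg_log_likelihood, np.zeros(A), args=(X, mask, choice))
    print("estimated beta:", np.round(fit.x, 2))
```

Ignoring the mask and estimating a homogeneous model on the same data would attenuate the coefficients of the attributes that some respondents did not process, which is the bias the abstract argues against.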

Relevance:

20.00%

Publisher:

Abstract:

The proportion of functional sequence in the human genome is currently a subject of debate. The most widely accepted figure is that approximately 5% is under purifying selection. In Drosophila, estimates are an order of magnitude higher, though this corresponds to a similar quantity of sequence. These estimates depend on the difference between the distribution of genomewide evolutionary rates and that observed in a subset of sequences presumed to be neutrally evolving. Motivated by the widening gap between these estimates and experimental evidence of genome function, especially in mammals, we developed a sensitive technique for evaluating such distributions and found that they are much more complex than previously apparent. We found strong evidence for at least nine well-resolved evolutionary rate classes in an alignment of four Drosophila species and at least seven classes in an alignment of four mammals, including human. We also identified at least three rate classes in human ancestral repeats. By positing that the largest of these ancestral repeat classes is neutrally evolving, we estimate that the proportion of nonneutrally evolving sequence is 30% of human ancestral repeats and 45% of the aligned portion of the genome. However, we also question whether any of the classes represent neutrally evolving sequences and argue that a plausible alternative is that they reflect variable structure-function constraints operating throughout the genomes of complex organisms.
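The abstract does not spell out the distribution-evaluation technique, so the following is only a generic illustration of resolving rate classes: a Gaussian mixture fitted to simulated, log-transformed per-site evolutionary rates, with the number of classes chosen by BIC. The class means, proportions and the mixture family itself are assumptions for the sake of the example.

```python
# Generic illustration (not the study's specific technique): resolve evolutionary
# rate classes by fitting Gaussian mixtures to log-transformed per-site rates and
# choosing the number of classes with BIC. The simulated rates are made up.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Simulated log-rates drawn from three underlying classes.
log_rates = np.concatenate([
    rng.normal(-1.5, 0.20, 4000),   # slow (constrained) class
    rng.normal( 0.0, 0.25, 5000),   # intermediate class
    rng.normal( 0.8, 0.20, 1000),   # fast class
]).reshape(-1, 1)

bics = {}
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(log_rates)
    bics[k] = gmm.bic(log_rates)

best_k = min(bics, key=bics.get)
print("BIC-selected number of rate classes:", best_k)
```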

Relevance:

20.00%

Publisher:

Abstract:

Some of my most powerful spiritual experiences have come from the splendorous and sublime-sounding hymns performed by choir and church organ at the traditional Anglican church I have attended since I was very young. Later in my life, my pursuit of an education in engineering brought me to Australia, where I regularly attended a contemporary evangelical church and subsequently became a music director in that faith community. This environmental and cultural shift altered my perception and musical experience of Christian music and led me to enquire into the relationship between Christian liturgy and church music. Throughout history, church musicians and composers have synthesised the theological, congregational, cultural and musical aspects of church liturgy. Many great composers have taken into account the conditions surrounding the composition and arrangement of sacred music to enhance the experience of religious ecstasy; they sought resonances with Christian values and beliefs to draw congregations into praising and glorifying God. As a music director in an evangelical church, this aspiration has become one I share. I hope to identify and define the qualities of these resonances that have proved successful and apply them to my own practice.

Introduction and structure of the thesis: In this study I examine four purposively selected excerpts of Christian church vocal music, combining theomusicological and semiotic analysis to help identify guidelines that might be useful in my practice as a church music director. The four excerpts were selected on the basis of their sustained musical and theological impact over time and their ability to evoke ecstatic responses from congregations. The thesis documents a personal journey through the analysis of music, drawing on ethnomusicological, theological and semiotic tools that lead to a preliminary framework and principles which can then be applied to the identified qualities of resonance in church music today. The thesis comprises four parts. Part 1 presents a literature study on the relationship between sacred music, the effects of religious ecstasy and the Christian church, drawing multiple lenses on this phenomenon from prominent western church historians, Biblical theologians and philosophers. The literature study continues in Part 2, where the role of embodiment is examined from the current perspective of cognitive learning environments. This offers a platform for critical reflection on two distinctive musical liturgical systems that have treated the notion of embodied understanding differently amidst a shifting church paradigm, allowing an in-depth theological and philosophical understanding of the liturgical conditions around sacred music-making in relation to the monistic and dualistic body/mind. Part 3 undertakes a theomusicological methodology based on creative case studies of the four selected spiritual pieces. A semiotic study focuses on specific sections of the sacred vocal works that express the notions of 'praise' and 'glorification', combined with an analysis of theological perspectives on religious ecstasy and particular spiritual themes. Part 4 presents the critiques and findings gathered from the study, incorporating theoretical and technological means to analyse the selected musical artefacts, particularly the sonic narratives expressing notions of 'Praise' and 'Glory'. The musical findings are further discussed in relation to the notion of resonance, and a conceptual framework for the role of the contemporary music director is proposed. The musical and Christian terminologies used in the thesis are explained in the glossary, and the appendices include tables illustrating the musical findings, the surveys conducted, written musical analyses and audio examples of selected sacred pieces available on the enclosed compact disc.

Relevance:

20.00%

Publisher:

Abstract:

Green energy is a key factor in driving down electricity bills and providing zero-carbon electricity generation for green buildings. Climate change and environmental policies are encouraging people to use renewable energy for green buildings rather than coal-fired (conventional) energy, which is not environmentally friendly. Solar energy is a clean option that reduces environmental impact while lowering electricity costs: a solar array collects energy from the sun and stores it in a battery, which supplies the electricity needed by the whole house with zero carbon emissions. Because there are many solar array suppliers in the market, this paper applies the superiority and inferiority multi-criteria ranking (SIR) method with 13 criteria, establishing S-flow and I-flow matrices, to evaluate four alternative solar energy options and determine which alternative is best for supplying power to a sustainable building. SIR is a well-known, structured multi-criteria decision support tool that is increasingly used in construction and building. The outcome of this paper gives users a clear indication for selecting solar energy.
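As a rough illustration of the SIR machinery, the sketch below builds superiority (S) and inferiority (I) matrices for four hypothetical alternatives using a simple step preference function and aggregates them into weighted S-flows and I-flows. The decision matrix, the four criteria shown and the weights are assumptions for illustration; the paper itself works with 13 criteria.

```python
# Minimal sketch of the SIR (superiority-inferiority ranking) method with a
# simple step ("true criterion") preference function. The decision matrix,
# weights and alternatives are illustrative, not the paper's data.
import numpy as np

# Rows: alternative solar systems; columns: criteria (all treated as benefit-type).
decision = np.array([
    [7.0, 6.5, 8.0, 5.0],
    [6.0, 8.0, 7.0, 6.5],
    [8.5, 5.5, 6.0, 7.0],
    [7.5, 7.0, 6.5, 6.0],
])
weights = np.array([0.4, 0.3, 0.2, 0.1])   # assumed criterion weights

m, n = decision.shape
S = np.zeros((m, n))   # superiority matrix
I = np.zeros((m, n))   # inferiority matrix
for j in range(n):
    for i in range(m):
        diffs = decision[i, j] - decision[:, j]
        S[i, j] = np.sum(diffs > 0)   # how many alternatives i beats on criterion j
        I[i, j] = np.sum(diffs < 0)   # how many alternatives beat i on criterion j

s_flow = S @ weights   # aggregate superiority flow (higher is better)
i_flow = I @ weights   # aggregate inferiority flow (lower is better)
ranking = np.argsort(i_flow - s_flow)   # simple combined ordering
print("S-flow:", np.round(s_flow, 2))
print("I-flow:", np.round(i_flow, 2))
print("ranking (best first): alternatives", ranking + 1)
```

A richer preference function (with indifference and preference thresholds) and a proper ranking rule over the two flows would normally be used; the step function keeps the example short.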

Relevance:

20.00%

Publisher:

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and that on the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
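A small sketch of the maximal-discrepancy computation described above, under the assumption that a depth-limited decision tree approximates empirical risk minimization over the hypothesis class: flipping the labels on one half of the sample and minimizing training error on the modified data yields the hypothesis whose first-half/second-half error difference is (approximately) maximal.

```python
# Sketch of the maximal-discrepancy penalty: flip the labels on one half of the
# training data, run (approximate) empirical risk minimization on the modified
# sample, and read off the discrepancy. A depth-limited decision tree stands in
# for exact ERM over the class, so this is only an approximation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def maximal_discrepancy(X, y, make_learner):
    n = len(y)
    half = n // 2
    y_flipped = y.copy()
    y_flipped[:half] = 1 - y_flipped[:half]          # flip labels on the first half
    h = make_learner().fit(X, y_flipped)             # approximate ERM on flipped data
    pred = h.predict(X)
    err1 = np.mean(pred[:half] != y[:half])          # error on first half (true labels)
    err2 = np.mean(pred[half:] != y[half:])          # error on second half
    return err1 - err2                                # discrepancy achieved by h


if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = rng.standard_normal((400, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(400) > 0).astype(int)
    learner = lambda: DecisionTreeClassifier(max_depth=3, random_state=0)
    print("estimated maximal discrepancy:", maximal_discrepancy(X, y, learner))
```

In a model selection loop, this quantity would serve as the data-based complexity penalty added to the training error of each candidate model class; richer classes can fit the half-flipped sample better and therefore incur a larger penalty.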