410 results for adaptive operator selection
Abstract:
The selection of projects and programs of work is a key function of both public and private sector organisations. Ideally, projects and programs that are selected to be undertaken are consistent with the organisation's strategic objectives; will provide value for money and return on investment; will be adequately resourced and prioritised; will not compete with general operations for resources, nor restrict the ability of operations to provide income to the organisation; will match the capacity and capability of the organisation to deliver; and will produce outputs that are willingly accepted by end users and customers. Unfortunately, this is not always the case. Possible inhibitors to optimal project portfolio selection include: processes that are inconsistent with the needs of the organisation; reluctance to use an approach that may not produce predetermined preferences; loss of control and perceived decision-making power; reliance on quantitative methods rather than qualitative methods for justification; ineffective project and program sponsorship; unclear project governance, processes and linkage to business strategies; ignorance, taboos and perceived effectiveness; and inadequate education and training about the processes and their importance.
Abstract:
In this paper we present a novel distributed coding protocol for multi-user cooperative networks. The proposed distributed coding protocol exploits existing orthogonal space-time block codes to achieve higher diversity gain by repeating the code across time and space (available relay nodes). The achievable diversity gain depends on the number of relay nodes that can fully decode the signal from the source. These relay nodes then form space-time codes to cooperatively relay to the destination over a number of time slots. However, the improved diversity gain is achieved at the expense of the transmission rate. The design principles of the proposed space-time distributed code and the issues related to the transmission rate and diversity trade-off are discussed in detail. We show that the proposed distributed space-time coding protocol outperforms existing distributed codes with a variable transmission rate.
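As a rough illustration of the rate/diversity bookkeeping described above, the following sketch repeats a 2x2 orthogonal (Alamouti-type) space-time block across pairs of decoding relays. The function names and the simple rate accounting are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

def alamouti_encode(s1, s2):
    """2x2 orthogonal STBC block: rows are time slots, columns are transmitters."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def relay_phase(symbols, num_decoding_relays):
    """Relays that decoded the source pair up and repeat the Alamouti block
    across successive time slots: more repetitions raise diversity but
    consume more slots, lowering the effective transmission rate."""
    blocks = [alamouti_encode(*symbols) for _ in range(num_decoding_relays // 2)]
    time_slots = 2 * len(blocks)
    rate = len(symbols) / time_slots if time_slots else 0.0  # symbols per slot
    return blocks, rate

blocks, rate = relay_phase((1 + 1j, 1 - 1j), num_decoding_relays=4)
print(f"{len(blocks)} repeated blocks, effective rate = {rate:.2f} symbols/slot")
```

With four decoding relays the two symbols occupy four slots, making the rate penalty (0.5 symbols per slot) explicit alongside the diversity gain from repetition.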
Abstract:
Life Cycle Cost Analysis provides a synopsis of the initial and consequential costs of building-related decisions. These cost figures may be used to justify higher investments, for example, in the quality or flexibility of building solutions, through a long-term cost reduction. The emerging discipline of asset management is a promising approach to this problem, because it can do things that techniques such as balanced scorecards and total quality management cannot. Decisions must be made about operating and maintaining infrastructure assets. A common but injudicious assumption in life cycle costing is that the longer something lasts, the less it costs over time. Here, life cycle cost analysis is used as an economic evaluation tool in combination with a number of other analyses. LCCA quantifies recurring costs commonly overlooked by property and asset managers and designers, such as replacement and maintenance costs. The purpose of this research is to examine Life Cycle Cost Analysis for building floor materials. By implementing the life cycle cost analysis, the true cost of each material is computed, projecting 60 years as the building service life and 5.4% as the inflation rate, in order to classify and appreciate the differences among the materials. The analysis results showed the high impact of selecting floor materials according to their potential service life cycle cost.
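To make the computation concrete, here is a minimal sketch of a life cycle cost calculation using the abstract's 60-year service life and 5.4% inflation rate. The material figures and the simple cost-escalation model are hypothetical, not taken from the study.

```python
# Hedged sketch: sum initial, maintenance and replacement costs over the
# service life, escalating future costs at the stated inflation rate.
def life_cycle_cost(initial, annual_maintenance, replacement_cost,
                    replacement_interval, years=60, inflation=0.054):
    total = initial
    for year in range(1, years + 1):
        factor = (1 + inflation) ** year          # escalate future costs
        total += annual_maintenance * factor
        if replacement_interval and year % replacement_interval == 0:
            total += replacement_cost * factor    # periodic replacement
    return total

# Hypothetical materials: (initial $, maintenance $/yr, replacement $, interval yrs)
materials = {"vinyl":    (20_000, 500, 18_000, 15),
             "hardwood": (45_000, 300, 30_000, 30)}
for name, params in materials.items():
    print(f"{name}: 60-year LCC = ${life_cycle_cost(*params):,.0f}")
```

The point the abstract makes falls out directly: a cheaper initial material can carry a far larger true cost once recurring replacement and maintenance are escalated over the full service life.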
Abstract:
The traditional searching method for model-order selection in linear regression is a nested full-parameter-set searching procedure over the desired orders, which we call full-model order selection. On the other hand, a method for model selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracies than the traditional one, especially for low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). Also, we show that for some models the performance of the bootstrap-based criterion improves significantly by using the proposed partial-model selection searching method.
Index Terms: model order estimation, model selection, information-theoretic criteria, bootstrap.
1. INTRODUCTION. Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike's information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, as within the same model with a given order one can find examples for which one of the methods performs favourably or fails [6, 8]. Our aim is to improve the performance of model-order selection criteria in cases where the SNR is low by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial-model order search within the given model order. Understandably, the improvement in the performance of the model-order estimation comes at the expense of additional computational complexity.
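A minimal sketch of the contrast described above, under illustrative data: for each candidate order k, the full-model search fits the nested model using the first k regressors, while the partial-model search tries every k-subset and keeps the best. The Gaussian AIC form below is a standard choice; nothing here is the paper's code.

```python
from itertools import combinations
import numpy as np

def aic(y, X):
    """Gaussian AIC (up to constants) for a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
n, max_order = 100, 6
X = rng.standard_normal((n, max_order))
y = 2 * X[:, 0] + 1.5 * X[:, 3] + rng.standard_normal(n)  # active regressors: 0 and 3

# Full-model order selection: nested search over leading regressors.
full = min(range(1, max_order + 1), key=lambda k: aic(y, X[:, :k]))

# Partial-model order selection: best subset of each size k.
partial_scores = {k: min(aic(y, X[:, list(c)])
                         for c in combinations(range(max_order), k))
                  for k in range(1, max_order + 1)}
partial = min(partial_scores, key=partial_scores.get)
print(f"full-model order: {full}, partial-model order: {partial}")
```

With only regressors 0 and 3 truly active, the nested search typically has to reach order 4 to capture both, while the subset search can settle on order 2; this is the extra resolution the partial search buys at the cost of the combinatorial fit count.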
Abstract:
The aim of this paper is to aid researchers in selecting appropriate qualitative methods in order to develop and improve future studies in the field of emotional design. These include observations, think-aloud protocols, questionnaires, diaries and interviews. Based on the authors' experiences, it is proposed that the methods under review can be successfully used for collecting data on emotional responses to evaluate user-product relationships. This paper reviews the methods and discusses their suitability, advantages and challenges in relation to design and emotion studies. Furthermore, the paper outlines the potential impact of technology on the application of these methods, discusses the implications of these methods for emotion research and concludes with recommendations for future work in this area.
Abstract:
Corneal-height data are typically measured with videokeratoscopes and modeled using a set of orthogonal Zernike polynomials. We address the estimation of the number of Zernike polynomials, which is formalized as a model-order selection problem in linear regression. Classical information-theoretic criteria tend to overestimate the corneal surface due to the weakness of their penalty functions, while bootstrap-based techniques tend to underestimate the surface or require extensive processing. In this paper, we propose using the efficient detection criterion (EDC), which has the same general form as information-theoretic criteria, as an alternative for estimating the optimal number of Zernike polynomials. We first show, via simulations, that the EDC outperforms a large number of information-theoretic criteria and resampling-based techniques. We then illustrate that using the EDC for real corneas results in models that are in closer agreement with clinical expectations and provides a means for distinguishing normal corneal surfaces from astigmatic and keratoconic surfaces.
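For orientation, the shared general form alluded to above can be written as follows (standard notation, assumed rather than quoted from the paper): each candidate order k is scored by a fit term plus a penalty, and the EDC differs from its relatives only in the growth conditions placed on the penalty sequence.

```latex
\mathrm{IC}(k) = -2\ln \hat{L}_k + k\, c_n,
\qquad
\text{EDC: } \frac{c_n}{n} \to 0
\quad\text{and}\quad
\frac{c_n}{\ln\ln n} \to \infty .
```

AIC corresponds to $c_n = 2$ and MDL/BIC to $c_n = \ln n$; the EDC's faster-growing penalty is what counteracts the overestimation attributed above to weak penalty functions.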
Abstract:
This paper argues for a model of adaptive design for sustainable architecture within a framework of entropy evolution. The spectrum of sustainable architecture consists of the efficient use of energy and material resources in the life cycle of buildings, the active involvement of occupants in micro-climate control within buildings, and the natural environment as the physical context. The interactions among all these parameters compose a complex system of sustainable architectural design, for which conventional linear and fragmented design technologies are insufficient to indicate holistic and ongoing environmental performance. The latest interpretation of the Second Law of Thermodynamics states a microscopic formulation of the entropy evolution of complex open systems. It provides a design framework in which an adaptive system evolves towards optimisation in open systems; here, the adaptive system evolves towards the optimisation of building environmental performance. The paper concludes that adaptive modelling in entropy evolution is a design alternative for sustainable architecture.
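For reference, the open-system entropy balance that such a framework draws on (standard Prigogine notation, not quoted from the abstract) separates entropy exchanged with the surroundings from entropy produced internally:

```latex
dS = d_e S + d_i S, \qquad d_i S \ge 0 ,
```

so a building, read as an open system, can reduce its own entropy only by exporting it through the exchange term $d_e S$; this is one thermodynamic reading of ongoing environmental performance.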
Abstract:
A method of selecting land in any region of Queensland for offsetting purposes is devised, employing uniform standards. The procedure first requires that any core natural asset lands, Crown environmental lands, prime urban and agricultural lands, and highly contentious sites in the region be eliminated from consideration. Other land is then sought that is located between existing large reservations and the centre of greatest potential regional development/disturbance. Using the criteria of rehabilitation (rather than preservation) plus proximity to those officially defined Regional Ecosystems that are most threatened, adjacent sites that are described as 'Cleared' are identified in terms of agricultural land capability. Class IV lands – defined as those 'which may be safely used for occasional cultivation with careful management', 'where it is favourably located for special usage', and where it is 'helpful to those who are interested in industry or regional planning or in reconstruction' – are examined for their appropriate area, for current tenure and for any conditions such as Mining Leases that may exist. The positive impacts from offsets on adjoining lands can then be designed to be significant; examples are also offered in respect of riparian areas and of Marine Parks. Criteria against which to measure performance for trading purposes include functional lift, with other case studies about this matter reported separately in this issue. The procedure takes no account of demand-side economics (financial additionality), which requires commercial rather than environmental analysis.
Abstract:
Becoming a teacher in technology-rich classrooms is a complex and challenging transition for career-change entrants. Those with generic or specialist Information and Communication Technology (ICT) expertise bring a mindset about purposeful uses of ICT that enriches student learning and school communities. The transition process from a non-education environment is both enhanced and constrained by shifting the technology context of generic or specialist ICT expertise developed through a former career as well as general life experience. In developing an understanding of the complexity of classrooms and creating a learner-centred way of working, perceptions about learners and learning evolve and shift. Shifts in thinking about how ICT expertise supports learners and enhances learning preceded shifts in perceptions about being a teacher, working with colleagues, and functioning in schools, with varying degrees of intensity and impact on evolving professional identities. Current teacher education and school induction programs are seen to be falling short of meeting the needs of career-change entrants and, as a flow-on, the students they nurture. Research (see, for example, Tigchelaar, Brouwer, & Korthagen, 2008; Williams & Forgasz, 2009) highlights the value of the generic and specialist expertise career-change teachers bring to the profession and draws attention to the challenges such expertise begets (Anthony & Ord, 2008; Priyadharshini & Robinson-Pant, 2003). As such, the study described in this thesis investigated the perceptions of career-change entrants who have generic (Mishra & Koehler, 2006) or specialist expertise, that is, ICT qualifications and work experience in the use of ICT. The career-change entrants' perceptions were sought as they shifted the technology context and transitioned into teaching in technology-rich classrooms. The research involved an interpretive analysis of qualitative and quantitative data. The study used the explanatory case study (Yin, 1994) methodology, enriched through grounded theory processes (Strauss & Corbin, 1998), to develop a theory about professional identity transition from the perceptions of the participants in the study. The study provided insights into the expertise and experiences of career-change entrants, particularly in relation to how professional identities that include generic and specialist ICT knowledge and expertise were reconfigured while transitioning into the teaching profession. This thesis presents the Professional Identity Transition Theory, which encapsulates perceptions about teaching in technology-rich classrooms amongst a selection of the increasing number of career-change entrants. The theory, grounded in the data (Strauss & Corbin, 1998), proposes that career-change entrants experience transition phases of varying intensity that impact on professional identity, retention and development as a teacher. These phases are linked to a shift in perceptions rather than time as a teacher. Generic and specialist expertise in the use of ICT is both a weight from the past and an asset, which makes the transition process more challenging for career-change entrants. The study showed that career-change entrants used their experiences and perceptions to develop a way of working in a school community. Their way of working initially had an adaptive orientation focussed on immediate needs as their teaching practice developed.
Following a shift of thinking, more generative ways of working focussed on the future emerged to enable continual enhancement and development of practice. Sustaining such learning is a personal, school and systemic challenge for the teaching profession.
Abstract:
Attending potentially dangerous and traumatic incidents is inherent in the role of emergency workers, yet there is a paucity of literature aimed at examining variables that impact on the outcomes of such exposure. Coping has been implicated in adjusting to trauma in other contexts, and this study explored the effectiveness of coping strategies in relation to positive and negative posttrauma outcomes in the emergency services environment. One hundred twenty-five paramedics completed a survey battery including the Posttraumatic Growth Inventory (PTGI; Tedeschi & Calhoun, 1996), the Impact of Events Scale–Revised (IES-R; Weiss & Marmar, 1997), and the Revised-COPE (Zuckerman & Gagne, 2003). Results from the regression analysis demonstrated that specific coping strategies were differentially associated with positive and negative posttrauma outcomes. The research contributes to a more comprehensive understanding regarding the effectiveness of coping strategies employed by paramedics in managing trauma, with implications for their psychological well-being as well as the training and support services available.
Abstract:
This paper addresses the trade-off between energy consumption and localization performance in a mobile sensor network application. The focus is on augmenting GPS location with more energy-efficient location sensors to bound position estimate uncertainty in order to prolong node lifetime. We use empirical GPS and radio contact data from a large-scale animal tracking deployment to model node mobility and GPS and radio performance. These models are used to explore duty-cycling strategies for maintaining position uncertainty within specified bounds. We then explore the benefits of using short-range radio contact logging alongside GPS as an energy-inexpensive means of lowering uncertainty while the GPS is off, and we propose a versatile contact logging strategy that relies on RSSI ranging and GPS lock back-offs to reduce node energy consumption relative to GPS duty cycling. Results show that our strategy can cut node energy consumption in half while meeting application-specific positioning criteria.
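The duty-cycling idea reads naturally as a small simulation. The sketch below is not the paper's algorithm: position uncertainty grows with an assumed maximum speed while the GPS is off, a logged radio contact with a recently fixed neighbour caps it at the neighbour's uncertainty plus the radio range, and a GPS fix is taken only when the bound would otherwise be exceeded. All numbers are assumptions.

```python
def simulate(duration_s, max_speed=1.0, bound_m=100.0, radio_range=30.0,
             contacts=()):                     # contacts: (time_s, neighbour_unc_m)
    uncertainty, gps_fixes = 0.0, 0
    contacts = dict(contacts)
    for t in range(duration_s):
        uncertainty += max_speed               # worst-case drift per second, GPS off
        if t in contacts:                      # cheap bound via radio contact log
            uncertainty = min(uncertainty, contacts[t] + radio_range)
        if uncertainty > bound_m:              # only then pay for a GPS fix
            uncertainty, gps_fixes = 0.0, gps_fixes + 1
    return gps_fixes

no_radio = simulate(3600)
with_radio = simulate(3600, contacts=[(t, 10.0) for t in range(0, 3600, 60)])
print(f"GPS fixes/hour: {no_radio} without contacts, {with_radio} with contacts")
```

Under these illustrative numbers, periodic contacts with a well-localized neighbour keep the uncertainty under the bound and displace GPS fixes entirely, which is the mechanism behind the energy savings the abstract reports.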
Abstract:
We consider the problem of how to efficiently and safely design dose finding studies. Both current and novel utility functions are explored using Bayesian adaptive design methodology for the estimation of a maximum tolerated dose (MTD). In particular, we explore widely adopted approaches such as the continual reassessment method and minimizing the variance of the estimate of an MTD. New utility functions are constructed in the Bayesian framework and are evaluated against current approaches. To reduce computing time, importance sampling is implemented to re-weight posterior samples thus avoiding the need to draw samples using Markov chain Monte Carlo techniques. Further, as such studies are generally first-in-man, the safety of patients is paramount. We therefore explore methods for the incorporation of safety considerations into utility functions to ensure that only safe and well-predicted doses are administered. The amalgamation of Bayesian methodology, adaptive design and compound utility functions is termed adaptive Bayesian compound design (ABCD). The performance of this amalgamation of methodology is investigated via the simulation of dose finding studies. The paper concludes with a discussion of results and extensions that could be included into our approach.
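As a rough sketch of the importance-sampling step mentioned above (not the authors' ABCD code): posterior samples drawn once are re-weighted by the likelihood of a hypothetical next observation, so expected quantities for candidate doses can be evaluated without a fresh MCMC run. The one-parameter logistic dose-toxicity model and all settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def p_tox(dose, theta):
    """Toy one-parameter logistic dose-toxicity curve (illustrative)."""
    return 1.0 / (1.0 + np.exp(-(theta + dose)))

theta_samples = rng.normal(0.0, 1.0, 10_000)   # samples from the current posterior

def reweighted_mean_tox(dose_tried, outcome, dose_query):
    """E[p_tox(dose_query)] under the posterior updated by one hypothetical
    outcome, via self-normalized importance weights (no re-sampling needed)."""
    lik = p_tox(dose_tried, theta_samples)
    weights = lik if outcome == 1 else 1.0 - lik   # likelihood of the outcome
    weights /= weights.sum()
    return np.sum(weights * p_tox(dose_query, theta_samples))

# Expected toxicity at dose 0.5 if dose 0.0 were tried and produced no toxicity:
print(f"{reweighted_mean_tox(dose_tried=0.0, outcome=0, dose_query=0.5):.3f}")
```

Repeating this re-weighting over each candidate dose and outcome gives the ingredients of a compound utility evaluation at a fraction of the cost of re-running the sampler.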
Abstract:
This paper presents an explanation of why the reuse of building components after demolition or deconstruction is critical to the future of the construction industry. An examination of the historical causes of, and responses to, climate change sets the scene as to why governance is becoming increasingly focused on the built environment as a mechanism for controlling the waste generation associated with the processes of demolition, construction and operation. Through an annotated description of the evolving design and construction methodology of a range of timber dwellings (typically 'Queenslanders' of the eras 1880-1900, 1900-1920 and 1920-1940), the paper offers an evaluation of the variety of materials which can be used advantageously by those wishing to 'regenerate' a Queenslander. This analysis of 'regeneration' details the constraints on relocation and/or reuse by adaptation, including the deconstruction of building components, against the legislative framework requirements of the Queensland Building Act 1975 and the Queensland Sustainable Planning Act 2009, with specific examination of the Building Codes of Australia. The paper concludes with a discussion of these constraints, their impacts on 'regeneration' and the need for further research to seek greater understanding of the practicalities and drivers of relocation, adaptive reuse and the suitability of building components for reuse after deconstruction.