258 results for attribute subset selection
Abstract:
The literature and anecdotal evidence suggest that there is more to tenancy selection (firm location) than the profit-maximisation drive that traditional neo-classical economic location theory suggests. In the first instance these models assume property markets are rational and perfectly competitive; the CBD office market is clearly neither rational nor perfectly competitive. This fact alone relegates such models to the margins of usefulness for an industry that seeks to satisfy tenant demand in order to optimise returns on capital invested. Acknowledgment of property market imperfections is universally accepted, to the extent that all contemporary texts discuss the lack of a coherent centralised market place and incomplete, poorly disseminated information as fundamental inadequacies characterising property market inefficiency. Less well researched are the facets of the market which allow the observer to determine market activity to be significantly irrational. One such facet is that of ‘decision maker preferences’. The decision to locate a business operation at one location as opposed to another seems ostensibly a routine choice based on short-, medium- and long-term business objectives. These objectives are derived from a process of strategic planning by one or more individuals whose goal is held to be to optimise outcomes which benefit the business (and presumably those employed within it). However, the decision-making processes appear bounded by how firms function, the institutional context in which they operate, and opportunistic behaviour by individual decision makers who allow personal preferences to infiltrate and ‘corrupt’ the process. In this way, history, culture, geography and institutions all become significant to the extent that they influence and shape individual behaviour, which in turn determines the morphology of individual preferences, as well as providing a conduit for them to take effect.
This paper examines historical and current literature on the impact of individual behaviour in the decision-making process within organisations, as a precursor to an investigation of the tenancy decision-making process within the CBD office market. Literature on the topic falls within a number of research disciplines: philosophy, psychology and economics, to name a few.
Abstract:
For a sustainable building industry, not only the environmental and economic indicators but also the societal indicators of a building should be evaluated. Current indicators can conflict with each other, making it difficult to quantify and assess sustainability clearly. For a sustainable building, the objectives of decreasing both adverse environmental impact and cost are in conflict. In addition, even when both objectives are satisfied, building management systems may present other problems, such as occupant convenience, building flexibility, or technical maintenance, which are difficult to quantify as exact assessment data. These conflicting problems confronting building managers and planners render building management more difficult. This paper presents a methodology to evaluate a sustainable building considering the socio-economic and environmental characteristics of buildings, and is intended to assist the decision making of building planners and practitioners. The suggested methodology employs three main concepts: linguistic variables, fuzzy numbers, and the analytic hierarchy process. The linguistic variables are used to represent the degree of appropriateness of qualitative indicators, which are vague or uncertain. These linguistic variables are then translated into fuzzy numbers to reflect their uncertainties and aggregated into a final fuzzy decision value using a hierarchical structure. Through a case study, the suggested methodology is applied to the evaluation of a building. The result demonstrates that the suggested approach can be a useful tool for evaluating building sustainability.
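The linguistic-variable/fuzzy-number/hierarchy pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the linguistic scale, the triangular fuzzy numbers, the indicator names and the weights are all invented assumptions.

```python
# Hypothetical linguistic scale mapped to triangular fuzzy numbers (l, m, u);
# the terms and their values are illustrative assumptions, not the paper's scale.
SCALE = {
    "poor": (0.0, 0.0, 0.25),
    "fair": (0.0, 0.25, 0.5),
    "good": (0.25, 0.5, 0.75),
    "very good": (0.5, 0.75, 1.0),
    "excellent": (0.75, 1.0, 1.0),
}

def weighted_aggregate(ratings, weights):
    """Aggregate fuzzy indicator ratings with crisp hierarchy weights
    (assumed to sum to 1) into one triangular fuzzy decision value."""
    l = sum(w * SCALE[r][0] for r, w in zip(ratings, weights))
    m = sum(w * SCALE[r][1] for r, w in zip(ratings, weights))
    u = sum(w * SCALE[r][2] for r, w in zip(ratings, weights))
    return (l, m, u)

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(tfn) / 3.0

# Illustrative indicators: environmental, economic, societal.
fuzzy_value = weighted_aggregate(["good", "fair", "excellent"], [0.5, 0.3, 0.2])
score = defuzzify(fuzzy_value)
```

In a full hierarchy this aggregation would be applied bottom-up, with AHP pairwise comparisons supplying the weights at each level.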
Abstract:
Purpose: Choosing the appropriate procurement system for construction projects is a complex and challenging task for clients, particularly when professional advice has not been sought. To assist with the decision-making process, a range of procurement selection tools and techniques have been developed by both academic and industry bodies. Public sector clients in Western Australia (WA) remain uncertain about the pairing of procurement method to bespoke construction project and how this decision will ultimately impact upon project success. This paper examines ‘how and why’ a public sector agency selected particular procurement methods.
Methodology/Approach: An analysis of two focus group workshops (with 18 senior project and policy managers involved with procurement selection) is reported upon.
Findings: The traditional lump sum (TLS) method is still the preferred procurement path, even though alternative forms such as design and construct or public-private partnerships could optimise the project outcome. Paradoxically, workshop participants agreed that alternative procurement forms should be considered, but an embedded culture of uncertainty avoidance invariably meant that TLS methods were selected. Senior managers felt that only a limited number of contractors have the resources and experience to deliver projects using the non-traditional methods considered.
Research limitations/implications: The research identifies a need to develop a framework that public sector clients can use to select an appropriate procurement method. A procurement framework should guide the decision-maker rather than provide a prescriptive solution. Learning from previous experiences with regard to procurement selection will further provide public sector clients with knowledge about how best to deliver their projects.
Abstract:
Decision Support Systems (DSS) have played a significant role in construction project management, as evidenced by the many DSS implementations throughout the construction project life cycle. However, most research has concentrated on model development and neglected fundamental aspects of Information System development. As a result, research outputs are difficult for lay people to adopt, particularly those from a non-technical background. Hence, a DSS should hide the abstraction and complexity of its models behind a more usable, user-oriented system. To demonstrate a desirable DSS architecture, particularly in public sector planning, we propose a generic DSS framework for consultant selection, focusing on the engagement of engineering consultants for irrigation and drainage infrastructure. The framework spans the operational to the strategic decision level. The expected result of the research is a robust DSS framework for consultant selection. In addition, the paper discusses issues related to the existing DSS framework by integrating enabling technologies from computing. This paper is based on a preliminary case study conducted via literature review and archival documents at the Department of Irrigation and Drainage (DID) Malaysia. The paper contributes directly to the enhancement of consultant pre-qualification assessment and selection tools. With the introduction of DSS in this area, the selection process will be more time-efficient, will intuitively aid qualitative judgment, and will yield transparent decisions through the aggregation of decisions among stakeholders.
Abstract:
In two experiments, we study how the temporal orientation of consumers (i.e., future-oriented or present-oriented), temporal construal (distant future, near future), and product attribute importance (primary, secondary) influence advertisement evaluations. Data suggest that future-oriented consumers react most favorably to ads that feature a product to be released in the distant future and that highlight primary product attributes. In contrast, present-oriented consumers prefer near-future ads that highlight secondary product attributes. Study 2 shows that consumer attitudes are mediated by perceptions of attribute diagnosticity (i.e., the perceived usefulness of the attribute information). Together, these experiments shed light on how individual differences, such as temporal orientation, offer valuable insights into temporal construal effects in advertising.
Abstract:
The CDIO (Conceive-Design-Implement-Operate) Initiative has been globally recognised as an enabler for engineering education reform. With the CDIO process, the CDIO Standards and the CDIO Syllabus, many scholarly contributions have been made around cultural change, curriculum reform and learning environments. In the Australasian region, reform is gaining significant momentum within the engineering education community, the profession, and higher education institutions. This paper presents the CDIO Syllabus cast into the Australian context by mapping it to the Engineers Australia Graduate Attributes, the Washington Accord Graduate Attributes and the Queensland University of Technology Graduate Capabilities. Furthermore, in recognition that many secondary schools and technical training institutions offer introductory engineering technology subjects, this paper presents an extended self-rating framework suited for recognising developing levels of proficiency at a preparatory level. A demonstrator mapping tool has been created to demonstrate the application of this extended graduate attribute mapping framework as a precursor to an integrated curriculum information model.
Abstract:
The problem of impostor dataset selection for GMM-based speaker verification is addressed through the recently proposed data-driven background dataset refinement technique. The SVM-based refinement technique selects from a candidate impostor dataset those examples that are most frequently selected as support vectors when training a set of SVMs on a development corpus. This study demonstrates the versatility of dataset refinement in the task of selecting suitable impostor datasets for use in GMM-based speaker verification. The use of refined Z- and T-norm datasets provided performance gains of 15% in EER in the NIST 2006 SRE over the use of heuristically selected datasets. The refined datasets were shown to generalise well to the unseen data of the NIST 2008 SRE.
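The refinement idea sketched in the abstract, ranking candidate impostor examples by how often they are selected as support vectors across SVMs trained on a development corpus, can be illustrated on synthetic data. This is a toy sketch under invented assumptions (feature dimension, pool sizes, Gaussian speaker data, linear kernel), not the authors' experimental setup.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Hypothetical data: a candidate impostor pool and a small development corpus
# of target speakers (10 speakers x 8 utterances, 5-dimensional features).
candidates = rng.normal(0.0, 1.0, size=(60, 5))
dev_speakers = rng.normal(2.0, 1.0, size=(10, 8, 5))

# Count how often each candidate becomes a support vector when an SVM
# is trained for each development speaker against the full pool.
counts = np.zeros(len(candidates), dtype=int)
for spk in dev_speakers:
    X = np.vstack([spk, candidates])
    y = np.r_[np.ones(len(spk)), np.zeros(len(candidates))]
    svm = SVC(kernel="linear").fit(X, y)
    # Support vectors drawn from the impostor class mark informative candidates.
    imp_sv = svm.support_[svm.support_ >= len(spk)] - len(spk)
    counts[imp_sv] += 1

# Refined background: the most frequently selected impostor examples.
refined_idx = np.argsort(counts)[::-1][:20]
refined_background = candidates[refined_idx]
```

The refined background would then replace the heuristically chosen impostor set when training speaker SVMs on evaluation data.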
Abstract:
A data-driven background dataset refinement technique was recently proposed for SVM-based speaker verification. This method selects a refined SVM background dataset from a set of candidate impostor examples after individually ranking examples by their relevance. This paper extends this technique to the refinement of the T-norm dataset for SVM-based speaker verification. The independent refinement of the background and T-norm datasets provides a means of investigating the sensitivity of SVM-based speaker verification performance to the selection of each of these datasets. Using refined datasets provided improvements of 13% in min. DCF and 9% in EER over the full set of impostor examples on the 2006 SRE corpus, with the majority of these gains due to refinement of the T-norm dataset. Similar trends were observed for the unseen data of the NIST 2008 SRE.
Abstract:
In this study, the authors propose a novel video stabilisation algorithm for mobile platforms with moving objects in the scene. The quality of videos obtained from mobile platforms, such as unmanned airborne vehicles, suffers from jitter caused by several factors. In order to remove this undesired jitter, accurate estimation of global motion is essential. However, it is difficult to estimate global motion accurately from mobile platforms due to increased estimation errors and noise. Additionally, large moving objects in the video scenes contribute to the estimation errors. Currently, only very few motion estimation algorithms have been developed for video scenes collected from mobile platforms, and this paper shows that these algorithms fail when there are large moving objects in the scene. In this study, a theoretical proof is provided which demonstrates that the use of delta optical flow can improve the robustness of video stabilisation in the presence of large moving objects in the scene. The authors also propose to use sorted arrays of local motions and the selection of feature points to separate outliers from inliers. The proposed algorithm is tested over six video sequences, collected from one fixed platform, four mobile platforms and one synthetic video, of which three contain large moving objects. Experiments show that the proposed algorithm performs well on all of these video sequences.
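The use of sorted local motions to separate outliers from inliers can be illustrated with a simple trimmed estimate of global motion. This sketch only illustrates the outlier-rejection idea; it is not the authors' algorithm, which also relies on delta optical flow and feature-point selection, and the trim fraction is an assumption.

```python
import numpy as np

def robust_global_motion(local_motions, trim=0.25):
    """Estimate global (dx, dy) from local motion vectors by sorting each
    component and averaging only the central portion, discarding the tails
    that large moving objects contribute as outliers."""
    v = np.asarray(local_motions, dtype=float)
    lo, hi = int(len(v) * trim), int(len(v) * (1 - trim))
    return tuple(np.sort(v[:, c])[lo:hi].mean() for c in range(v.shape[1]))

# Eight background vectors agree on the camera motion (2, -1);
# two vectors from a large moving object would bias a plain mean.
vectors = [(2.0, -1.0)] * 8 + [(10.0, 10.0)] * 2
estimate = robust_global_motion(vectors)
```

A plain mean of these vectors would drift toward the moving object, while the trimmed estimate recovers the camera motion.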
Abstract:
Biased estimation has the advantage of reducing the mean squared error (MSE) of an estimator. The question of interest is how biased estimation affects model selection. In this paper, we introduce biased estimation to a range of model selection criteria. Specifically, we analyze the performance of the minimum description length (MDL) criterion based on biased and unbiased estimation and compare it against modern model selection criteria such as Kay's conditional model order estimator (CME), the bootstrap, and the more recently proposed hook-and-loop resampling based model selection. The advantages and limitations of the considered techniques are discussed. The results indicate that, in some cases, biased estimators can slightly improve the selection of the correct model. We also give an example for which the CME with an unbiased estimator fails but regains its power when a biased estimator is used.
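The opening claim, that bias can reduce MSE, is seen in the textbook example of Gaussian variance estimation: dividing by n instead of n-1 introduces bias but shrinks the variance of the estimator enough to lower its MSE. This is a generic illustration, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0          # true variance
n = 10                # sample size
trials = 20000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))

# Unbiased (divide by n-1) vs biased (divide by n) variance estimators.
unbiased = samples.var(axis=1, ddof=1)
biased = samples.var(axis=1, ddof=0)

mse_unbiased = np.mean((unbiased - sigma2) ** 2)
mse_biased = np.mean((biased - sigma2) ** 2)
```

For Gaussian data the theoretical values are 2*sigma2**2/(n-1) for the unbiased estimator and (2n-1)*sigma2**2/n**2 for the biased one, so the biased estimator wins for every n.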
Abstract:
Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, and enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier.
Further, the domains for the modelling of session variation were contrasted, revealing a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques due to the similarities in how they achieved their objectives. The second theme saw the proposal of a novel model for the purpose of session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all encountered test utterances during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further benefits in performance over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background, by exploiting the SVM training process. This selection was performed on a per-observation basis so as to overcome the shortcoming of the traditional heuristic-based approach to dataset selection. Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically selected dataset, whilst being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of impostor cohorts required in alternate techniques for speaker verification.
Abstract:
The recently proposed data-driven background dataset refinement technique provides a means of selecting an informative background for support vector machine (SVM)-based speaker verification systems. This paper investigates the characteristics of the impostor examples in such highly informative background datasets. Data-driven dataset refinement individually evaluates the suitability of candidate impostor examples for the SVM background prior to selecting the highest-ranking examples as a refined background dataset. Further, the characteristics of the refined dataset were analysed to investigate the desired traits of an informative SVM background. The most informative examples of the refined dataset were found to consist of large amounts of active speech and distinctive language characteristics. The data-driven refinement technique was shown to filter the set of candidate impostor examples to produce a more disperse representation of the impostor population in the SVM kernel space, thereby reducing the number of redundant and less informative examples in the background dataset. Furthermore, data-driven refinement was shown to provide performance gains when applied to the difficult task of refining a small candidate dataset that was mismatched to the evaluation conditions.
Abstract:
This study assesses the recently proposed data-driven background dataset refinement technique for speaker verification using SVM feature sets other than the GMM supervector features for which it was originally designed. The performance improvements brought about in each trialled SVM configuration demonstrate the versatility of background dataset refinement. This work also extends the originally proposed technique to exploit support vector coefficients as an impostor-suitability metric in the data-driven selection process. Using support vector coefficients improved the performance of the refined datasets in the evaluation of unseen data. Further, attempts are made to exploit the differences in impostor-example suitability measures from varying feature spaces to provide added robustness.
Abstract:
We investigate whether characteristics of the home country capital environment, such as information disclosure and investor rights protection, continue to affect ADRs cross-listed in the U.S. Using microstructure measures as proxies for adverse selection, we find that characteristics of the home markets continue to be relevant, especially for emerging market firms. Less transparent disclosure, poorer protection of investor rights, and weaker legal institutions are associated with higher levels of information asymmetry. Developed market firms appear to be affected by whether home business laws are of common law or civil law legal origin. Our finding contributes to the bonding literature. It suggests that cross-listing in the U.S. should not be viewed as a substitute for improvement in the quality of local institutions, and attention must be paid to improving investor protection in order to achieve the full benefits of improved disclosure. Improvement in the domestic capital market environment can attract more investors even for U.S. cross-listed firms.