536 results for DYNAMIC PORTFOLIO SELECTION
Abstract:
The paper discusses the operating principles and control characteristics of a dynamic voltage restorer (DVR) that protects sensitive loads which may be unbalanced and/or distorted. The main aim of the DVR is to regulate the voltage at the load terminal irrespective of sag/swell, distortion, or unbalance in the supply voltage. In this paper, the DVR is operated in such a fashion that it does not supply or absorb any active power during steady-state operation. Hence, the voltage source inverter realizing the DVR can be supplied by a DC capacitor rather than a DC source. The proposed DVR operation is verified through extensive digital computer simulation studies.
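To illustrate the zero-active-power constraint described above, the sketch below works through a simplified single-phase phasor calculation (not the paper's control scheme; the supply voltage, load current, and per-unit values are hypothetical): the injected voltage is kept in quadrature with the load current so that the load voltage magnitude is restored without the DVR exchanging active power.

```python
import cmath
import math

# Hypothetical single-phase example: restore |V_load| to 1.0 pu after a sag,
# while keeping the injected voltage in quadrature with the load current so
# that the DVR exchanges no active power (P_inj = Re{V_inj * conj(I)} = 0).

V_NOM = 1.0                                  # nominal load voltage magnitude (pu)
v_supply = cmath.rect(0.7, 0.0)              # sagged supply voltage phasor (pu)
i_load = cmath.rect(1.0, math.radians(-30))  # load current phasor (pu)

# The injected voltage must lie along j times the unit vector of the current:
u = 1j * i_load / abs(i_load)

# Solve |v_supply + k*u| = V_NOM for the real scalar k (a quadratic in k).
b = 2.0 * (v_supply * u.conjugate()).real
c = abs(v_supply) ** 2 - V_NOM ** 2
disc = b * b - 4.0 * c
if disc < 0:
    raise ValueError("sag too deep to correct without injecting active power")
k = min(((-b + math.sqrt(disc)) / 2.0, (-b - math.sqrt(disc)) / 2.0), key=abs)

v_inj = k * u                                # minimum-magnitude quadrature injection
v_load = v_supply + v_inj
p_inj = (v_inj * i_load.conjugate()).real
print(f"|V_load| = {abs(v_load):.3f} pu, injected active power = {p_inj:.2e} pu")
```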
Abstract:
This paper presents a novel approach to road-traffic control for interconnected junctions. With a local fuzzy-logic controller (FLC) installed at each junction, a dynamic-programming (DP) technique is proposed to derive the green time for each phase in a traffic-light cycle. Coordination parameters from the adjacent junctions are also taken into consideration so that coordinated control is extended beyond a single junction. Instead of pursuing the absolute optimization of traffic delay, this study examines a practical approach that enables simple implementation of coordination among junctions while still attempting to reduce delays where possible. The simulation results show that the delay per vehicle can be substantially reduced, particularly when the traffic demand reaches the junction capacity. The implementation of this controller does not require complicated or demanding hardware, and such simplicity makes it a useful tool for both offline studies and real-time control.
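As a rough illustration of deriving green times with dynamic programming, the toy sketch below allocates a fixed cycle's green time across the phases of a single junction to minimize an assumed delay proxy. The arrival rate, saturation flow, delay model, and all parameters are illustrative assumptions, not the paper's FLC/DP formulation or its coordination scheme.

```python
# A toy dynamic-programming sketch (not the paper's FLC/DP formulation): allocate
# a fixed amount of green time, in whole seconds, across the phases of one
# junction so that an assumed per-phase delay proxy is minimized.

def phase_delay(green_s: int, arrivals_per_s: float = 0.4) -> float:
    """Crude proxy: squared number of vehicles left unserved in this phase."""
    served = 0.5 * green_s            # assumed saturation flow of 0.5 veh/s
    queued = arrivals_per_s * 60.0    # assumed arrivals over a 60 s cycle
    return max(queued - served, 0.0) ** 2

def split_green(total_green: int, n_phases: int, min_green: int = 5):
    """DP over phases: state = green seconds already used, value = (delay, split)."""
    best = {0: (0.0, [])}
    for _ in range(n_phases):
        nxt = {}
        for used, (cost, split) in best.items():
            for g in range(min_green, total_green - used + 1):
                cand = (cost + phase_delay(g), split + [g])
                if used + g not in nxt or cand[0] < nxt[used + g][0]:
                    nxt[used + g] = cand
        best = nxt
    return best[total_green]          # all green time must be allocated

cost, greens = split_green(total_green=50, n_phases=3)
print(f"green split (s): {greens}, estimated delay score: {cost:.1f}")
```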
Abstract:
The cascading appearance-based (CAB) feature extraction technique has established itself as the state of the art in extracting dynamic visual speech features for speech recognition. In this paper, we focus on investigating the effectiveness of this technique for the related speaker verification application. By investigating the speaker verification ability of each stage of the cascade, we demonstrate that the same steps taken to reduce static speaker and environmental information for the visual speech recognition application also provide similar improvements for visual speaker recognition. A further study compares synchronous HMM (SHMM) based fusion of CAB visual features and traditional perceptual linear predictive (PLP) acoustic features, showing that the higher complexity inherent in the SHMM approach does not appear to provide any improvement in the final audio-visual speaker verification system over simpler utterance-level score fusion.
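For context on the simpler utterance-level score fusion mentioned above, the sketch below shows one common form of it: each modality produces a single verification score per utterance, the scores are normalized, and a weighted sum is thresholded. The weight, threshold, and normalization against an impostor cohort are illustrative assumptions, not the paper's configuration.

```python
# Illustrative utterance-level score fusion (not the paper's exact system): each
# modality yields one verification score per utterance, the scores are
# z-normalized against an impostor cohort, combined with a fixed weight, and the
# fused score is thresholded to accept or reject the claimed identity.

from statistics import mean, stdev

def z_norm(score: float, impostor_scores: list) -> float:
    """Normalize a score against an impostor score distribution (assumed given)."""
    return (score - mean(impostor_scores)) / stdev(impostor_scores)

def fuse(audio_score: float, visual_score: float,
         audio_impostors: list, visual_impostors: list,
         w_audio: float = 0.7) -> float:
    """Weighted sum of normalized per-utterance scores (weight is an assumption)."""
    a = z_norm(audio_score, audio_impostors)
    v = z_norm(visual_score, visual_impostors)
    return w_audio * a + (1.0 - w_audio) * v

# Hypothetical scores and impostor cohorts:
fused = fuse(audio_score=2.1, visual_score=1.4,
             audio_impostors=[0.2, -0.1, 0.4, 0.0, 0.3],
             visual_impostors=[0.5, 0.1, -0.2, 0.3, 0.0])
THRESHOLD = 1.0   # would be tuned on development data in practice
print("accept" if fused > THRESHOLD else "reject", round(fused, 2))
```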
Abstract:
Design teams are confronted with the quandary of choosing apposite building control systems to suit the needs of particular intelligent building projects, due to the availability of innumerable ‘intelligent’ building products and a dearth of inclusive evaluation tools. This paper develops a model for facilitating the selection evaluation of intelligent HVAC control systems for commercial intelligent buildings. To achieve this, systematic research activities were conducted: first, to develop, test and refine the general conceptual model using consecutive surveys; then, to convert the developed conceptual framework into a practical model; and, finally, to evaluate the effectiveness of the model by means of expert validation. The survey results indicate that ‘total energy use’ is perceived as the top selection criterion, followed by ‘system reliability and stability’, ‘operating and maintenance costs’, and ‘control of indoor humidity and temperature’. This research not only presents a systematic and structured approach to evaluating candidate intelligent HVAC control systems against the critical selection criteria (CSC), but also suggests a benchmark for the selection of one control system candidate over another.
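As a minimal sketch of how such a selection evaluation might be operationalized, the example below ranks hypothetical candidate systems by a weighted score over the four top criteria named above. The weights and candidate scores are placeholders, not the paper's survey findings or its actual model.

```python
# Minimal weighted-scoring sketch for ranking candidate HVAC control systems
# against critical selection criteria (CSC). Weights and scores are hypothetical.

criteria_weights = {
    "total energy use": 0.35,
    "system reliability and stability": 0.25,
    "operating and maintenance costs": 0.22,
    "control of indoor humidity and temperature": 0.18,
}

candidates = {                  # scores on a 1-5 scale (illustrative only)
    "System A": {"total energy use": 4, "system reliability and stability": 3,
                 "operating and maintenance costs": 4,
                 "control of indoor humidity and temperature": 5},
    "System B": {"total energy use": 5, "system reliability and stability": 4,
                 "operating and maintenance costs": 3,
                 "control of indoor humidity and temperature": 3},
}

def weighted_score(scores: dict) -> float:
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranking = sorted(candidates, key=lambda name: weighted_score(candidates[name]),
                 reverse=True)
for name in ranking:
    print(name, round(weighted_score(candidates[name]), 2))
```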
Abstract:
Railway service is now a major means of transportation in many countries around the world. With increasing populations and expanding commercial and industrial activities, a high quality of railway service is highly desirable. Train service usually varies with population activities throughout the day, so train coordination and service regulation are expected to meet the daily passenger demand. Dwell-time control at stations and a fixed coasting point in an inter-station run are the current practices for regulating train service in most metro railway systems. However, flexible and efficient train control and operation are not always possible. Coast control is an economical approach to balancing run time and energy consumption in railway operation when time is not a critical issue, particularly at off-peak hours, since it minimizes the energy consumption of train operation at the cost of certain compromises on the train schedule. The capability to identify the starting point for coasting according to the current traffic conditions provides the necessary flexibility for train operation. This paper presents an application of genetic algorithms (GA) to search for the appropriate coasting point(s) and investigates the possible improvement in the fitness of the genes. Single and multiple coasting point control with a simple GA are developed to obtain the solutions, and the corresponding train movement is examined. Further, a hierarchical genetic algorithm (HGA) is introduced to identify the number of coasting points required according to the traffic conditions, and the Minimum-Allele-Reserve-Keeper (MARK) is adopted as a genetic operator to achieve fitter solutions.
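To make the search concrete, the toy sketch below applies a simple GA to a single coasting point on a crude run-time/energy trade-off. The fitness proxy, run-time limit, and GA settings are illustrative assumptions only; they stand in for the train movement simulation, the HGA, and the MARK operator used in the paper.

```python
# Toy GA for a single coasting point (not the paper's simulator, HGA, or MARK
# operator). The train motors up to a coasting point x (fraction of the
# inter-station distance), then coasts; fitness trades a crude energy proxy
# against a run-time penalty. All models and constants are assumptions.

import random

RUN_TIME_LIMIT = 120.0            # allowed inter-station run time (s), assumed

def simulate(x: float):
    """Very crude proxies: energy grows with the motoring fraction x,
    run time grows as coasting starts earlier (smaller x)."""
    energy = 100.0 * x
    run_time = 90.0 + 60.0 * (1.0 - x)
    return energy, run_time

def fitness(x: float) -> float:
    energy, run_time = simulate(x)
    penalty = max(run_time - RUN_TIME_LIMIT, 0.0) * 10.0
    return -(energy + penalty)    # higher is better

def ga(pop_size=30, generations=60, mutation=0.1):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0                       # arithmetic crossover
            if random.random() < mutation:
                child = min(1.0, max(0.0, child + random.gauss(0.0, 0.05)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
print(f"coasting point at {best:.2f} of the inter-station distance")
```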
Abstract:
There are several noninvasive techniques for assessing the kinetics of tear film, but no comparative studies have been conducted to evaluate their efficacies. Our aim is to test and compare techniques based on high-speed videokeratoscopy (HSV), dynamic wavefront sensing (DWS), and lateral shearing interferometry (LSI). Algorithms are developed to estimate the tear film build-up time, T_BLD, and the average tear film surface quality in the stable phase of the interblink interval, TFSQ_Av. Moderate but significant correlations are found between T_BLD measured with LSI and DWS based on vertical coma (Pearson's r² = 0.34, p < 0.01) and higher-order rms (r² = 0.31, p < 0.01), as well as between TFSQ_Av measured with LSI and HSV (r² = 0.35, p < 0.01), and between LSI and DWS based on the rms fit error (r² = 0.40, p < 0.01). No significant correlation is found between HSV and DWS. All three techniques estimate tear film build-up time to be below 2.5 sec, and they achieve a remarkably close median value of 0.7 sec. HSV appears to be the most precise method for measuring tear film surface quality. LSI appears to be the most sensitive method for analyzing tear film build-up.
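Purely as an illustration of the kind of agreement measure quoted above, the snippet below computes Pearson's r² between two hypothetical sets of build-up time estimates; the data are made up and do not come from the study.

```python
# Minimal sketch of quantifying agreement between two techniques' build-up time
# estimates with Pearson's r^2 (illustrative made-up data, not the study's).

import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

t_bld_lsi = [0.6, 0.8, 1.2, 0.5, 0.9, 1.6, 0.7]   # hypothetical LSI estimates (s)
t_bld_dws = [0.7, 0.9, 1.0, 0.6, 1.1, 1.4, 0.8]   # hypothetical DWS estimates (s)
r = pearson_r(t_bld_lsi, t_bld_dws)
print(f"r^2 = {r * r:.2f}")
```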
Abstract:
The notion of routines as mechanisms for achieving stability and change in organisations is well established in the organisational theory literature (Becker, 2004). However, the relationship between the dynamics of selection, adaptation and retention, and the increase or decrease in the varieties of routines which result from these processes, is not as well established theoretically or empirically. This paper investigates the processes associated with the evolution of an inter-organisational routine over time. The paper contributes to theory by advancing a conceptual clarification between the dynamics of organisational routines which produce variation and the varieties of routines which are generated as a result of such processes, and by offering an explanation for the relationship between selection, adaptation and retention dynamics and the creation of variety. The research is supported by analysis of empirical data pertaining to the procurement of engineering assets in a large asset-intensive organisation.
Abstract:
The purpose of this study is to contribute to the cross-disciplinary body of literature on identity and organisational culture. This study empirically investigated the Hatch and Schultz (2002) Organisational Identity Dynamics (OID) model to look at linkages between identity, image, and organisational culture. The study used processes defined in the OID model as a theoretical frame by which to understand the relationships between actual and espoused identity manifestations across visual identity (VI), corporate identity (CI), and organisational identity (OI). The linking processes of impressing, mirroring, reflecting, and expressing were discussed at three distinct levels in the organisation. The overarching research question, ‘How does the organisational identity dynamics process manifest itself in practice at different levels within an organisation?’, was used as a means of providing empirical understanding to the previously theoretical OID model. Case study analysis was utilised to provide exploratory data across three organisational groups: Level A, Senior Marketing and Corporate Communications Management; Level B, Marketing and Corporate Communications Staff; and Level C, Non-Marketing Managers and Employees. Data were collected via 15 in-depth interviews, with documentary analysis used as a supporting mechanism to provide triangulation in analysis. Data were analysed against the impressing, mirroring, reflecting, and expressing constructs, with specific criteria developed from the literature to provide a detailed analysis of each process. Conclusions revealed marked differences in the ways in which OID processes occurred across different levels, with implications for the ways in which VI, CI, and OI interact to develop a holistic identity across organisational levels. Implications for theory detail the need to understand and utilise cultural understanding in identity programs, as well as the value of developing identity communications which represent an actual rather than an espoused position.
Abstract:
Life Cycle Cost Analysis (LCCA) provides a synopsis of the initial and consequential costs of building-related decisions. These cost figures may be used to justify higher investments, for example in the quality or flexibility of building solutions, through long-term cost reduction. The emerging discipline of asset management is a promising approach to this problem, because it can do things that techniques such as balanced scorecards and total quality management cannot. Decisions must be made about operating and maintaining infrastructure assets. An underlying premise of life cycle costing is that the longer something lasts, the less it costs over time. Life cycle cost analysis is used here as an economic evaluation tool in combination with a number of supporting analyses. LCCA quantifies costs that are commonly overlooked by property and asset managers and designers, such as replacement and maintenance costs. The purpose of this research is to apply Life Cycle Cost Analysis to building floor materials. By implementing the life cycle cost analysis, the true cost of each material is computed over a projected building service life of 60 years with an inflation rate of 5.4%, in order to classify and appreciate the differences among the materials. The analysis results show the strong impact that the selection of floor materials has on the potential service life cycle cost.
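As a minimal, hedged sketch of the kind of calculation involved, the example below totals escalated maintenance and replacement costs over the 60-year service life using the 5.4% rate quoted above. The material prices, maintenance costs, and replacement intervals are hypothetical placeholders, and a full study would typically also discount future costs to present value.

```python
# Minimal life cycle cost sketch over a 60-year service life with costs escalated
# at the 5.4% rate mentioned above. All material figures are hypothetical.

SERVICE_LIFE = 60        # years
ESCALATION = 0.054       # annual cost escalation (inflation) rate

def life_cycle_cost(initial, annual_maintenance, replacement_cost, replace_every):
    total = initial
    for year in range(1, SERVICE_LIFE + 1):
        factor = (1 + ESCALATION) ** year
        total += annual_maintenance * factor
        if replace_every and year % replace_every == 0 and year < SERVICE_LIFE:
            total += replacement_cost * factor
    return total

floors = {   # cost per m^2: (initial, annual maintenance, replacement, interval)
    "vinyl":    (30.0, 2.0, 30.0, 15),
    "carpet":   (25.0, 3.0, 25.0, 10),
    "terrazzo": (90.0, 1.0,  0.0,  0),
}

for name, params in sorted(floors.items(),
                           key=lambda kv: life_cycle_cost(*kv[1])):
    print(f"{name:9s} LCC = {life_cycle_cost(*params):10.0f} per m^2")
```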
Abstract:
Griffith University is developing a digital repository system using HarvestRoad Hive software to better meet the needs of academics and students using institutional learning and teaching, course readings, and institutional intellectual capital systems. Issues with current operations and systems are discussed in terms of user behaviour. New repository systems are being designed in such a way that they address current service and user behaviour issues by closely aligning systems with user needs. By developing attractive online services, Griffith is working to change current user behaviour to achieve strategic priorities in the sharing and reuse of learning objects, improved selection and use of digitised course readings, the development of ePrint and eScience services, and the management of a research portfolio service.
Abstract:
Delegation, from a technical point of view, is widely considered a potential approach to the problem of providing dynamic access control decisions in activities with a high level of collaboration, either within a single security domain or across multiple security domains. Although delegation continues to attract significant attention from the research community, there is presently no published work that presents a taxonomy of delegation concepts and models. This paper intends to address this gap by presenting a set of taxonomic criteria relevant to the concept of delegation, and applies the taxonomy to a selection of significant delegation models published in the literature.
Abstract:
Delegation, from a technical point of view, is widely considered as a potential approach in addressing the problem of providing dynamic access control decisions in activities with a high level of collaboration, either within a single security domain or across multiple security domains. Although delegation continues to attract significant attention from the research community, presently, there is no published work that presents a taxonomy of delegation concepts and models. This article intends to address this gap by presenting a set of taxonomic criteria relevant to the concept of delegation. This article also applies the taxonomy to a selection of significant delegation models published in the literature.
Abstract:
The traditional searching method for model-order selection in linear regression is a nested full-parameter-set search over the desired orders, which we call full-model order selection. A model-selection method, on the other hand, searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracy than the traditional one, especially at low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). We also show that for some models the performance of the bootstrap-based criterion improves significantly when the proposed partial-model selection searching method is used.

Index Terms: model order estimation, model selection, information theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike’s information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes the comparison between model-order selection algorithms difficult, as within the same model with a given order one can find examples for which any one of the methods performs well or fails [6, 8]. Our aim is to improve the performance of the model-order selection criteria in cases where the SNR is low by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial-model order search within the given model order. Understandably, the improvement in the performance of the model-order estimation comes at the expense of additional computational complexity.
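To make the distinction concrete, the toy sketch below contrasts the nested full-model search with a best-subset (partial-model) search for each order, scoring both with AIC on ordinary least-squares fits. The simulated data, candidate regressors, and the specific AIC form are illustrative assumptions, not the paper's simulation setup or its full set of criteria.

```python
# Toy contrast of full-model vs partial-model order selection using AIC on a
# linear-regression example (plain least squares, Gaussian noise). Data and
# regressors are illustrative assumptions only.

from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n, max_order = 100, 5
t = np.linspace(0.0, 1.0, n)
X = np.column_stack([t ** p for p in range(1, max_order + 1)])  # candidate regressors
y = 2.0 * X[:, 0] + 1.5 * X[:, 3] + rng.normal(0.0, 0.5, n)     # sparse true model

def aic(y, X_sub, k):
    beta, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
    rss = float(np.sum((y - X_sub @ beta) ** 2))
    return n * np.log(rss / n) + 2 * k

# Full-model order selection: nested sets {x1}, {x1, x2}, ... up to max_order.
full = min(range(1, max_order + 1), key=lambda k: aic(y, X[:, :k], k))

# Partial-model order selection: best subset of each size k, then the best k.
def best_subset_aic(k):
    return min(aic(y, X[:, list(c)], k) for c in combinations(range(max_order), k))

partial = min(range(1, max_order + 1), key=best_subset_aic)
print("full-model order:", full, " partial-model order:", partial)
```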
Abstract:
The aim of this paper is to aid researchers in selecting appropriate qualitative methods in order to develop and improve future studies in the field of emotional design. These methods include observations, think-aloud protocols, questionnaires, diaries and interviews. Based on the authors’ experiences, it is proposed that the methods under review can be successfully used for collecting data on emotional responses to evaluate user-product relationships. This paper reviews the methods and discusses their suitability, advantages and challenges in relation to design and emotion studies. Furthermore, the paper outlines the potential impact of technology on the application of these methods, discusses the implications of these methods for emotion research, and concludes with recommendations for future work in this area.