406 results for Optimal Portfolio Selection
Abstract:
Rapidly increasing electricity demand and capacity shortages of transmission and distribution facilities are the main driving forces behind the growth of Distributed Generation (DG) integration in power grids. One of the reasons for choosing a DG is its ability to support voltage in a distribution system. Selecting effective DG characteristics and parameters is a significant concern for distribution system planners seeking to obtain the maximum potential benefit from a DG unit. This paper addresses the issue of improving the network voltage profile in distribution systems by installing a DG of the most suitable size at the most suitable location. An analytical approach is developed, based on algebraic equations for uniformly distributed loads, to determine the optimal operation, size and location of the DG in order to achieve the required levels of network voltage. The developed method is simple to use for conceptual design and analysis of distribution system expansion with a DG, and is suitable for a quick estimation of DG parameters (such as the optimal operating angle, size and location of a DG system) in a radial network. A practical network is used to verify the proposed technique, and test results are presented.
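The abstract does not reproduce the underlying algebraic equations, so the following is only a minimal sketch of the idea: on a hypothetical radial feeder with a uniformly distributed load (all per-unit values invented), scan candidate DG injection sizes and locations and keep the combination that flattens the voltage profile most.

```python
import numpy as np

# Hypothetical per-unit feeder data (not from the paper): a radial feeder
# split into 20 equal segments with a uniformly distributed load.
N_SEG = 20
R_SEG = 0.002            # resistance of each segment (pu)
I_LOAD = 0.05            # load current drawn at each node (pu)
V_SOURCE = 1.0           # substation voltage (pu)

def voltage_profile(dg_node, dg_current):
    """Node voltages for a DG injecting dg_current at dg_node (1-indexed)."""
    loads = np.full(N_SEG, I_LOAD)
    volts = np.empty(N_SEG)
    v = V_SOURCE
    for seg in range(N_SEG):
        # Current through this segment: all downstream load, minus the DG
        # injection if the DG sits at or beyond the segment's far end.
        downstream_load = loads[seg:].sum()
        dg_relief = dg_current if dg_node >= seg + 1 else 0.0
        v -= R_SEG * (downstream_load - dg_relief)
        volts[seg] = v
    return volts

# Grid search over candidate locations and sizes for the flattest profile.
node, size = min(
    ((n, s) for n in range(1, N_SEG + 1)
            for s in np.linspace(0.0, N_SEG * I_LOAD, 41)),
    key=lambda c: np.abs(voltage_profile(*c) - 1.0).max(),
)
print(f"best DG node: {node}, injection: {size:.3f} pu, "
      f"max deviation: {np.abs(voltage_profile(node, size) - 1.0).max():.5f} pu")
```

The paper's closed-form approach replaces this brute-force grid search with direct algebraic expressions for the optimal location, size and operating angle.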
Abstract:
An ongoing challenge in chemistry and crystal engineering is the synthesis of functional materials with predictable structures and customisable properties. This may be achieved by crystallising mixtures of different compounds. Co-crystals formed through this method have predictable structures and their properties may be tuned by varying the ratio of the compounds in the crystallising solution. This thesis examines single crystals formed by the co-crystallisation of metal complexes that have similar structures but different physical or chemical properties. A variety of new compounds with interesting properties were prepared, characterised and their significance in the context of crystal engineering was explored.
Abstract:
Design-build (DB) is regarded as an effective means of delivering high-performance green buildings, and the selection of DB contractors is of critical importance. The objective of this study is to evaluate the selection of design-builders for public buildings seeking Leadership in Energy and Environmental Design (LEED) certification and to compare the selection practices involved with those of non-LEED-seeking DB projects, through a robust content analysis of 74 requests for proposals (RFPs) for public DB projects. The results of the content analysis reveal that the level of LEED certification is the dominant means of conveying sustainability requirements in RFPs for contractor selection, with the majority of RFPs (60%) including sustainability requirements as part of the contractor evaluation package. With the exception of contractors' past performance, there is no statistically significant difference in the importance weightings of selection criteria between LEED-seeking and non-LEED-seeking buildings, and DB owners tend to place more emphasis on innovative technical solutions than on the past performance of DB contractors. The findings also indicate that owners of LEED-seeking building projects tend to provide fewer design decisions in RFPs in order to solicit innovative design alternatives from potential DB contractors. This study provides DB owners with a number of practical implications for selecting appropriate design-builders for green DB projects.
Abstract:
Techniques for evaluating and selecting multivariate volatility forecasts are not yet understood as well as their univariate counterparts. This paper considers the ability of different loss functions to discriminate between a set of competing forecasting models which are subsequently applied in a portfolio allocation context. It is found that a likelihood-based loss function outperforms its competitors, including those based on the given portfolio application. This result indicates that considering the particular application of forecasts is not necessarily the most effective basis on which to select models.
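To make the comparison concrete, here is a hedged sketch (toy two-asset data, not the paper's models or loss functions) of a likelihood-based loss, the negative Gaussian quasi-log-likelihood, alongside a loss tied to the portfolio application, the realised variance of the implied minimum-variance portfolio.

```python
import numpy as np

def qlike_loss(sigma_fc, returns):
    """Negative Gaussian quasi-log-likelihood of returns under a
    forecast covariance matrix (additive constants dropped)."""
    _, logdet = np.linalg.slogdet(sigma_fc)
    inv = np.linalg.inv(sigma_fc)
    quad = np.einsum('ti,ij,tj->t', returns, inv, returns)
    return logdet + quad.mean()

def portfolio_loss(sigma_fc, returns):
    """Loss tied to the application: realised variance of the
    minimum-variance portfolio implied by the forecast."""
    ones = np.ones(sigma_fc.shape[0])
    w = np.linalg.solve(sigma_fc, ones)
    w /= w.sum()
    return np.var(returns @ w)

rng = np.random.default_rng(0)
true_cov = np.array([[1.0, 0.3], [0.3, 2.0]])
rets = rng.multivariate_normal([0, 0], true_cov, size=1000)

# Score the true covariance against a misspecified diagonal forecast.
for name, fc in [("true", true_cov), ("diagonal", np.diag(np.diag(true_cov)))]:
    print(name, qlike_loss(fc, rets), portfolio_loss(fc, rets))
```

In the paper's terms, the finding is that ranking competing forecasts by the first kind of loss discriminates between models more reliably than ranking them by the second.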
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions to the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that results if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in receding horizon control (RHC) in the presence of constraints. We then turn to the obvious question about the optimality of the CE principle. We show that CE is, indeed, not optimal in general. We also analyse the possibility of obtaining truly optimal solutions for single input linear systems with input constraints and uncertainty related to output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near optimal performance. We thus advocate this approach in real applications.
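As a minimal sketch of the CE strategy described above (a hypothetical scalar system with an invented LQ gain and a saturation constraint; the book's formulation is more general), a Kalman filter estimate is used in place of the true state and the resulting input is clipped to satisfy the constraint.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar system x+ = a x + b u + w, y = c x + v.
a, b, c = 0.95, 0.5, 1.0
q_w, r_v = 0.1, 0.2          # process / measurement noise variances
u_max = 1.0                  # input constraint |u| <= u_max
k_lq = 0.8                   # unconstrained LQ gain (illustrative value)

x, x_hat, p = 5.0, 0.0, 1.0  # true state, estimate, estimate variance
for t in range(30):
    # Certainty equivalence: use the estimate as if it were the true state,
    # then enforce the constraint by clipping.
    u = np.clip(-k_lq * x_hat, -u_max, u_max)

    # Plant update with stochastic disturbance and noisy measurement.
    x = a * x + b * u + rng.normal(0, np.sqrt(q_w))
    y = c * x + rng.normal(0, np.sqrt(r_v))

    # Kalman filter: predict, then correct with the new measurement.
    x_hat = a * x_hat + b * u
    p = a * p * a + q_w
    k = p * c / (c * p * c + r_v)
    x_hat += k * (y - c * x_hat)
    p = (1 - k * c) * p

    print(f"t={t:2d}  x={x:6.3f}  x_hat={x_hat:6.3f}  u={u:6.3f}")
```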
Abstract:
Potent and specific enzyme inhibition is a key goal in the development of therapeutic inhibitors targeting proteolytic activity. The backbone-cyclized peptide Sunflower Trypsin Inhibitor (SFTI-1) affords a scaffold that can be engineered to achieve both these aims. SFTI-1's mechanism of inhibition is unusual in that it shows fast-on/slow-off kinetics driven by cleavage and religation of a scissile bond. This phenomenon was used to select a nanomolar inhibitor of kallikrein-related peptidase 7 (KLK7) from a versatile library of SFTI variants with diversity tailored to exploit distinctive surfaces present in the active site of serine proteases. Inhibitor selection was achieved through the use of size exclusion chromatography to separate protease/inhibitor complexes from unbound inhibitors, followed by inhibitor identification according to molecular mass ascertained by mass spectrometry. This approach identified a single dominant inhibitor species with a molecular weight of 1562.4 Da, which is consistent with the SFTI variant SFTI-WCTF. Once synthesized individually, this inhibitor showed an IC50 of 173.9 ± 7.6 nM against chromogenic substrates and could block protein proteolysis. Molecular modeling analysis suggested that selection of SFTI-WCTF was driven by specific aromatic interactions and stabilized by an enhanced internal hydrogen bonding network. This approach provides a robust and rapid route to inhibitor selection and design.
Abstract:
We address the problem of finite horizon optimal control of discrete-time linear systems with input constraints and uncertainty. The uncertainty for the problem analysed is related to incomplete state information (output feedback) and stochastic disturbances. We analyse the complexities associated with finding optimal solutions. We also consider two suboptimal strategies that could be employed for larger optimization horizons.
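The flavour of the horizon N = 1 problem can be conveyed with a hedged brute-force sketch (toy scalar model, invented numbers): estimate the expected one-step cost by Monte Carlo over the disturbance and the filtered state's conditional distribution, minimise over a grid of admissible inputs, and compare against the certainty equivalence input. For this one-step quadratic case the two coincide; the complications the paper analyses emerge at longer horizons.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy scalar model x+ = a x + b u + w, with |u| <= u_max.
a, b, u_max = 0.9, 1.0, 1.0
q, r = 1.0, 0.1              # stage cost q*x'^2 + r*u^2

# Output feedback: the controller only knows x ~ N(mu, var) after filtering.
mu, var = 2.0, 0.5
w_std = 0.3

def expected_cost(u, n_mc=20000):
    """Monte Carlo estimate of the expected one-step cost for input u."""
    x = rng.normal(mu, np.sqrt(var), n_mc)        # conditional state draws
    w = rng.normal(0.0, w_std, n_mc)              # disturbance draws
    x_next = a * x + b * u + w
    return np.mean(q * x_next**2 + r * u**2)

grid = np.linspace(-u_max, u_max, 201)
u_star = min(grid, key=expected_cost)
# Certainty equivalence: solve the deterministic problem with x = mu, then clip.
u_ce = np.clip(-q * a * b * mu / (q * b * b + r), -u_max, u_max)
print(f"optimal u (grid search): {u_star:.3f}, CE u: {u_ce:.3f}")
```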
Abstract:
Preimplantation genetic diagnosis (PGD), which involves the biopsy of an eight-cell embryo, has been hailed as a means of making reproductive decisions without having to face the heart-wrenching decision to abort an affected foetus. However, controversy around the kinds of traits for which testing can be done, and who has access to the technology, has led to questions about the way in which the technology is developing. Women who are allowed to access in vitro fertilisation (IVF) services can currently also access PGD in limited circumstances.
Abstract:
A key concept for the centralized provision of Business Process Management (BPM) is the Center of Excellence (CoE). Organizations establish a CoE (also known as a BPM Support Office) as their BPM maturity increases, in order to ensure a consistent and cost-effective way of offering BPM services. The definition of the offerings of such a center and the allocation of roles and responsibilities play an important role within BPM Governance. In order to plan the role of such a BPM CoE, this chapter proposes the productization of BPM, leading to a set of fifteen distinct BPM services. A portfolio management approach is suggested to position these services. The approach allows specific normative strategies to be identified for each BPM service, such as further training or BPM communication and marketing. A public sector case study provides further insights into how this approach has been used in practice. Empirical evidence from a survey of 15 organizations confirms the coverage of this set of BPM services and shows typical profiles for such BPM Centers of Excellence.
Abstract:
The top-k retrieval problem aims to find the optimal set of k documents from a number of relevant documents given the user’s query. The key issue is to balance the relevance and diversity of the top-k search results. In this paper, we address this problem using Facility Location Analysis taken from Operations Research, where the locations of facilities are optimally chosen according to some criteria. We show how this analysis technique is a generalization of state-of-the-art retrieval models for diversification (such as the Modern Portfolio Theory for Information Retrieval), which treat the top-k search results like “obnoxious facilities” that should be dispersed as far as possible from each other. However, Facility Location Analysis suggests that the top-k search results could be treated like “desirable facilities” to be placed as close as possible to their customers. This leads to a new top-k retrieval model where the best representatives of the relevant documents are selected. In a series of experiments conducted on two TREC diversity collections, we show that significant improvements can be made over the current state-of-the-art through this alternative treatment of the top-k retrieval problem.
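A hedged sketch of the "desirable facilities" treatment (made-up embeddings, not the authors' exact retrieval model): greedily select k documents maximising a facility-location objective, the total similarity of every relevant document to its closest selected representative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical unit-normalised document embeddings.
docs = rng.normal(size=(50, 8))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)
sim = docs @ docs.T                      # cosine similarity matrix

def facility_location_topk(sim, k):
    """Greedy selection: each pick maximises the marginal gain in total
    coverage, where a document's coverage is its best similarity to any
    selected 'facility'. Greedy carries a (1 - 1/e) guarantee here
    because the objective is monotone submodular."""
    n = sim.shape[0]
    selected, coverage = [], np.zeros(n)
    for _ in range(k):
        gains = np.maximum(sim, coverage).sum(axis=1) - coverage.sum()
        gains[selected] = -np.inf        # never re-pick a document
        best = int(np.argmax(gains))
        selected.append(best)
        coverage = np.maximum(coverage, sim[best])
    return selected

print("top-5 representatives:", facility_location_topk(sim, 5))
```

The "obnoxious facilities" view would instead reward dispersion, pushing the k results away from each other; the sketch above implements the alternative the paper argues for, placing representatives close to the documents they stand in for.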
Abstract:
This exploratory study into director selection involved in-depth interviews with Australian non-executive directors to identify what directors consider important criteria when selecting new members and the approach taken to identify and select candidates. The findings indicate boards select new members based not only on their ability to contribute complementary skills and experience but also on a perceived compatibility with incumbent board members. While these two selection criteria are considered equal in importance, not all selection approaches are able to adequately assess both. As a result, many selections fail to meet their own selection criteria.
Abstract:
Use of appropriate nursery environments will maximize gain from selection for yield of wheat (Triticum aestivum L.) in the target population of environments of a breeding program. The objective of this study was to investigate how well-irrigated (low-stress) nursery environments predict yield of lines in target environments that varied in degree of water limitation. Fifteen lines were sampled from the preliminary yield evaluation stage of the Queensland wheat breeding program and tested in 26 trials under on-farm conditions (Target Environments) across nine years (1985 to 1993) and also in 27 trials conducted at three research stations (Nursery Environments) in three years (1987 to 1989). The nursery environments were structured to impose different levels of water and nitrogen (N) limitation, whereas the target environments represented a random sample of on-farm conditions from the target population of environments. Indirect selection and pattern analysis methods were used to investigate selection for yield in the nursery environments and gain from selection in the target environments. Yield under low-stress nursery conditions was an effective predictor of yield under similar low-stress target environments (r = 0.89, P < 0.01). However, the value of the low-stress nursery as a predictor of yield in the water-limited target environments decreased with increasing water stress (moderate stress r = 0.53, P < 0.05, to r = 0.38, P > 0.05; severe stress r = -0.08, P > 0.05). Yield in the stress nurseries was a poor predictor of yield in the target environments. Until there is a clear understanding of the physiological-genetic basis of variation for adaptation of wheat to the water-limited environments in Queensland, yield improvement can best be achieved by selection for a combination of yield potential in an irrigated low-stress nursery and yield in on-farm trials that sample the range of water-limited environments of the target population of environments.
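The quantity at the heart of this analysis, how strongly line performance in a nursery predicts line performance in the target environments, can be sketched with synthetic data (invented effect sizes, not the trial yields):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic yields for 15 lines: a shared genetic effect plus
# environment-specific noise (purely illustrative numbers).
n_lines = 15
genetic = rng.normal(0, 1, n_lines)
nursery = genetic + rng.normal(0, 0.4, (5, n_lines))       # 5 low-stress trials
target = 0.6 * genetic + rng.normal(0, 1.0, (8, n_lines))  # 8 on-farm trials

# Correlation of line means across the two environment groups: the
# analogue of the r values quoted in the abstract.
nursery_means = nursery.mean(axis=0)
target_means = target.mean(axis=0)
r = np.corrcoef(nursery_means, target_means)[0, 1]
print(f"predictive correlation r = {r:.2f}")

# Indirect selection: keep the top third of lines on nursery yield and
# check the realised gain in the target environments.
keep = np.argsort(nursery_means)[-5:]
gain = target_means[keep].mean() - target_means.mean()
print(f"gain from indirect selection: {gain:.2f}")
```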
Abstract:
This paper addresses the problem of determining optimal designs for biological process models with intractable likelihoods, with the goal of parameter inference. The Bayesian approach is to choose a design that maximises the mean of a utility, where the utility is a function of the posterior distribution; its estimation therefore requires likelihood evaluations. However, many problems in experimental design involve models with intractable likelihoods, that is, likelihoods that are neither analytic nor computable in a reasonable amount of time. We propose a novel solution using indirect inference (II), a well-established method in the literature, and the Markov chain Monte Carlo (MCMC) algorithm of Müller et al. (2004). Indirect inference employs an auxiliary model with a tractable likelihood in conjunction with the generative model, the assumed true model of interest, which has an intractable likelihood. Our approach is to estimate a map between the parameters of the generative and auxiliary models, using simulations from the generative model. An II posterior distribution is formed to expedite utility estimation. We also present a modification to the utility that allows the Müller algorithm to sample from a substantially sharpened utility surface, with little computational effort. Unlike competing methods, the II approach can handle complex design problems for models with intractable likelihoods on a continuous design space, with possible extension to many observations. The methodology is demonstrated using two stochastic models: a simple tractable death process used to validate the approach, and a motivating stochastic model for the population evolution of macroparasites.
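Because the death process used for validation has a tractable likelihood, the design calculation itself can be sketched without the II/MCMC machinery (hedged: a flat grid prior and a posterior-variance utility are stand-ins for the paper's choices): pick the observation time that minimises the expected posterior variance of the death rate.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(5)

N0 = 50                                      # initial population size
b_grid = np.linspace(0.01, 2.0, 200)         # grid for the death rate b
prior = np.ones_like(b_grid) / b_grid.size   # flat prior on the grid

def expected_posterior_var(t, n_mc=300):
    """Monte Carlo estimate of E[Var(b | y)] for one observation at time t.
    In a pure death process, survivors at t are Binomial(N0, exp(-b t))."""
    total = 0.0
    for _ in range(n_mc):
        b = rng.choice(b_grid, p=prior)              # draw a rate from the prior
        y = rng.binomial(N0, np.exp(-b * t))         # simulate the observation
        like = binom.pmf(y, N0, np.exp(-b_grid * t)) # likelihood over the grid
        post = like * prior
        post /= post.sum()
        mean = (post * b_grid).sum()
        total += (post * (b_grid - mean) ** 2).sum()
    return total / n_mc

designs = [0.25, 0.5, 1.0, 2.0, 4.0]
best = min(designs, key=expected_posterior_var)
print("best observation time:", best)
```

For the intractable macroparasite model this grid posterior is unavailable, which is exactly where the II posterior and the Müller sampler take over.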
Abstract:
Using a sample of companies from the top 500 listed firms in Australia, we investigate whether the presence of a designated nomination committee and female representation on the nomination committee affect board gender diversity. We also examine whether gender diversity on the board affects firm risk and financial performance. We find that board gender diversity is significantly and positively associated with the presence of a designated nomination committee, and that female representation on the nomination committee is a significant explanatory factor for increasing board gender diversity following the release of the 2010 Australian Securities Exchange Corporate Governance Council (ASXCGC) recommendations. Further, our results support the business case for board gender diversity, as we find that greater gender diversity moderates excessive firm risk, which in turn improves firms' financial performance. Our results are robust after correcting for selection bias and controlling for other board, firm and industry characteristics.
Abstract:
In the electricity market environment, load-serving entities (LSEs) inevitably face risks in purchasing electricity because a plethora of uncertainties are involved. To maximize profits and minimize risks, LSEs need to develop an optimal strategy that reasonably allocates the purchased electricity amount across different electricity markets, such as the spot market, bilateral contract market, and options market. Because risks originate from uncertainties, an approach is presented that addresses the risk evaluation problem through the combined use of the lower partial moment and information entropy (LPME). The lower partial moment is used to measure the amount and probability of the loss, whereas the information entropy is used to represent the uncertainty of the loss. Electricity purchasing is a repeated procedure; the model presented therefore represents a dynamic strategy. Under the chance-constrained programming framework, the developed optimization model minimizes the risk of the electricity purchasing portfolio across the different markets, subject to the constraint that the actual profit of the LSE concerned is not less than the specified target at a required confidence level. The particle swarm optimization (PSO) algorithm is then employed to solve the optimization model. Finally, a numerical example is used to illustrate the basic features of the developed model and method.
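A minimal sketch of the ingredients (hypothetical profit scenarios; the paper's market model, chance constraint, and PSO tuning are not reproduced): evaluate an LPM-plus-entropy risk for a candidate allocation across the three markets, and search over allocations with a bare-bones PSO projected onto the simplex.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical profit scenarios (rows) for three markets: spot,
# bilateral contract, options.
scenarios = rng.normal([1.0, 0.8, 0.9], [0.8, 0.2, 0.5], size=(500, 3))
target = 0.85                                # profit target

def lpme_risk(w, order=2, n_bins=20):
    """Lower partial moment of shortfalls below the target, plus the
    Shannon entropy of the profit distribution (LPM + entropy risk)."""
    profit = scenarios @ w
    shortfall = np.maximum(target - profit, 0.0)
    lpm = np.mean(shortfall ** order)
    hist, _ = np.histogram(profit, bins=n_bins)
    p = hist[hist > 0] / hist.sum()
    entropy = -(p * np.log(p)).sum()
    return lpm + 0.01 * entropy              # illustrative weighting of the terms

# Bare-bones PSO over the simplex of allocation weights.
n_particles, dim = 30, 3
pos = rng.dirichlet(np.ones(dim), n_particles)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([lpme_risk(w) for w in pos])
gbest = pbest[pbest_val.argmin()]
for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1e-9, None)
    pos /= pos.sum(axis=1, keepdims=True)    # re-project onto the simplex
    vals = np.array([lpme_risk(w) for w in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]

print("allocation (spot, bilateral, options):", np.round(gbest, 3))
```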