837 results for Efficient Exploration
Abstract:
Markowitz showed that assets can be combined to produce an 'Efficient' portfolio that will give the highest level of portfolio return for any level of portfolio risk, as measured by the variance or standard deviation. These portfolios can then be connected to generate what is termed an 'Efficient Frontier' (EF). In this paper we discuss the calculation of the Efficient Frontier for combinations of assets, again using the spreadsheet Optimiser. To illustrate the derivation of the Efficient Frontier, we use the data from the Investment Property Databank Long Term Index of Investment Returns for the period 1971 to 1993. Many investors may require a specific level of holding in, or a restriction on holdings of, at least some of the assets. Such additional constraints are readily incorporated into the model to generate a constrained EF with upper and/or lower bounds, which can then be compared with the unconstrained EF to see whether the reduction in return is acceptable. To see the effect that these additional constraints may have, we adopt a fairly typical pension fund profile, with no more than 20% of the total held in Property. The paper shows that it is now relatively easy to use the Optimiser available in at least one spreadsheet (EXCEL) to calculate efficient portfolios for various levels of risk and return, both constrained and unconstrained, and thus to generate any number of Efficient Frontiers.
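As an illustration of the kind of constrained mean-variance optimisation the paper carries out with the Excel Optimiser, the following is a minimal Python sketch. The asset names, expected returns, covariance matrix and the scipy-based solver are assumptions for illustration only, not the IPD data or the spreadsheet model itself.

```python
# Illustrative sketch of a constrained efficient-frontier calculation, analogous
# to what the paper does with the spreadsheet Optimiser. The asset names,
# expected returns and covariance matrix below are invented placeholders,
# not the Investment Property Databank figures.
import numpy as np
from scipy.optimize import minimize

assets = ["Equities", "Gilts", "Property"]
mu = np.array([0.11, 0.08, 0.09])          # hypothetical expected returns
cov = np.array([[0.040, 0.006, 0.004],
                [0.006, 0.010, 0.002],
                [0.004, 0.002, 0.015]])    # hypothetical covariance matrix

def min_variance_weights(target_return, upper_bounds):
    """Weights of the minimum-variance portfolio achieving target_return."""
    n = len(mu)
    constraints = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},           # fully invested
        {"type": "eq", "fun": lambda w: w @ mu - target_return},  # hit target return
    ]
    bounds = [(0.0, ub) for ub in upper_bounds]                   # no short sales, caps
    result = minimize(lambda w: w @ cov @ w, x0=np.full(n, 1.0 / n),
                      bounds=bounds, constraints=constraints, method="SLSQP")
    return result.x

# Constrained frontier: no more than 20% in Property, as in the pension-fund profile.
for r in np.linspace(0.085, 0.105, 5):
    w = min_variance_weights(r, upper_bounds=[1.0, 1.0, 0.20])
    print(f"target return {r:.3f}: weights {np.round(w, 3)}, "
          f"risk (std dev) {np.sqrt(w @ cov @ w):.3f}")
```

Tracing out the minimum-variance portfolio over a range of target returns, with and without the Property cap, is what generates the constrained and unconstrained frontiers compared in the paper.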
Abstract:
In this study two new measures of lexical diversity are tested for the first time on French. The usefulness of these measures, MTLD (McCarthy and Jarvis 2010, and this volume) and HD-D (McCarthy and Jarvis 2007), in predicting different aspects of language proficiency is assessed and compared with D (Malvern and Richards 1997; Malvern, Richards, Chipere and Durán 2004) and Maas (1972) in analyses of stories told by two groups of learners (n=41) of two different proficiency levels and one group of native speakers of French (n=23). The importance of careful lemmatization in studies of lexical diversity which involve highly inflected languages is also demonstrated. The paper shows that the measures of lexical diversity under study are valid proxies for language ability in that they explain up to 62 percent of the variance in French C-test scores, and up to 33 percent of the variance in a measure of complexity. The paper also provides evidence that dependence on segment size continues to be a problem for the measures of lexical diversity discussed in this paper. The paper concludes that limiting the range of text lengths or even keeping text length constant is the safest option in analysing lexical diversity.
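For readers unfamiliar with MTLD, the following is a simplified sketch of the procedure as commonly described by McCarthy and Jarvis: a running type-token ratio triggers a "factor" each time it falls to the 0.72 threshold, and MTLD is the token count divided by the number of factors, averaged over forward and backward passes. The helper names and the toy sentence are illustrative assumptions, and the careful lemmatization the study calls for is deliberately omitted.

```python
# Simplified sketch of the MTLD measure (McCarthy and Jarvis 2010). Tokens are
# assumed to be pre-processed; lemmatization, which the study shows matters for
# a highly inflected language like French, is omitted here.

def _factor_count(tokens, threshold=0.72):
    factors, types, token_count = 0.0, set(), 0
    for token in tokens:
        token_count += 1
        types.add(token)
        if len(types) / token_count <= threshold:
            factors += 1.0                    # one complete factor
            types, token_count = set(), 0
    if token_count > 0:                       # partial factor for the remainder
        ttr = len(types) / token_count
        factors += (1.0 - ttr) / (1.0 - threshold)
    return factors

def mtld(tokens, threshold=0.72):
    # a very short, fully diverse text yields zero factors; guard against that
    forward = _factor_count(tokens, threshold) or 1.0
    backward = _factor_count(list(reversed(tokens)), threshold) or 1.0
    return (len(tokens) / forward + len(tokens) / backward) / 2.0

print(mtld("le chat noir dort et le chien noir dort aussi".split()))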
Abstract:
Ulrike Heuer argues that there can be a reason for a person to perform an action that this person cannot perform, as long as this person can take efficient steps towards performing this action. In this reply, I first argue that Heuer’s examples fail to undermine my claim that there cannot be a reason for a person to perform an action if it is impossible that this person will perform this action. I then argue that, on a plausible interpretation of what ‘efficient steps’ are, Heuer’s claim is consistent with my claim. I end by showing that Heuer fails to undermine the arguments I gave for my claim.
Abstract:
This chapter covers the basic concepts of passive building design and its relevant strategies, including passive solar heating, shading, natural ventilation, daylighting and thermal mass. In environments with high seasonal peak temperatures and/or humidity (e.g. cities in temperate regions experiencing the Urban Heat Island effect), wholly passive measures may need to be supplemented with low and zero carbon technologies (LZCs). The chapter also includes three case studies: one residential, one demonstrational and one academic facility (that includes an innovative passive downdraught cooling (PDC) strategy) to illustrate a selection of passive measures.
Abstract:
The nature of private commercial real estate markets presents difficulties for monitoring market performance. Assets are heterogeneous and spatially dispersed, trading is infrequent and there is no central marketplace in which prices and cash flows of properties can be easily observed. Appraisal-based indices represent one response to these issues. However, these have been criticised on a number of grounds: that they may understate volatility, lag turning points and be affected by client influence issues. This paper therefore reports econometrically derived transaction-based indices of the UK commercial real estate market using Investment Property Databank (IPD) data, comparing them with published appraisal-based indices. The method is similar to that presented by Fisher, Geltner, and Pollakowski (2007) and used by the Massachusetts Institute of Technology (MIT) on National Council of Real Estate Investment Fiduciaries (NCREIF) data, although it employs value rather than equal weighting. The results show stronger growth from the transaction-based indices in the run-up to the peak in the UK market in 2007. They also show that returns from these series are more volatile and less autocorrelated than their appraisal-based counterparts, but, surprisingly, differences in turning points were not found. The conclusion then discusses the applications and limitations of these series as measures of market performance.
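As a generic illustration of the time-dummy regression idea that underlies transaction-based indices, the sketch below regresses log sale price on a property characteristic plus period dummies and reads an index off the exponentiated period coefficients. The data values and the single characteristic are invented, and this is not the Fisher, Geltner and Pollakowski (2007) specification or the value-weighted estimator used in the paper.

```python
# Minimal sketch of a time-dummy transaction-based index: regress log price on
# characteristics and period dummies, then exponentiate the period coefficients.
# All numbers below are invented placeholders, not IPD transaction data.
import numpy as np

log_area  = np.array([7.2, 7.8, 7.5, 8.0, 7.3, 7.9])   # toy characteristic
period    = np.array([0,   0,   1,   1,   2,   2  ])   # transaction period
log_price = np.array([14.1, 14.8, 14.6, 15.2, 14.4, 15.0])

# design matrix: intercept, log area, dummies for periods 1 and 2 (period 0 is the base)
X = np.column_stack([np.ones_like(log_area), log_area,
                     (period == 1).astype(float), (period == 2).astype(float)])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)

index = 100.0 * np.exp(np.r_[0.0, coef[2:]])   # base period rebased to 100
print("index levels by period:", np.round(index, 1))
```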
Abstract:
There is growing pressure on the construction industry to deliver energy efficient, sustainable buildings but there is evidence to suggest that, in practice, designs regularly fail to achieve the anticipated levels of in-use energy consumption. One of the key factors behind this discrepancy is the behavior of the building occupants. This paper explores how insights from experimental psychology could potentially be used to reduce the gap between the predicted and actual energy performance of buildings. It demonstrates why traditional methods to engage with the occupants are not always successful and proposes a model for a more holistic approach to this issue. The paper concludes that achieving energy efficiency in buildings is not solely a technological issue and that the construction industry needs to adopt a more user-centred approach.
Abstract:
This article investigates the nature of enterprise pedagogy in music. It presents the results of a research project that applied the practices of enterprise learning developed in the post-compulsory music curriculum in England to the teaching of the National Curriculum for music for 11-to-14-year-olds. In doing so, the article explores the nature of enterprise learning and the nature of pedagogy, in order to consider whether enterprise pedagogy offers an effective way to teach the National Curriculum. Enterprise pedagogy was found to have a positive effect on the motivation of students and on the potential to match learning to the needs of students of different abilities. Crucially, it was found that, to be effective, not only did the teacher’s practice need to be congruent with the beliefs and theories on which it rests, but that the students also needed to share in these underlying assumptions through their learning. The study has implications for the way in which teachers work with multiple pedagogies in the process of developing their pedagogical identity.
Abstract:
Foot-and-mouth disease virus (FMDV) is an economically significant and globally distributed pathogen of Artiodactyla. Current vaccines are chemically inactivated whole virus particles that require large-scale virus growth under strict bio-containment, with the associated risks of accidental release or incomplete inactivation. Non-infectious empty capsids are structural mimics of authentic particles with no associated risk and constitute an alternative vaccine candidate. Capsids self-assemble from the processed virus structural proteins, VP0, VP3 and VP1, which are released from the structural protein precursor P1-2A by the action of the virus-encoded 3C protease. To date, recombinant empty capsid assembly has been limited by poor expression levels, restricting the development of empty capsids as a viable vaccine. Here, expression of the FMDV structural protein precursor P1-2A in insect cells is shown to be efficient, but linkage of the cognate 3C protease to the C-terminus reduces expression significantly. Inactivation of the 3C enzyme in a P1-2A-3C cassette allows expression, and intermediate levels of 3C activity result in efficient processing of the P1-2A precursor into the structural proteins, which assemble into empty capsids. Expression was independent of the insect host cell background and led to capsids that are recognised as authentic by a range of anti-FMDV bovine sera, suggesting their feasibility as an alternative vaccine.
Abstract:
Top Down Induction of Decision Trees (TDIDT) is the most commonly used method of constructing a model from a dataset in the form of classification rules to classify previously unseen data. Alternative algorithms have been developed, such as the Prism algorithm. Prism constructs modular rules that are qualitatively better than those induced by TDIDT. However, as databases grow in size, many existing rule-learning algorithms have proved to be computationally expensive on large datasets. To tackle the problem of scalability, parallel classification rule induction algorithms have been introduced. Because TDIDT is the most popular classifier, even though there are strongly competitive alternative algorithms, most parallel approaches to inducing classification rules are based on it. In this paper we describe work on a distributed classifier that induces classification rules in a parallel manner based on Prism.
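For context, the following is a minimal serial sketch of the core separate-and-conquer loop used by Prism, i.e. the procedure that a distributed classifier of this kind parallelises. The function name, the dictionary-based data format and the toy dataset are assumptions for illustration, not the paper's implementation.

```python
# Sketch of the core Prism rule-induction loop: specialise a rule term by term
# until it covers only target-class instances, then remove the covered
# instances and repeat ("separate and conquer").

def induce_rules_for_class(instances, labels, target):
    """Return a list of rules, each a dict of attribute: value tests, for one class."""
    rules = []
    remaining = [(x, y) for x, y in zip(instances, labels)]
    while any(y == target for _, y in remaining):
        rule, subset = {}, remaining
        # specialise the rule until it covers only target-class instances
        while any(y != target for _, y in subset):
            candidates = {(a, x[a]) for x, _ in subset for a in x if a not in rule}
            if not candidates:
                break
            # pick the attribute-value pair with the highest p(target | pair)
            def precision(pair):
                a, v = pair
                covered = [(x, y) for x, y in subset if x[a] == v]
                return sum(y == target for _, y in covered) / len(covered)
            best_a, best_v = max(candidates, key=precision)
            rule[best_a] = best_v
            subset = [(x, y) for x, y in subset if x[best_a] == best_v]
        rules.append(rule)
        # separate: remove the instances this rule covers, then conquer the rest
        remaining = [(x, y) for x, y in remaining
                     if not all(x[a] == v for a, v in rule.items())]
    return rules

data = [{"outlook": "sunny", "windy": "no"}, {"outlook": "rain", "windy": "yes"},
        {"outlook": "sunny", "windy": "yes"}, {"outlook": "rain", "windy": "no"}]
labels = ["play", "stay", "play", "stay"]
print(induce_rules_for_class(data, labels, "play"))
```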
Abstract:
Induction of classification rules is one of the most important technologies in data mining. Most of the work in this field has concentrated on the Top Down Induction of Decision Trees (TDIDT) approach. However, alternative approaches have been developed such as the Prism algorithm for inducing modular rules. Prism often produces qualitatively better rules than TDIDT but suffers from higher computational requirements. We investigate approaches that have been developed to minimize the computational requirements of TDIDT, in order to find analogous approaches that could reduce the computational requirements of Prism.
Abstract:
In order to gain knowledge from large databases, scalable data mining technologies are needed. Data are captured on a large scale and databases are therefore growing at a fast pace, which leads to the use of parallel computing technologies to cope with large amounts of data. In the area of classification rule induction, parallelisation has focused on the divide-and-conquer approach, also known as the Top Down Induction of Decision Trees (TDIDT). An alternative approach to classification rule induction is 'separate and conquer', which has only recently become a focus of parallelisation. This work introduces and empirically evaluates a framework for the parallel induction of classification rules generated by members of the Prism family of algorithms, all of which follow the separate-and-conquer approach.
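As a generic illustration of why separate-and-conquer induction lends itself to parallelisation, the sketch below distributes the counting of class frequencies for candidate attribute-value pairs over data partitions and merges the results to select the next rule term. The multiprocessing setup, function names and toy partitions are assumptions, not the specific framework evaluated in the paper.

```python
# Data-parallel counting step for separate-and-conquer rule induction: each
# worker counts, on its own partition, how often every attribute-value pair
# co-occurs with the target class; the master merges the counts and picks the
# pair maximising p(target | attribute = value) as the next rule term.
from collections import Counter
from multiprocessing import Pool

def local_counts(args):
    """Per-partition (covered, total) counts for every attribute-value pair."""
    partition, target = args
    covered, total = Counter(), Counter()
    for x, y in partition:
        for a, v in x.items():
            total[(a, v)] += 1
            if y == target:
                covered[(a, v)] += 1
    return covered, total

def best_term(partitions, target):
    with Pool() as pool:
        results = pool.map(local_counts, [(p, target) for p in partitions])
    covered, total = Counter(), Counter()
    for c, t in results:
        covered.update(c)
        total.update(t)
    return max(total, key=lambda pair: covered[pair] / total[pair])

if __name__ == "__main__":
    part1 = [({"outlook": "sunny", "windy": "no"}, "play"),
             ({"outlook": "rain", "windy": "yes"}, "stay")]
    part2 = [({"outlook": "sunny", "windy": "yes"}, "play"),
             ({"outlook": "rain", "windy": "no"}, "stay")]
    print(best_term([part1, part2], target="play"))
```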
Abstract:
Classifiers generally tend to overfit if there is noise in the training data or there are missing values. Ensemble learning methods are often used to improve a classifier's classification accuracy. Most ensemble learning approaches aim to improve the classification accuracy of decision trees. However, alternative classifiers to decision trees exist. The recently developed Random Prism ensemble learner for classification aims to improve an alternative classification rule induction approach, the Prism family of algorithms, which addresses some of the limitations of decision trees. However, like any ensemble learner, Random Prism suffers from a high computational overhead due to replication of the data and the induction of multiple base classifiers. Hence even modest-sized datasets may impose a computational challenge to ensemble learners such as Random Prism. Parallelism is often used to scale up algorithms to deal with large datasets. This paper investigates parallelisation for Random Prism, implements a prototype and evaluates it empirically using a Hadoop computing cluster.
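To show the shape of the computation being parallelised, here is a minimal sketch of the bagging pattern that an ensemble learner of this kind follows: bootstrap samples are drawn, one base classifier is induced per sample on a separate worker, and predictions are combined by majority vote. The trivial majority-class base learner, the function names and the toy data are placeholders standing in for the Prism base classifiers and the Hadoop cluster used in the paper.

```python
# Parallel bagging sketch: bootstrap sampling, per-worker base classifiers,
# majority-vote combination. The base learner below is a deliberately trivial
# placeholder so the sketch stays short.
import random
from collections import Counter
from concurrent.futures import ProcessPoolExecutor

def train_base_classifier(sample):
    """Placeholder base learner: predict the majority class of the bootstrap sample."""
    return Counter(y for _, y in sample).most_common(1)[0][0]

def bagging_predict(data, n_estimators=8, seed=0):
    rng = random.Random(seed)
    samples = [[rng.choice(data) for _ in data] for _ in range(n_estimators)]
    with ProcessPoolExecutor() as pool:                 # one base model per worker
        votes = list(pool.map(train_base_classifier, samples))
    return Counter(votes).most_common(1)[0][0]          # majority vote

if __name__ == "__main__":
    data = [({"f": 1}, "a"), ({"f": 2}, "b"), ({"f": 3}, "a"), ({"f": 4}, "a")]
    print(bagging_predict(data))
```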
Abstract:
Reaction of the 4-R-benzaldehyde thiosemicarbazones (denoted in general as L-R; R = OCH(3), CH(3), H, Cl and NO(2)) with trans-[Pd(PPh(3))(2)Cl(2)] afforded a group of mixed-ligand complexes (denoted in general as 1-R) incorporating an N,S-coordinated thiosemicarbazone, a triphenylphosphine and a chloride. A similar reaction with Na(2)[PdCl(4)] afforded a family of bis-thiosemicarbazone complexes (denoted in general as 2-R), where each ligand is N,S-coordinated. Crystal structures of 1-CH(3), 1-NO(2), 2-OCH(3), 2-NO(2) and L-NO(2) have been determined. In all the complexes the thiosemicarbazones are coordinated to the metal center, via dissociation of the acidic proton, as bidentate N,S-donors forming five-membered chelate rings. With reference to the structure of the uncoordinated thiosemicarbazone, this coordination mode is associated with a conformational change around the C=N bond. All the 1-R and 2-R complexes display intense absorptions in the visible region. The catalytic activity of the 1-R and 2-R complexes towards some C-C coupling reactions (e.g. Suzuki, Heck and Sonogashira) has been examined; while both are found to be efficient catalysts, 1-R is a much better catalyst than 2-R.