819 results for Efficient welding
Abstract:
Markowitz showed that assets can be combined to produce an 'Efficient' portfolio that will give the highest level of portfolio return for any level of portfolio risk, as measured by the variance or standard deviation. These portfolios can then be connected to generate what is termed an 'Efficient Frontier' (EF). In this paper we discuss the calculation of the Efficient Frontier for combinations of assets, again using the spreadsheet Optimiser. To illustrate the derivation of the Efficient Frontier, we use the data from the Investment Property Databank Long Term Index of Investment Returns for the period 1971 to 1993. Many investors may require a specific level of holding in, or a restriction on holdings of, at least some of the assets. Such additional constraints may be readily incorporated into the model to generate a constrained EF with upper and/or lower bounds. This can then be compared with the unconstrained EF to see whether the reduction in return is acceptable. To see the effect that these additional constraints may have, we adopt a fairly typical pension fund profile, with no more than 20% of the total held in Property. The paper shows that it is now relatively easy to use the Optimiser available in at least one spreadsheet (EXCEL) to calculate efficient portfolios for various levels of risk and return, both constrained and unconstrained, and so to generate any number of Efficient Frontiers.
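The abstract above describes tracing constrained and unconstrained Efficient Frontiers with a spreadsheet Optimiser. As a rough, stand-alone illustration of the same mean-variance calculation (not the paper's Excel model, and using made-up return and covariance figures rather than the Investment Property Databank series), the sketch below minimises portfolio variance for a series of target returns, with and without a 20% cap on the Property holding.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative figures only -- not the Investment Property Databank data.
assets = ["Equities", "Gilts", "Property"]
mu = np.array([0.12, 0.08, 0.10])            # assumed expected annual returns
cov = np.array([[0.040, 0.006, 0.010],
                [0.006, 0.010, 0.004],
                [0.010, 0.004, 0.020]])       # assumed covariance of returns

def frontier_point(target, property_cap=None):
    """Minimum-variance weights for a given target return."""
    n = len(mu)
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "eq", "fun": lambda w: w @ mu - target}]
    # An upper bound on the Property weight models the constrained EF.
    ub = [1.0, 1.0, property_cap if property_cap is not None else 1.0]
    res = minimize(lambda w: w @ cov @ w, np.full(n, 1.0 / n),
                   bounds=[(0.0, u) for u in ub], constraints=cons,
                   method="SLSQP")
    return res.x, np.sqrt(res.fun)

# Trace unconstrained and constrained (Property <= 20%) frontiers.
for target in np.linspace(0.08, 0.12, 5):
    _, risk_u = frontier_point(target)
    _, risk_c = frontier_point(target, property_cap=0.20)
    print(f"return {target:.3f}: risk {risk_u:.4f} (unconstrained), "
          f"{risk_c:.4f} (Property <= 20%)")
```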
Abstract:
Ulrike Heuer argues that there can be a reason for a person to perform an action that this person cannot perform, as long as this person can take efficient steps towards performing this action. In this reply, I first argue that Heuer’s examples fail to undermine my claim that there cannot be a reason for a person to perform an action if it is impossible that this person will perform this action. I then argue that, on a plausible interpretation of what ‘efficient steps’ are, Heuer’s claim is consistent with my claim. I end by showing that Heuer fails to undermine the arguments I gave for my claim.
Abstract:
This chapter covers the basic concepts of passive building design and its relevant strategies, including passive solar heating, shading, natural ventilation, daylighting and thermal mass. In environments with high seasonal peak temperatures and/or humidity (e.g. cities in temperate regions experiencing the Urban Heat Island effect), wholly passive measures may need to be supplemented with low and zero carbon technologies (LZCs). The chapter also includes three case studies: one residential, one demonstrational and one academic facility (that includes an innovative passive downdraught cooling (PDC) strategy) to illustrate a selection of passive measures.
Abstract:
There is growing pressure on the construction industry to deliver energy efficient, sustainable buildings, but there is evidence to suggest that, in practice, designs regularly fail to achieve the anticipated levels of in-use energy consumption. One of the key factors behind this discrepancy is the behaviour of the building occupants. This paper explores how insights from experimental psychology could be used to reduce the gap between the predicted and actual energy performance of buildings. It demonstrates why traditional methods to engage with the occupants are not always successful and proposes a model for a more holistic approach to this issue. The paper concludes that achieving energy efficiency in buildings is not solely a technological issue and that the construction industry needs to adopt a more user-centred approach.
Abstract:
Foot-and-mouth disease virus (FMDV) is an economically significant and globally distributed pathogen of Artiodactyla. Current vaccines are chemically inactivated whole virus particles that require large-scale virus growth in strict bio-containment, with the associated risks of accidental release or incomplete inactivation. Non-infectious empty capsids are structural mimics of authentic particles with no associated risk and constitute an alternative vaccine candidate. Capsids self-assemble from the processed virus structural proteins, VP0, VP3 and VP1, which are released from the structural protein precursor P1-2A by the action of the virus-encoded 3C protease. To date, recombinant empty capsid assembly has been limited by poor expression levels, restricting the development of empty capsids as a viable vaccine. Here, expression of the FMDV structural protein precursor P1-2A in insect cells is shown to be efficient, but linkage of the cognate 3C protease to the C-terminus reduces expression significantly. Inactivation of the 3C enzyme in a P1-2A-3C cassette allows expression, and intermediate levels of 3C activity result in efficient processing of the P1-2A precursor into the structural proteins, which assemble into empty capsids. Expression was independent of the insect host cell background and led to capsids that are recognised as authentic by a range of anti-FMDV bovine sera, suggesting their feasibility as an alternative vaccine.
Abstract:
Top Down Induction of Decision Trees (TDIDT) is the most commonly used method of constructing a model from a dataset in the form of classification rules to classify previously unseen data. Alternative algorithms have been developed, such as the Prism algorithm. Prism constructs modular rules which are qualitatively better than those induced by TDIDT. However, with the increasing size of databases, many existing rule learning algorithms have proved to be computationally expensive on large datasets. To tackle the problem of scalability, parallel classification rule induction algorithms have been introduced. As TDIDT is the most popular classifier, even though there are strongly competitive alternative algorithms, most parallel approaches to inducing classification rules are based on TDIDT. In this paper we describe work on a distributed classifier that induces classification rules in a parallel manner based on Prism.
Abstract:
Induction of classification rules is one of the most important technologies in data mining. Most of the work in this field has concentrated on the Top Down Induction of Decision Trees (TDIDT) approach. However, alternative approaches have been developed such as the Prism algorithm for inducing modular rules. Prism often produces qualitatively better rules than TDIDT but suffers from higher computational requirements. We investigate approaches that have been developed to minimize the computational requirements of TDIDT, in order to find analogous approaches that could reduce the computational requirements of Prism.
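For readers unfamiliar with the Prism algorithm discussed in the two abstracts above, the following is a deliberately simplified, single-class sketch of its separate-and-conquer idea: rule terms are chosen by the probability of the target class among the examples they cover, and covered examples are removed once a rule is complete. It is illustrative only and does not reproduce the authors' implementations or their computational optimisations.

```python
def induce_rules_for_class(examples, target_class):
    """Very simplified Prism-style rule induction for one class.

    examples: list of (dict of attribute -> value, class_label) pairs.
    Returns a list of rules, each a dict of attribute -> value terms.
    """
    remaining = list(examples)
    rules = []
    while any(label == target_class for _, label in remaining):
        covered, rule = list(remaining), {}
        # Specialise the rule until it only covers the target class.
        while any(label != target_class for _, label in covered):
            best_term, best_prob = None, -1.0
            for attrs, _ in covered:
                for attr, value in attrs.items():
                    if attr in rule:
                        continue
                    subset = [(a, l) for a, l in covered if a[attr] == value]
                    prob = sum(l == target_class for _, l in subset) / len(subset)
                    if prob > best_prob:
                        best_term, best_prob = (attr, value), prob
            if best_term is None:        # no attribute left to specialise on
                break
            rule[best_term[0]] = best_term[1]
            covered = [(a, l) for a, l in covered if a[best_term[0]] == best_term[1]]
        rules.append(rule)
        # Separate-and-conquer: remove the examples this rule covers.
        remaining = [(a, l) for a, l in remaining
                     if not all(a.get(k) == v for k, v in rule.items())]
    return rules
```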
Abstract:
In order to gain knowledge from large databases, scalable data mining technologies are needed. Data are captured on a large scale and databases are therefore growing at a fast pace. This leads to the utilisation of parallel computing technologies to cope with large amounts of data. In the area of classification rule induction, parallelisation has focused on the divide and conquer approach, also known as the Top Down Induction of Decision Trees (TDIDT). An alternative approach to classification rule induction is separate and conquer, which has only recently become a focus of parallelisation. This work introduces and empirically evaluates a framework for the parallel induction of classification rules generated by members of the Prism family of algorithms, all of which follow the separate and conquer approach.
Abstract:
Generally, classifiers tend to overfit if there is noise in the training data or there are missing values. Ensemble learning methods are often used to improve a classifier's classification accuracy. Most ensemble learning approaches aim to improve the classification accuracy of decision trees. However, alternative classifiers to decision trees exist. The recently developed Random Prism ensemble learner for classification aims to improve an alternative classification rule induction approach, the Prism family of algorithms, which addresses some of the limitations of decision trees. However, like any ensemble learner, Random Prism suffers from a high computational overhead due to replication of the data and the induction of multiple base classifiers. Hence even modest-sized datasets may impose a computational challenge to ensemble learners such as Random Prism. Parallelism is often used to scale up algorithms to deal with large datasets. This paper investigates parallelisation for Random Prism, implements a prototype and evaluates it empirically using a Hadoop computing cluster.
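Random Prism, as described above, bags multiple Prism-style base classifiers, and the parallelisation investigated in the paper distributes that induction over a Hadoop cluster. As a loose local analogue only, the sketch below induces base classifiers on bootstrap samples in parallel with Python's multiprocessing; `induce_base_classifier` is a hypothetical stand-in for a rule inducer, not the authors' code.

```python
import random
from multiprocessing import Pool

def induce_base_classifier(sample):
    """Hypothetical stand-in for a Prism-style rule inducer trained on one sample."""
    # ... induce and return a rule set from `sample` ...
    return {"n_training_examples": len(sample)}

def bootstrap(data, rng):
    # Sampling with replacement replicates the data -- the memory overhead noted above.
    return [rng.choice(data) for _ in data]

def train_ensemble(data, n_classifiers=10, n_workers=4, seed=0):
    rng = random.Random(seed)
    samples = [bootstrap(data, rng) for _ in range(n_classifiers)]
    # Base classifiers are independent, so their induction parallelises naturally.
    with Pool(n_workers) as pool:
        return pool.map(induce_base_classifier, samples)

if __name__ == "__main__":
    data = [({"attribute": i % 3}, i % 2) for i in range(100)]
    ensemble = train_ensemble(data)
    print(len(ensemble), "base classifiers induced")
```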
Abstract:
Reaction of the 4-R-benzaldehyde thiosemicarbazones (denoted in general as L-R; R = OCH3, CH3, H, Cl and NO2) with trans-[Pd(PPh3)2Cl2] afforded a group of mixed-ligand complexes (denoted in general as 1-R) incorporating a N,S-coordinated thiosemicarbazone, a triphenylphosphine and a chloride. A similar reaction with Na2[PdCl4] afforded a family of bis-thiosemicarbazone complexes (denoted in general as 2-R), where each ligand is N,S-coordinated. Crystal structures of 1-CH3, 1-NO2, 2-OCH3, 2-NO2 and L-NO2 have been determined. In all the complexes the thiosemicarbazones are coordinated to the metal center, via dissociation of the acidic proton, as bidentate N,S-donors forming five-membered chelate rings. With reference to the structure of the uncoordinated thiosemicarbazone, this coordination mode is associated with a conformational change around the C=N bond. All the 1-R and 2-R complexes display intense absorptions in the visible region. The catalytic activity of the 1-R and 2-R complexes towards some C-C coupling reactions (e.g. Suzuki, Heck and Sonogashira) has been examined; while both are found to be efficient catalysts, 1-R is a much better catalyst than 2-R.
Abstract:
A theory of the allocation of producer levies earmarked for downstream promotion is developed and applied to quarterly series (1970:2–1988:4) on red-meats advertising by the Australian Meat and Live-stock Corporation. Robust inferences about program efficiency are contained in the coefficients of changes in promotion effort regressed against movements in farm price and quantity. Empirical evidence of program efficiency is inconclusive. While the deeper issue of efficient disbursement of funds remains an open question, there is evidence, at least, of efficient taxation.
Abstract:
With the fast development of the Internet, wireless communications and semiconductor devices, home networking has received significant attention. Consumer products can collect and transmit various types of data in the home environment. Typical consumer sensors are often equipped with tiny, irreplaceable batteries, and it is therefore of the utmost importance to design energy efficient algorithms to prolong the home network lifetime and reduce devices going to landfill. Sink mobility is an important technique to improve home network performance, including energy consumption, lifetime and end-to-end delay. It can also largely mitigate the hot spots near the sink node. The selection of an optimal moving trajectory for the sink node(s) is an NP-hard problem, and jointly optimizing routing algorithms with the mobile sink moving strategy is a significant and challenging research issue. The influence of multiple static sink nodes on energy consumption in networks of different scales is first studied, and an Energy-efficient Multi-sink Clustering Algorithm (EMCA) is proposed and tested. Then, the influence of mobile sink velocity, position and number on network performance is studied and a Mobile-sink based Energy-efficient Clustering Algorithm (MECA) is proposed. Simulation results validate the performance of the two proposed algorithms, which can be deployed in a consumer home network environment.
Abstract:
Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in parallel data mining algorithms and, in particular, in the k-means algorithm for cluster analysis. In the straightforward parallel formulation of the k-means algorithm, data and computation loads are uniformly distributed over the processing nodes. This approach has excellent load balancing characteristics that may suggest it could scale up to large and extreme-scale parallel computing systems. However, at each iteration step the algorithm requires a global reduction operation, which hinders the scalability of the approach. This work studies a different parallel formulation of the algorithm in which the requirement of global communication is removed, while maintaining the same deterministic nature of the centralised algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
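The "straightforward parallel formulation" of k-means criticised above computes partial centroid sums locally on each node and then combines them with a per-iteration global reduction. The sketch below (written against mpi4py as an assumed tool, not the authors' implementation) marks where that collective communication sits; it is this step that the paper's non-uniform formulation removes.

```python
import numpy as np
from mpi4py import MPI

def parallel_kmeans(local_points, k, n_iters=20, seed=0):
    """Straightforward parallel k-means: uniform data split, global reduction each step."""
    comm = MPI.COMM_WORLD
    dim = local_points.shape[1]
    # All ranks start from identical centroids and apply identical updates,
    # so the result matches the centralised algorithm.
    centroids = np.random.default_rng(seed).normal(size=(k, dim))
    for _ in range(n_iters):
        # Local step: assign each local point to its nearest centroid.
        dists = np.linalg.norm(local_points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        sums, counts = np.zeros((k, dim)), np.zeros(k)
        for j in range(k):
            mask = labels == j
            sums[j] = local_points[mask].sum(axis=0)
            counts[j] = mask.sum()
        # Global reduction: this collective runs every iteration and is the
        # scalability bottleneck the abstract refers to.
        total_sums = comm.allreduce(sums, op=MPI.SUM)
        total_counts = comm.allreduce(counts, op=MPI.SUM)
        nonempty = total_counts > 0
        centroids[nonempty] = total_sums[nonempty] / total_counts[nonempty][:, None]
    return centroids

if __name__ == "__main__":
    # Run e.g. with: mpiexec -n 4 python kmeans_sketch.py
    rank = MPI.COMM_WORLD.Get_rank()
    pts = np.random.default_rng(rank).normal(size=(1000, 2))
    print(parallel_kmeans(pts, k=3))
```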
Abstract:
Power delivery for biomedical implants is a major consideration in their design, for both measurement and stimulation. When performed by a wireless technique, transmission efficiency is critically important, not only because of the costs associated with any losses but also because of the nature of those losses; for example, excessive heat can be uncomfortable for the individual involved. In this study, a method and means of wireless power transmission suitable for biomedical implants are both discussed and experimentally evaluated. The procedure initiated is comparable in size and simplicity to those methods already employed; however, some of Tesla’s fundamental ideas have been incorporated in order to obtain a significant improvement in efficiency. This study contains a theoretical basis for the approach taken; however, the emphasis here is on practical experimental analysis.