920 results for Optimal switch allocation
Abstract:
Optimization of quantum measurement processes plays a pivotal role in carrying out better, i.e., more accurate or less disruptive, measurements and experiments on a quantum system. In particular, convex optimization, i.e., identifying the extreme points of the convex sets and subsets of quantum measuring devices, plays an important part in quantum optimization, since the typical figures of merit for measurement processes are affine functionals. In this thesis, we discuss results determining the extreme quantum devices and their relevance, e.g., in questions related to quantum compatibility. In particular, we see that a compatible device pair in which one device is extreme can be joined into a single apparatus in an essentially unique way. Moreover, we show that the question of whether a pair of quantum observables can be measured jointly can often be formulated in a weaker form when some of the observables involved are extreme. Another major line of research treated in this thesis deals with convex analysis of special restricted sets of quantum devices, covariance structures or, in particular, generalized imprimitivity systems. Some results on the structure of covariant observables and instruments are presented, as well as results identifying the extreme points of covariance structures in quantum theory. As a special case study, not published anywhere before, we examine the structure of Euclidean-covariant localization observables for spin-0 particles. We also discuss the general form of Weyl-covariant phase-space instruments. Finally, certain optimality measures originating from convex geometry are introduced for quantum devices: boundariness, which measures how 'close' a quantum apparatus is to the algebraic boundary of the device set, and the robustness of incompatibility, which quantifies the level of incompatibility of a quantum device pair by measuring the highest amount of noise the pair tolerates without becoming compatible. Boundariness is further associated with minimum-error discrimination of quantum devices, and the robustness of incompatibility is shown to behave monotonically under certain compatibility-non-decreasing operations. Moreover, the value of the robustness of incompatibility is given for a few special device pairs.
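For reference, one common formalization of the noise-robustness idea described above (notation ours, not taken from the thesis): given an incompatible device pair $(\mathsf{A},\mathsf{B})$, the robustness of incompatibility can be written as

\[
R(\mathsf{A},\mathsf{B}) \;=\; \inf\Big\{\, t \ge 0 \;:\; \exists\ \text{devices } \mathsf{N}_1,\mathsf{N}_2 \text{ such that } \Big(\tfrac{\mathsf{A}+t\,\mathsf{N}_1}{1+t},\; \tfrac{\mathsf{B}+t\,\mathsf{N}_2}{1+t}\Big) \text{ is compatible} \,\Big\},
\]

i.e., the smallest relative weight of admixed noise that renders the pair compatible; $R = 0$ exactly when the pair is already compatible.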
Abstract:
Over time the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008 the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset or fund management companies, pension funds and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but also need sophisticated portfolio management models to outperform their benchmarks. Recent developments in portfolio management suggest that new innovations are slowly gaining ground, but they still need to be studied carefully. This thesis aims to provide a practical tactical asset allocation (TAA) application of the Black–Litterman (B–L) approach and an unbiased evaluation of the B–L model's qualities. The mean-variance framework, issues related to asset allocation decisions, and return forecasting are examined carefully to uncover issues affecting active portfolio management. European fixed-income data is employed in an empirical study that investigates whether a B–L-model-based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation uses a Vector Autoregressive (VAR) model to create return forecasts from lagged values of asset classes as well as economic variables. The sample data (31.12.1999–31.12.2012) is divided into two parts: the in-sample data is used for calibrating a strategic portfolio, and the out-of-sample period is used for testing the tactical portfolio against the strategic benchmark. Results show that the B–L-model-based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return. The VAR model is able to pick up changes in investor sentiment, and the B–L model adjusts portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting allocation to riskier assets while the market is turning bullish, but without overweighting investments with high beta. Based on the findings of this thesis, the Black–Litterman model offers a good platform for active asset managers to quantify their views on investments and implement their strategies. The B–L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation remains highly dependent on the quality of the input estimates.
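For background, the B–L machinery that such a TAA application builds on combines equilibrium returns with the manager's (here, VAR-generated) views; in its standard textbook form (notation assumed, not quoted from the thesis):

\[
\mathrm{E}[R] \;=\; \big[(\tau\Sigma)^{-1} + P^{\top}\Omega^{-1}P\big]^{-1}\big[(\tau\Sigma)^{-1}\Pi + P^{\top}\Omega^{-1}Q\big],
\]

where $\Pi$ is the vector of implied equilibrium excess returns, $\Sigma$ the covariance matrix of returns, $P$ and $Q$ encode the views (with $Q$ here supplied by the VAR forecasts), $\Omega$ the view-uncertainty matrix, and $\tau$ a scaling constant.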
Abstract:
For my Licentiate thesis, I conducted research on risk measures. Continuing with this research, I now focus on capital allocation. In the proportional capital allocation principle, the choice of risk measure plays a very important part. In the chapters Introduction and Basic concepts, we introduce three definitions of economic capital, discuss the purpose of capital allocation, give different viewpoints on capital allocation and present an overview of the relevant literature. Risk measures are defined and the concept of a coherent risk measure is introduced. Examples of important risk measures are given, e.g., Value at Risk (VaR) and Tail Value at Risk (TVaR). We also discuss the implications of dependence and review some important distributions. In the following chapter, Capital allocation, we introduce different principles for allocating capital. We prefer to work with the proportional allocation method. In the next chapter, Capital allocation based on tails, we focus on insurance business lines with heavy-tailed loss distributions. To emphasize capital allocation based on tails, we define the following risk measures: Conditional Expectation, Upper Tail Covariance and Tail Covariance Premium Adjusted (TCPA). In the final chapter, called Illustrative case study, we simulate two sets of data with five insurance business lines using Normal copulas and Cauchy copulas. The proportional capital allocation is calculated using TCPA as the risk measure and compared with the results obtained when VaR is used as the risk measure, and with covariance capital allocation. In this thesis, it is emphasized that no single allocation principle is perfect for all purposes. When focusing on the tail of losses, the allocation based on TCPA is a good choice, since TCPA in a sense includes features of both TVaR and Tail Covariance.
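For concreteness, the proportional allocation principle favoured above takes the standard form (notation ours): with total economic capital $K$, business-line losses $X_1,\dots,X_n$ and a chosen risk measure $\rho$, line $i$ is allocated

\[
K_i \;=\; \frac{\rho(X_i)}{\sum_{j=1}^{n}\rho(X_j)}\, K,
\]

so that substituting VaR, TVaR or TCPA for $\rho$ yields the competing allocations compared in the case study.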
Abstract:
The light transmittance of composites used in dentistry varies. Likewise, LED light-curing units differ from one another in light output and design. It is well known that the intensity of light from a curing unit per unit area decreases as the distance from the unit increases. It is not known, however, exactly how a material placed between the object being light-cured and the tip of the curing unit affects light intensity at different distances. The purpose of this study was to determine how pre-polymerized material placed between the object being cured and the tip of the curing unit affects light intensity at different distances. The study was conducted using two different light-curing units. To demonstrate the effect of distance on light output, the distance between the curing unit and the sensor was varied over 0, 2, 4, 6, 8 and 10 mm. Light outputs were recorded with a MARC resin calibrator device. The composite plates placed between the sensor and the curing-unit tip were pre-cured, 1 mm thick, and made of four resins with different filler contents. Light outputs were recorded at every distance with the composite on top of the sensor. In parallel, the effect of distance on light output was also measured without pre-cured material between the curing-unit tip and the light-measuring sensor. For the comparison, an intensity ratio between the values with and without the composite was calculated at each distance. Increasing the distance of the curing-unit tip from the sensor (i.e., from the object being cured) decreased light output, as expected. Placing a composite plate between the sensor and the curing unit reduced light output even further, as expected. When examining the intensity ratio (light output with composite : light output without composite), however, it was noticed that at 4–6 mm the ratio was higher than at 0, 2, 8 and 10 mm. The conclusion was that the highest possible curing power is achieved by placing the curing tip as close to the object as possible. If a solid piece of composite lies between the object being cured and the curing-unit tip, the highest curing power at the object is still achieved by placing the tip directly against the composite. If the distance from the composite surface was increased, however, curing power did not decrease as quickly as expected. This may be related to the diameter of the effective light beam being large compared to the diameters of the composite and the sensor. Second, it has been suggested that the filler particles of the resin composite could focus the transmitted light onto the sensor. Whether this phenomenon holds true, however, requires further research.
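In symbols (notation ours, not the study's), the quantity compared above is the distance-wise intensity ratio

\[
R(d) \;=\; \frac{I_{\text{composite}}(d)}{I_{\text{bare}}(d)},
\]

where $I(d)$ is the irradiance registered by the sensor at tip distance $d$; the unexpected finding is that $R(d)$ peaked at $d \approx$ 4–6 mm instead of staying constant or decreasing with distance.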
Abstract:
The present world energy production relies heavily on the combustion of solid fuels such as coal, peat, biomass and municipal solid waste, while the share of renewable fuels is anticipated to increase in the future to mitigate climate change. In Finland, peat and wood are widely used for energy production. In any case, the combustion of solid fuels results in the generation of several types of thermal conversion residues, such as bottom ash, fly ash and boiler slag. The predominant residue type is determined by the incineration technology applied, while its composition depends primarily on the composition of the fuels combusted. Extensive research has been conducted on the technical suitability of ash for multiple recycling methods. Most attention has been given to the recycling of coal combustion residues, as coal is the primary solid fuel consumed globally. The recycling methods for coal residues include utilization in the cement industry, in concrete manufacturing and in mine backfilling, to name a few. Biomass combustion residues have also been studied to some extent, with forest fertilization, road construction and road stabilization being the predominant utilization options. Lastly, residues from municipal solid waste incineration have attracted more attention recently, following the growing number of waste incineration plants globally. The recycling methods for waste incineration residues are the most limited due to their hazardous nature and varying composition, and include, among others, landfill construction, road construction and mine backfilling. In this study, the environmental and economic aspects of multiple recycling options for thermal conversion residues generated within a case-study area were examined. The case-study area was South-East Finland. The environmental analysis was performed using an internationally recognized methodology, life cycle assessment. The economic assessment was conducted applying a widely used methodology, cost-benefit analysis. Finally, the results of the analyses were combined to enable easier comparison of the recycling methods. The recycling methods included the use of ash in forest fertilization, road construction, road stabilization and landfill construction. Ash landfilling was set as the baseline scenario. Quantitative data about the amounts of ash generated and its composition was obtained from companies, their environmental reports, technical reports and other previously published literature. Overall, the amount of ash in the case-study area was 101 700 t. However, only data about 58 400 t of fly ash and 35 100 t of bottom ash and boiler slag could be included in the study, due to a lack of data about the leaching of heavy metals in some cases. The recycling methods were modelled according to previously published scientific studies. Overall, the results of the study indicated that ash utilization for the fertilization and neutralization of 17 600 ha of forest was the most economically beneficial method, increasing the net present value by 58% compared to ash landfilling. Regarding the environmental impact, the use of ash in the construction of 11 km of roads was the most attractive method, decreasing the environmental impact by 13% compared to ash landfilling. The least preferred method was the use of ash for landfill construction, since it only enabled an 11% increase in net present value while inducing an additional 1% of negative impact on the environment. Therefore, the following recycling route was proposed in the study.
Where possible and legally acceptable, fly and bottom ash should be recycled for forest fertilization, which has the strictest requirements of all the studied methods. If the quality of fly ash is not suitable for forest fertilization, it should be utilized, first, in paved road construction and, second, in road stabilization. Bottom ash not suitable for forest fertilization, as well as boiler slag, should be used in landfill construction. Landfilling should only be practiced when recycling by any of these methods is not possible due to legal requirements or when there is not enough demand on the market. The current demand for ash and possible future changes were also assessed in the study. Currently, the area of forest fertilized in the case-study area is only 451 ha, whereas about 17 600 ha of forest could be fertilized with the ash generated in the region. Given that the average forest fertilization rates in Finland are higher and the area treated with fellings is about 40 000 ha, the amount of ash utilized in forest fertilization could be increased. Regarding road construction, no new projects launched by the Centre for Economic Development, Transport and the Environment in the case-study area were identified. A potential application can be found in the construction of private roads; however, no centralized data about such projects is available. The use of ash in the stabilization of forest roads is not expected to increase in the future, given the current downward trend in the length of forest roads built. Finally, the use of ash in landfill construction is not a promising option due to the decreasing number of landfills in operation in Finland.
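For reference, the cost-benefit side of this comparison (figures such as the 58% net-present-value gain above) rests on the standard NPV formula; this is the generic form, not necessarily the thesis's exact parameterization:

\[
\mathrm{NPV} \;=\; \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^{t}},
\]

where $B_t$ and $C_t$ are the benefits and costs of a recycling scenario in year $t$ and $r$ is the discount rate; each scenario's NPV is then compared against the landfilling baseline.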
Abstract:
Optimal challenge occurs when an individual perceives the challenge of the task to be equaled or matched by his or her own skill level (Csikszentmihalyi, 1990). The purpose of this study was to test the impact of the OPTIMAL model on physical education students' motivation and perceptions of optimal challenge across four games categories (i.e., target, batting/fielding, net/wall, invasion). Enjoyment, competence, student goal orientation and activity level were examined in relation to the OPTIMAL model. A total of 22 (17 M; 5 F) students and their parents provided informed consent to take part in the study, and the students were taught four OPTIMAL lessons and four non-OPTIMAL lessons ranging across the four different games categories by their own teacher. All students completed the Task and Ego in Sport Questionnaire (TEOSQ; Duda & Whitehead, 1998), the Intrinsic Motivation Inventory (IMI; McAuley, Duncan, & Tammen, 1987) and the Children's Perception of Optimal Challenge Instrument (CPOCI; Mandigo, 2001). Sixteen students (two per lesson) were observed using the System for Observing Fitness Instruction Time tool (SOFIT; McKenzie, 2002). They also participated in a structured interview that took place after each lesson was completed. Quantitative results showed no overall significant difference in motivational outcomes when comparing OPTIMAL and non-OPTIMAL lessons. However, when the lessons were broken down into games categories, significant differences emerged. Levels of perceived competence were found to be higher in non-OPTIMAL batting/fielding lessons compared to OPTIMAL lessons, whereas levels of enjoyment and perceived competence were found to be higher in OPTIMAL invasion lessons in comparison to non-OPTIMAL invasion lessons. Qualitative results revealed significant findings regarding feelings of skill/challenge balance, enjoyment and competence in the OPTIMAL lessons. Moreover, the percentage of active movement time in OPTIMAL lessons was found to be nearly twice that of non-OPTIMAL lessons.
Abstract:
The hub location problem is an NP-hard problem that frequently arises in the design of transportation and distribution systems, postal delivery networks, and airline passenger flow. This work focuses on the Single Allocation Hub Location Problem (SAHLP). Genetic Algorithms (GAs) for the capacitated and uncapacitated variants of the SAHLP, based on new chromosome representations and crossover operators, are explored. The GAs are tested on two well-known sets of real-world problems with up to 200 nodes. The obtained results are very promising: for most of the test problems the GA obtains improved or best-known solutions while the computational time remains low. The proposed GAs can easily be extended to other variants of location problems arising in network design planning in transportation systems.
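As a rough illustration of this line of work, a minimal GA for the uncapacitated SAHLP is sketched below in Python. The encoding (each gene names the hub serving that node), the single-point crossover and all parameters are generic textbook choices, not the new chromosome representations and operators proposed here.

```python
# Minimal GA sketch for the uncapacitated single-allocation hub location
# problem. Encoding: assign[i] is the hub serving node i; node h is a hub
# iff assign[h] == h. All operators/parameters are generic illustrations.
import random

def cost(assign, w, d, fixed, alpha=0.75, chi=3.0, delta=2.0):
    """SAHLP objective: fixed hub-opening costs plus collection (chi),
    discounted inter-hub transfer (alpha) and distribution (delta) costs."""
    n = len(assign)
    total = sum(fixed[h] for h in set(assign))
    for i in range(n):
        for j in range(n):
            hi, hj = assign[i], assign[j]
            total += w[i][j] * (chi * d[i][hi]
                                + alpha * d[hi][hj]
                                + delta * d[hj][j])
    return total

def repair(assign):
    """Restore consistency: every node used as a hub serves itself."""
    for h in set(assign):
        assign[h] = h
    return assign

def random_individual(n, p):
    hubs = random.sample(range(n), p)
    return repair([random.choice(hubs) for _ in range(n)])

def crossover(a, b):
    """Single-point crossover on the allocation array, then repair."""
    cut = random.randrange(1, len(a))
    return repair(a[:cut] + b[cut:])

def mutate(assign, rate=0.1):
    """Reassign each node, with small probability, to a random open hub."""
    hubs = list(set(assign))
    return repair([g if random.random() > rate else random.choice(hubs)
                   for g in assign])

def solve(w, d, fixed, p=3, pop_size=60, generations=200):
    n = len(w)
    pop = [random_individual(n, p) for _ in range(pop_size)]
    best = min(pop, key=lambda x: cost(x, w, d, fixed))
    for _ in range(generations):
        def tournament():
            return min(random.sample(pop, 3),
                       key=lambda x: cost(x, w, d, fixed))
        pop = [mutate(crossover(tournament(), tournament()))
               for _ in range(pop_size - 1)] + [best]  # elitism
        best = min(pop, key=lambda x: cost(x, w, d, fixed))
    return best, cost(best, w, d, fixed)

if __name__ == "__main__":
    random.seed(0)
    n = 10  # toy instance with random geometry and flows
    pts = [(random.random(), random.random()) for _ in range(n)]
    d = [[((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
          for (xj, yj) in pts] for (xi, yi) in pts]
    w = [[random.randint(0, 5) for _ in range(n)] for _ in range(n)]
    fixed = [10.0] * n
    best, c = solve(w, d, fixed)
    print("hubs:", sorted(set(best)), "cost: %.2f" % c)
```

A real implementation for benchmark instances would add the actual problem data, capacity handling for the capacitated variant, and the specialized representations and crossovers this work proposes.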
Abstract:
Hub Location Problems play vital economic roles in transportation and telecommunication networks, where goods or people must be efficiently transferred from an origin to a destination point when direct origin-destination links are impractical. This work investigates the single allocation hub location problem and proposes a genetic algorithm (GA) approach for it. The effectiveness of using a single-objective criterion measure for the problem is first explored. Next, a multi-objective GA employing various fitness evaluation strategies, such as Pareto ranking, sum of ranks, and weighted sum strategies, is presented. The effectiveness of the multi-objective GA is shown by comparison with an Integer Programming strategy, the only other multi-objective approach found in the literature for this problem. Lastly, two new crossover operators are proposed and an empirical study is conducted using small to large problem instances of the Civil Aeronautics Board (CAB) and Australian Post (AP) data sets.
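For illustration, the Pareto-ranking fitness strategy mentioned above can be sketched as follows (a generic Goldberg-style ranking, assuming every objective is minimized; not necessarily the exact scheme used in this work):

```python
# Generic Pareto ranking for a multi-objective GA, assuming every
# objective is to be minimized. Rank 1 = the non-dominated front.
def dominates(a, b):
    """True if objective vector a dominates b (<= everywhere, < somewhere)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_ranks(objectives):
    """Assign each individual the index of the non-dominated front it
    joins once earlier fronts have been peeled away."""
    remaining = set(range(len(objectives)))
    ranks, front_no = {}, 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = front_no
        remaining -= front
        front_no += 1
    return [ranks[i] for i in range(len(objectives))]

# e.g., objectives = [(total_cost, num_hubs), ...] per chromosome
print(pareto_ranks([(1.0, 5.0), (2.0, 1.0), (2.0, 6.0), (3.0, 3.0)]))
# -> [1, 1, 2, 2]
```

Fitness can then be taken as, e.g., the reciprocal of the rank, which is what distinguishes this strategy from the sum-of-ranks and weighted-sum alternatives.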
Abstract:
The first and rate-limiting step of lipolysis is the removal of the first fatty acid from a triglyceride molecule; it is catalyzed by adipose triglyceride lipase (ATGL). ATGL is co-activated by comparative gene identification-58 (CGI-58) and inhibited by the G0/G1 switch gene-2 protein (G0S2). G0S2 has also recently been identified as a positive regulator of oxidative phosphorylation within the mitochondria. Previous research in cell culture has demonstrated a dose-dependent mechanism of ATGL inhibition by G0S2. However, our data are not consistent with this hypothesis. There was no change in G0S2 protein content during an acute lipolysis-inducing set of contractions in either whole muscle or isolated mitochondria, yet both ATGL and G0S2 increase following endurance training, in spite of the fact that there should be increased reliance on intramuscular lipolysis. Therefore, inhibition of ATGL by G0S2 appears to be governed by more complicated intracellular or post-translational regulation.
Abstract:
Accelerated life testing (ALT) is widely used to obtain reliability information about a product within a limited time frame. The Cox proportional hazards (PH) model is often utilized for reliability prediction. My master's thesis research focuses on designing accelerated life testing experiments for reliability estimation. We consider multiple step-stress ALT plans with censoring. The optimal stress levels and the times of changing the stress levels are investigated. We discuss optimal designs under three optimality criteria: D-, A- and Q-optimality. We note that the classical designs are optimal only if the assumed model is correct. Because predictions from ALT experimental data, attained under stress levels higher than the normal condition, involve extrapolation, the assumed model cannot be tested. Therefore, to guard against possible imprecision in the assumed PH model, a method for constructing robust designs is also explored.
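For reference, the three criteria named above are conventionally stated in terms of the Fisher information matrix $I(\theta)$ of the test plan (standard definitions; the precise Q-criterion used in the thesis may differ):

\[
\text{D-optimality: } \max \det I(\theta), \qquad
\text{A-optimality: } \min \operatorname{tr} I(\theta)^{-1}, \qquad
\text{Q-optimality: } \min \operatorname{Var}(\hat{q}),
\]

where $\hat{q}$ is the predicted quantity of interest at the normal use condition (e.g., an estimated lifetime quantile extrapolated from the elevated stress levels).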
Abstract:
Research report
Abstract:
We study fairness in economies with one private good and one partially excludable nonrival good. A social ordering function determines for each profile of preferences an ordering of all conceivable allocations. We propose the following Free Lunch Aversion condition: if the private good contributions of two agents consuming the same quantity of the nonrival good have opposite signs, reducing that gap improves social welfare. This condition, combined with the more standard requirements of Unanimous Indifference and Responsiveness, delivers a form of welfare egalitarianism in which an agent's welfare at an allocation is measured by the quantity of the nonrival good that, consumed at no cost, would leave her indifferent to the bundle she is assigned.
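In symbols (our notation, not the paper's), the welfare index described in the last sentence is the number $w_i(z)$ solving

\[
\big(0,\; w_i(z)\big) \;\sim_i\; z_i,
\]

i.e., the quantity of the nonrival good that agent $i$, contributing nothing, finds indifferent to her assigned bundle $z_i$ of private-good contribution and nonrival-good consumption; the social ordering function then compares allocations egalitarian-wise on these indices.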