953 results for the pay-off method
Abstract:
Real option valuation, in particular the fuzzy pay-off method, has proven to be useful in defining risk and visualizing imprecision of investments in various industry applications. This study examines whether the evaluation of risk and profitability for public real estate investments can be improved by using real option methodology. Firstly, the context of real option valuation in the real estate industry is examined. Further, an empirical case study is performed on 30 real estate investments of a Finnish government enterprise in order to determine whether the presently used investment analysis system can be complemented by the pay-off method. Despite challenges in the application of the pay-off method to the case company’s large investment base, real option valuation is found to create additional value and facilitate more robust risk analysis in public real estate applications.
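For readers unfamiliar with the fuzzy pay-off method, the following is a minimal sketch of its core calculation under common simplifying assumptions: the NPV pay-off distribution is taken to be a triangular fuzzy number built from three scenario NPVs (pessimistic, best guess, optimistic), and the real option value is the share of the distribution's area lying above zero multiplied by the mean of the positive side. The mean of the positive side is approximated here by a membership-weighted average rather than the possibilistic mean of the original formulation, and the scenario values are purely illustrative.

```python
import numpy as np

def fuzzy_payoff_value(pessimistic, best_guess, optimistic, n=100_001):
    """Real option value of a triangular fuzzy NPV distribution (pay-off method sketch).

    Assumes pessimistic < best_guess < optimistic. The membership function rises
    linearly from `pessimistic` to 1 at `best_guess` and falls back to 0 at
    `optimistic`. Real option value = (positive area / total area) * mean of the
    positive side, with the mean approximated as a membership-weighted average.
    """
    x = np.linspace(pessimistic, optimistic, n)
    mu = np.where(
        x <= best_guess,
        (x - pessimistic) / (best_guess - pessimistic),
        (optimistic - x) / (optimistic - best_guess),
    )
    dx = x[1] - x[0]
    total_area = np.sum(mu) * dx
    pos = x > 0
    positive_area = np.sum(mu[pos]) * dx
    if positive_area == 0:
        return 0.0
    mean_positive = np.sum(x[pos] * mu[pos]) * dx / positive_area
    return (positive_area / total_area) * mean_positive

# Illustrative scenario NPVs (placeholder values, e.g. in MEUR) for one investment
print(fuzzy_payoff_value(pessimistic=-2.0, best_guess=1.5, optimistic=6.0))
```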
Abstract:
The aim of this paper is to measure the returns to human capital. We use a unique data set consisting of matched employer-employee information. Data on individuals' human capital include a set of 26 competences that capture the utilization of workers' skills in a very detailed way. Thus, we can expand the concept of human capital and discuss which types of skills are more productive in the workplace and, hence, generate a higher pay-off for workers. The rich information on firm and workplace characteristics allows us to introduce a broad range of controls and to improve on previous research in this field. This paper gives evidence that the returns to generic competences differ depending on the worker's position in the firm. Only numeracy skills are rewarded independently of the worker's occupational status. The level of technology used by the firm in the production process does not directly increase workers' pay, but it influences the pay-off to some of the competences. JEL Classification: J24, J31
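The abstract does not spell out the estimation strategy, but returns to competences in matched employer-employee data are typically estimated with a Mincer-type log-wage regression. The sketch below illustrates that setup with statsmodels; all column names (log_wage, numeracy, communication, occupation, firm_technology, firm_id, and the input file) are hypothetical placeholders, not variables from the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical matched employer-employee data; column names are placeholders.
df = pd.read_csv("matched_employer_employee.csv")

# Mincer-type log-wage regression: competence scores plus worker, firm and
# workplace controls.  Interacting competences with occupational status lets
# the pay-off to a skill differ by the worker's position in the firm.
model = smf.ols(
    "log_wage ~ numeracy * C(occupation) + communication * C(occupation)"
    " + experience + I(experience**2) + C(education)"
    " + C(firm_technology) + C(industry) + firm_size",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})

print(model.summary())
```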
Abstract:
The aim of this Master's (Pro Gradu) thesis is to construct a tool for the ex-ante assessment of the profitability of a Lean project by applying the pay-off method. In addition, the study seeks to establish what related academic research has previously been conducted and what challenges such an assessment involves. The study is motivated by a research gap identified in the academic literature on the ex-ante assessment of Lean project profitability. The empirical research was carried out as a qualitative case study in cooperation with a consulting firm specializing in Lean projects. The empirical work followed a chosen methodology whose purpose was to systematically construct a tool meeting the objective of the study. Data were collected through thematic interviews conducted in two parts, and the resulting material was analyzed using the grounded theory method. The results show that the constructed tool, which applies the pay-off method, makes it possible to assess the profitability of a Lean project in advance. Based on the results, the tool also addresses the challenges identified in the study that have previously limited such assessments. According to the study, the tool can also support the partner company's sales of Lean projects.
Abstract:
Voting is fundamental to democracy; however, this decisive democratic act requires considerable effort. Decision making at elections depends largely on the interest to gather information about candidates and parties, the effort to process the information at hand, and the motivation to reach a vote choice. Especially in electoral systems with highly fragmented party systems and hundreds of candidates running for office, the process of decision making in the pre-election sphere is highly demanding. In the age of information and communication technologies, new possibilities for gathering and processing such information are available. Voting Advice Applications (VAAs) provide guidance to voters prior to the act of voting and assist them in choosing between different candidates and parties on the basis of issue congruence. Although such tools are now widely used all over the world, scientific inquiry into their effect on electoral behavior is ongoing. This paper adds to the current debate by focusing on whether the popularity of candidates on the Swiss VAA smartvote paid off at the 2007 Swiss federal elections and whether there is a direct link between a candidate's performance on the tool and his or her electoral performance.
Abstract:
In many languages, masculine forms (e.g., German Lehrer, “teachers, masc.”) have traditionally been used to refer to both women and men, although feminine forms are available, too. Feminine-masculine word pairs (e.g., German Lehrerinnen und Lehrer, “teachers, fem. and teachers, masc.”) are recommended as gender-fair alternatives. A large body of empirical research documents that the use of gender-fair forms instead of masculine forms has a substantial impact on mental representations. Masculine forms activate more male representations even when used in a generic sense, whereas word pairs (e.g., German Lehrerinnen und Lehrer, “teachers, fem. and teachers, masc.”) lead to a higher cognitive inclusion of women (i.e., visibility of women). Some recent studies, however, have also shown that in a professional context word pairs may be associated with lesser status. The present research is the first to investigate both effects within a single paradigm. A cross-linguistic (Italian and German) study with 391 participants shows that word pairs help to avoid a male bias in the gender-typing of professions and increase women's visibility; at the same time, they decrease the estimated salaries of typically feminine professions (but do not affect perceived social status or competence). This potential payoff has implications for language policies aiming at gender-fairness.
Abstract:
This thesis presents an analysis of the recently enacted Russian renewable energy policy based on a capacity mechanism. Given its novelty and sparse coverage in the academic literature, the aim of the thesis is to analyze how the capacity mechanism influences investors' decision-making. The research applies several approaches to investment analysis. First, a classical financial model was built in Microsoft Excel® and crisp efficiency indicators such as net present value were determined. Second, a sensitivity analysis was performed to understand how different factors influence project profitability. Third, the Datar-Mathews method was applied: a Monte Carlo simulation implemented in Matlab Simulink® disclosed all possible outcomes of the investment project and enabled real option thinking. Fourth, the previous analysis was replicated with the fuzzy pay-off method in Microsoft Excel®. Finally, the decision-making process under the capacity mechanism was illustrated with a decision tree. The capacity remuneration, paid over 15 years, is calculated individually for each renewable energy project as a variable annuity that guarantees a particular return on investment, adjusted for changes in national interest rates. The results indicate that the capacity mechanism creates a real option to invest in a renewable energy project by ensuring project profitability regardless of market conditions, provided that project-internal factors are managed properly. The latter means keeping capital expenditures within the set limits, achieving production performance above 75% of the target indicators, and fulfilling the localization requirement, i.e., producing equipment and services within the country. The existence of this real option shapes the decision-making process as follows. Initially, the investor should identify a suitable location for the planned power plant where high production performance can be achieved, and lock in this location in case of competition. The investor should then wait until the capital cost limit and the localization requirement can be met, after which the decision to invest can be made without any risk to project profitability. With respect to technology, investment in a solar PV power plant is more attractive than in wind or small hydro power, since it has a higher weighted net present value and a lower standard deviation; however, this does not change the decision-making strategy, which remains the same for each technology type. The fuzzy pay-off method proved able to disclose the same patterns of information as the Monte Carlo simulation. Being effective for investment analysis under uncertainty and easy to use, it can be recommended as a sufficient analytical tool for investors and researchers. Beyond these results, the thesis contributes to the academic literature with a detailed description of the capacity price calculation for renewable energy, which was not previously available in English. In terms of methodological novelty, advanced approaches such as the Datar-Mathews method and the fuzzy pay-off method are applied on top of an investment profitability model that also incorporates the capacity remuneration calculation. A comparison of the effects of two different renewable energy support schemes, the Russian capacity mechanism and the feed-in premium, contributes to comparative policy studies and yields useful inferences for researchers and policymakers.
The limitations of this research are the simplification of assumptions to the country-average level, which restricts the ability to analyze renewable energy investment by region, and the restriction of the studied policy to the wholesale power market, which leaves retail markets and remote areas outside the scope and thus excludes medium and small renewable energy investments from the research focus. Addressing these limitations would allow a full picture of the Russian renewable energy investment profile to be drawn.
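As a rough illustration of the Datar-Mathews step described above, the sketch below simulates an NPV distribution by Monte Carlo and values the real option as the mean of the pay-off distribution truncated at zero, which is the core of the method. All distributions and parameter values are illustrative placeholders, not the thesis's actual model inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative inputs for a renewable energy project (placeholder values).
capex = rng.triangular(90, 100, 115, n)      # capital expenditure, MEUR
annual_cash_flow = rng.normal(14, 3, n)      # yearly cash flow incl. capacity payments, MEUR
years, discount_rate = 15, 0.10

# Present value of 15 years of cash flows (annuity factor) minus the investment.
annuity = (1 - (1 + discount_rate) ** -years) / discount_rate
npv = annual_cash_flow * annuity - capex

# Datar-Mathews real option value: expected value of the pay-off distribution
# truncated at zero (negative outcomes count as zero, i.e. the project is not undertaken).
real_option_value = np.mean(np.maximum(npv, 0.0))
prob_profitable = np.mean(npv > 0)

print(f"mean NPV          : {npv.mean():8.2f} MEUR")
print(f"real option value : {real_option_value:8.2f} MEUR")
print(f"P(NPV > 0)        : {prob_profitable:8.2%}")
```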
Abstract:
This paper presents the method and findings of a contingent valuation (CV) study that aimed to elicit United Kingdom citizens' willingness to pay to support legislation to phase out the use of battery cages for egg production in the European Union (EU). The method takes account of various biases associated with the CV technique, including 'warm glow', 'part-whole' and sample response biases. The estimated mean willingness to pay to support the legislation is used to estimate the annual benefit of the legislation to UK citizens. This is compared with the estimated annual costs of the legislation over a 12-year period, which allows for readjustment by the UK egg industry. The analysis shows that the estimated benefits of the legislation outweigh the costs. The study demonstrates that CV is a potentially useful technique for assessing the likely benefits associated with proposed legislation. However, estimates from CV studies must be treated with caution: it is important that they are derived from carefully designed surveys and that the willingness-to-pay estimation method allows for various biases. (C) 2003 Elsevier Science B.V. All rights reserved.
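The benefit-cost comparison described above reduces to simple aggregation and discounting; the sketch below shows that arithmetic with clearly illustrative numbers (mean WTP, household count, annual industry cost, discount rate), none of which are taken from the study.

```python
# Illustrative cost-benefit comparison for proposed legislation (placeholder numbers).
mean_wtp_per_household = 10.0            # GBP per household per year, from a CV survey
n_households = 24_000_000                # number of households (placeholder)
annual_cost_to_industry = 150_000_000.0  # GBP per year (placeholder)
horizon_years = 12                       # readjustment period for the egg industry
discount_rate = 0.035

def present_value(annual_amount, rate, years):
    """Present value of a constant annual amount over a fixed horizon."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

pv_benefits = present_value(mean_wtp_per_household * n_households, discount_rate, horizon_years)
pv_costs = present_value(annual_cost_to_industry, discount_rate, horizon_years)

print(f"PV of benefits: {pv_benefits:,.0f} GBP")
print(f"PV of costs:    {pv_costs:,.0f} GBP")
print(f"Net benefit:    {pv_benefits - pv_costs:,.0f} GBP")
```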
Abstract:
Various methods of assessment have been applied to the One Dimensional Time to Explosion (ODTX) apparatus and experiments with the aim of allowing an estimate of the comparative violence of the explosion event to be made. Non-mechanical methods were a simple visual inspection, measuring the increase in the void volume of the anvils following an explosion, and measuring the velocity of the sound produced by the explosion over 1 metre. Mechanical methods included monitoring piezo-electric devices inserted in the frame of the machine and measuring the rotational velocity of a rotating bar placed on top of the anvils after it had been displaced by the shock wave. This last method, which resembles original Hopkinson Bar experiments, seemed the easiest to apply and analyse, giving relative rankings of violence and the possibility of calculating a "detonation" pressure.
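One simple way to turn the rotating-bar measurement into a relative violence ranking is to convert the observed rotational velocity into the kinetic energy imparted to the bar. The sketch below assumes a uniform bar spun about its centre; the bar mass and length and the measured angular velocities are hypothetical, not values from the experiments.

```python
def bar_kinetic_energy(mass_kg, length_m, omega_rad_s):
    """Rotational kinetic energy of a uniform bar spinning about its centre.

    Moment of inertia of a uniform bar about its centre: I = m * L**2 / 12.
    Kinetic energy: E = 0.5 * I * omega**2.
    """
    moment_of_inertia = mass_kg * length_m ** 2 / 12.0
    return 0.5 * moment_of_inertia * omega_rad_s ** 2

# Hypothetical ODTX shots: (label, measured rotational velocity in rad/s)
shots = [("shot A", 12.0), ("shot B", 31.0), ("shot C", 18.5)]
bar_mass, bar_length = 0.35, 0.25   # kg, m (placeholder bar geometry)

# Rank shots by the energy imparted to the bar as a proxy for explosion violence.
for label, omega in sorted(shots, key=lambda s: -bar_kinetic_energy(bar_mass, bar_length, s[1])):
    print(f"{label}: {bar_kinetic_energy(bar_mass, bar_length, omega):.2f} J")
```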
Abstract:
The bare nucleus S(E) factors for the (2)H(d,p)(3)H and (2)H(d,n)(3)He reactions have been measured for the first time via the Trojan Horse Method off the proton in (3)He, from 1.5 MeV down to 2 keV. This range overlaps with the relevant region for Standard Big Bang Nucleosynthesis as well as with the thermal energies of future fusion reactors and of deuterium burning in the Pre-Main-Sequence phase of stellar evolution. This is the first pioneering experiment in the quasi-free regime where the charged spectator is detected. Both the energy dependence and the absolute value of the S(E) factors deviate by more than 15% from available direct data, with new S(0) values of 57.4 +/- 1.8 MeV b for (3)H + p and 60.1 +/- 1.9 MeV b for (3)He + n. None of the existing fitting curves is able to provide the correct slope of the new data over the full range, thus calling for a revision of the theoretical description. This has consequences for the calculation of the reaction rates, with more than a 25% increase at the temperatures of future fusion reactors. (C) 2011 Elsevier B.V. All rights reserved.
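To see how a revised S(E) feeds into thermonuclear reaction rates, note that the non-resonant rate is proportional to the integral of S(E) exp(-E/kT - b/sqrt(E)), where b = 31.29 Z1 Z2 sqrt(mu) keV^0.5 (E in keV, reduced mass mu in amu) encodes the Coulomb penetrability. The sketch below compares the relative rate from two linear S(E) parameterizations at a plasma temperature typical of a fusion reactor; only the 57.4 MeV b intercept is taken from the abstract, while the slopes, the "old" intercept, and the temperature are illustrative assumptions.

```python
import numpy as np

# Non-resonant thermonuclear reaction rate (up to constant prefactors):
#   rate(kT) ∝ ∫ S(E) * exp(-E/kT - b/sqrt(E)) dE,
# with b = 31.29 * Z1 * Z2 * sqrt(mu) keV^0.5 (E in keV, reduced mass mu in amu).
Z1, Z2 = 1, 1
mu_amu = 2.0136 / 2.0                 # reduced mass of d + d in amu
b = 31.29 * Z1 * Z2 * np.sqrt(mu_amu)

def relative_rate(s_factor, kT_keV, e_grid_keV):
    """Rate integral (arbitrary units) for a given S(E) parameterization."""
    integrand = s_factor(e_grid_keV) * np.exp(-e_grid_keV / kT_keV - b / np.sqrt(e_grid_keV))
    return np.sum(integrand) * (e_grid_keV[1] - e_grid_keV[0])

E = np.linspace(0.5, 300.0, 60_000)   # energy grid in keV, covers the Gamow peak
kT = 10.0                             # keV, illustrative fusion-reactor plasma temperature

# Linear S(E) fits in MeV b: the "new" intercept is the abstract's 57.4 MeV b for
# 2H(d,p)3H; the slopes and the "old" intercept are purely illustrative placeholders.
s_new = lambda e: 57.4 + 0.10 * e
s_old = lambda e: 52.0 + 0.12 * e

print(f"rate(new S) / rate(old S) at kT = {kT} keV: "
      f"{relative_rate(s_new, kT, E) / relative_rate(s_old, kT, E):.3f}")
```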
Abstract:
Gauging the maximum willingness to pay (WTP) for a product accurately is a critical success factor that determines not only market performance but also financial results. A number of approaches have therefore been developed to accurately estimate consumers' willingness to pay. Here, four commonly used measurement approaches are compared using real purchase data as a benchmark. The relative strengths of each method are analyzed on the basis of statistical criteria and, more importantly, on their potential to predict managerially relevant criteria such as optimal price, quantity and profit. The results show a slight advantage for incentive-aligned approaches, though the market setting needs to be considered in order to choose the best-fitting procedure.
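The managerial criteria mentioned above follow directly from an estimated WTP distribution: demand at a candidate price is the share of consumers whose WTP is at least that price, and profit is margin times predicted quantity. The sketch below shows that calculation on simulated WTP estimates; the WTP distribution, unit cost and market size are hypothetical placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical individual WTP estimates from one measurement approach (EUR).
wtp = rng.lognormal(mean=2.0, sigma=0.5, size=5_000)

unit_cost = 4.0          # EUR per unit (placeholder)
market_size = 100_000    # potential buyers (placeholder)

candidate_prices = np.linspace(wtp.min(), wtp.max(), 500)
demand_share = np.array([(wtp >= p).mean() for p in candidate_prices])
quantity = demand_share * market_size
profit = (candidate_prices - unit_cost) * quantity

best = np.argmax(profit)
print(f"optimal price   : {candidate_prices[best]:.2f} EUR")
print(f"predicted volume: {quantity[best]:,.0f} units")
print(f"predicted profit: {profit[best]:,.0f} EUR")
```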
Abstract:
In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences in genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously raises a multiple-testing problem and will give false-positive results. Although this problem can be dealt with effectively through several approaches such as Bonferroni correction, permutation testing and false discovery rates, patterns of the joint effects of several genes, each with a weak effect, might not be detectable. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. Exhaustive search of all SNP subsets is computationally infeasible for millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset in big data sets where the number of feature SNPs far exceeds the number of observations.
In this study, we take two steps to achieve this goal. First, we selected 1000 SNPs through an effective filter method; then we performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis in terms of classification performance. Finally, we performed chi-square tests to examine the relationship between each SNP and disease from another point of view.
In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through linear discriminant analysis (LDA) is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that an exhaustive search of small subsets, with one SNP, two SNPs, or three SNPs based on the best 100 composite 2-SNP combinations, can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of SNP subsets. Although sequential forward floating selection can be applied to prevent the nesting effect of forward selection, it does not always outperform the latter, owing to overfitting from observing more complex subset states.
Our results also indicate that HMSS, as a criterion for evaluating the classification ability of a function, can be used with imbalanced data without modifying the original dataset, unlike classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and its ability to detect the target status is superior to that of traditional LDA in this study.
From our results, the best test-set HMSS for predicting CVD, stroke, CAD and psoriasis through sIB is 0.59406, 0.641815, 0.645315 and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls can reach 0.708999, 0.863216, 0.639918 and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be at least 0.4. On the other hand, the highest test accuracy of sIB for diagnosing a disease among cases can reach 0.748644, 0.789916, 0.705701 and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.
A further genome-wide association study through the chi-square test shows that no significant SNPs are detected at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. Results for the WTCCC data detect only two significant SNPs associated with CAD. In the genome-wide study of psoriasis, most of the top 20 SNP markers with high classification accuracy are also significantly associated with the disease by chi-square test at the cut-off value 1.11E-07.
Although our classification methods can achieve high accuracy in this study, complete descriptions of those classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or a more efficient computing system, neither of which is currently feasible in our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high predictive ability; SNPs with good discriminant power are not necessarily causal markers for the disease.
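As an illustration of the filtering step described above, the sketch below scores each SNP one at a time with linear discriminant analysis and ranks SNPs by the harmonic mean of sensitivity and specificity (HMSS) on held-out data, keeping the top 1000. The data-loading step and array names are placeholders, and the study's full pipeline (including the sIB classifier and the wrapper search) is not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

def hmss(y_true, y_pred):
    """Harmonic mean of sensitivity and specificity (robust to class imbalance)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    if sensitivity + specificity == 0:
        return 0.0
    return 2 * sensitivity * specificity / (sensitivity + specificity)

# Placeholder genotype matrix (0/1/2 minor-allele counts) and case/control labels.
X = np.load("genotypes.npy")   # shape: (n_subjects, n_snps)
y = np.load("phenotype.npy")   # 0 = control, 1 = case

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=1
)

# Filter step: score each SNP individually with LDA and rank by held-out HMSS.
scores = []
for j in range(X.shape[1]):
    lda = LinearDiscriminantAnalysis()
    lda.fit(X_train[:, [j]], y_train)
    scores.append(hmss(y_test, lda.predict(X_test[:, [j]])))

top_snps = np.argsort(scores)[::-1][:1000]   # indices of the 1000 best-scoring SNPs
print("best single-SNP HMSS:", max(scores))
```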