973 results for marked based pricing
Abstract:
The main objective of this study is to analyze the role and potential of transfer pricing as a means of management control in large organizations, with special emphasis on its potential for motivating profit center managers. The research approach is theoretical, and the literature review covers studies on profit center organizations, performance measurement and analysis, incentive systems, transfer pricing techniques, and agency theory. Based on the analysis, transfer pricing appears to be a suitable tool for controlling, motivating, and managing profit center managers, provided that performance measurement can be carried out fairly and transfer prices are set on fair assumptions. The motivating effects of transfer pricing can be enhanced if the reward system is linked to the performance measurement system. The synthesis presents the effects of transfer pricing on profit center managers' behavior, together with a view of what constitutes a fair transfer pricing policy.
Abstract:
We conducted a survey covering 3334 bloodstream infections (BSIs) due to E. coli diagnosed in 2005-2014 at a stable cohort of hospitals. Marked increases in incidence were observed for community-acquired (CA) BSIs in patients aged >75 years, CA-BSIs of digestive origin in patients aged 60-74 years, healthcare-associated (HCA) BSIs, and BSIs associated with ESBL (extended-spectrum β-lactamase)-producing E. coli (ESBLEc). Using MLST, we studied the genetic diversity of 412 BSI isolates recovered during the 2014 survey: 7 major sequence type complexes (STCs) were revealed in phylogenetic group B2, 3 in group A/B1 and 2 in group D. Among the 31 ESBLEc isolates, one third belonged to STC 131. We searched for possible associations between clonal groups, clinical determinants and characteristics of BSIs: isolates from groups B2 (except STC 131) and D were susceptible to antibiotics and associated with BSIs of urinary origin in patients <60 years. STC 131 and group A/B1 isolates were multidrug resistant and associated with CA-BSIs of digestive origin in patients aged 60-74 with a recent history of antibiotic treatment. STC 131 isolates were associated with HCA-BSIs in patients with recent or current hospitalization in a long-stay unit. We provide a unique population-based picture of the epidemiology of E. coli BSI. The aging of the population led to an increase in the number of cases caused by the B2 and D isolates generally implicated in BSIs. In addition, a trend toward increasing gut colonization by multidrug-resistant isolates, revealed by the rise in the incidence of BSIs of digestive origin caused by STC 131 and group A/B1 (STCs 10, 23, and 155) isolates, together with a significant increase in the frequency of BSIs in elderly patients with recent antibiotic treatment, suggests that antibiotic use may have contributed to the growing incidence of BSI.
Abstract:
DNA is nowadays swabbed routinely to investigate serious and volume crimes, but research remains scarce when it comes to determining the criteria that may impact the success rate of DNA swabs taken on different surfaces and situations. To investigate these criteria in fully operational conditions, DNA analysis results of 4772 swabs taken by the forensic unit of a police department in Western Switzerland over a 2.5-year period (2012-2014) in volume crime cases were considered. A representative and random sample of 1236 swab analyses was extensively examined and codified, describing several criteria such as whether the swabbing was performed at the scene or in the lab, the zone of the scene where it was performed, the kind of object or surface that was swabbed, whether the target specimen was a touch surface or a biological fluid, and whether the swab targeted a single surface or combined different surfaces. The impact of each criterion and of their combination was assessed in regard to the success rate of DNA analysis, measured through the quality of the resulting profile, and whether the profile resulted in a hit in the national database or not. Results show that some situations - such as swabs taken on door and window handles for instance - have a higher success rate than average swabs. Conversely, other situations lead to a marked decrease in the success rate, which should discourage further analyses of such swabs. Results also confirm that targeting a DNA swab on a single surface is preferable to swabbing different surfaces with the intent to aggregate cells deposited by the offender. Such results assist in predicting the chance that the analysis of a swab taken in a given situation will lead to a positive result. 
The study could therefore inform an evidence-based approach to decision-making at the crime scene (what to swab or not) and at the triage step (what to analyse or not), thus helping to save resources and increase the efficiency of forensic science efforts.
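The per-criterion success-rate computation described above can be sketched in a few lines. The category names and data below are illustrative assumptions, not the study's actual coding scheme:

```python
from collections import defaultdict

def success_rates(swabs):
    """Success rate of DNA analysis per swab criterion.

    swabs: iterable of (category, hit) pairs, where `hit` is True when the
    resulting profile led to a database hit. Category names are hypothetical.
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for category, hit in swabs:
        totals[category] += 1
        hits[category] += bool(hit)
    return {c: hits[c] / totals[c] for c in totals}
```

For example, `success_rates([("door_handle", True), ("mixed_surface", False)])` returns `{"door_handle": 1.0, "mixed_surface": 0.0}`; in practice each rate would be compared against the overall average to decide which situations justify analysis.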
Abstract:
Integrating single nucleotide polymorphism (SNP) p-values from genome-wide association studies (GWAS) across genes and pathways is a strategy to improve statistical power and gain biological insight. Here, we present Pascal (Pathway scoring algorithm), a powerful tool for computing gene and pathway scores from SNP-phenotype association summary statistics. For gene score computation, we implemented analytic and efficient numerical solutions to calculate test statistics. We examined in particular the sum and the maximum of chi-squared statistics, which measure the average and the strongest association signals per gene, respectively. For pathway scoring, we use a modified Fisher method, which offers not only a significant power improvement over more traditional enrichment strategies, but also eliminates the problem of arbitrary threshold selection inherent in any binary-membership-based pathway enrichment approach. We demonstrate the marked increase in power by analyzing summary statistics from dozens of large meta-studies for various traits. Our extensive testing indicates that our method not only excels in rigorous type I error control, but also results in more biologically meaningful discoveries.
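Under the simplifying assumption of independent SNPs, the maximum-of-chi-squared gene score reduces to a Šidák-corrected minimum p-value; this is only a sketch of the idea, since Pascal itself corrects for linkage disequilibrium between SNPs, which the code below does not:

```python
def gene_score_max(snp_pvalues):
    """Gene-level p-value for the strongest per-SNP signal.

    With K independent SNPs, the minimum of K uniform p-values has survival
    function (1 - p)**K, so the gene p-value is the Sidak-corrected minimum.
    This ignores LD between SNPs, which Pascal handles properly.
    """
    k = len(snp_pvalues)
    p_min = min(snp_pvalues)
    return 1.0 - (1.0 - p_min) ** k
```

A single-SNP gene keeps its p-value unchanged, while a gene with many SNPs is penalized for the implicit multiple testing.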
Abstract:
The aim of this thesis is to examine whether pricing anomalies exist in the Finnish stock market by comparing the performance of quantile portfolios formed on the basis of either individual valuation ratios, composite value measures, or combined value and momentum indicators. All the research papers included in the thesis show evidence of value anomalies in the Finnish stock market. In the first paper, the sample of stocks over the 1991-2006 period is divided into quintile portfolios based on four individual valuation ratios (i.e., E/P, EBITDA/EV, B/P, and S/P) and three hybrids of them (i.e., composite value measures). The results show the superiority of composite value measures as a selection criterion for value stocks, particularly when EBITDA/EV is employed as the earnings multiple. The main focus of the second paper is on the impact of the holding period length on the performance of value strategies. As an extension to the first paper, two more individual ratios (i.e., CF/P and D/P) are included in the comparative analysis. The sample of stocks over the 1993-2008 period is divided into tercile portfolios based on six individual valuation ratios and three hybrids of them. The use of either the dividend yield criterion or one of the three composite value measures examined results in the best value portfolio performance according to all performance metrics used. Parallel to the findings of many international studies, our results from performance comparisons indicate that for the sample data employed, the yearly reformation of portfolios is not necessarily optimal for gaining maximally from the value premium. Instead, the value investor may extend the holding period up to 5 years without any decrease in long-term portfolio performance. The same also holds for the results of the third paper, which examines the applicability of the data envelopment analysis (DEA) method in discriminating undervalued stocks from overvalued ones.
The fourth paper examines the added value of combining price momentum with various value strategies. Taking account of price momentum improves the performance of value portfolios in most cases. The performance improvement is greatest for value portfolios formed on the basis of the 3-composite value measure consisting of the D/P, B/P and EBITDA/EV ratios. The risk-adjusted performance can be enhanced further by following a 130/30 long-short strategy in which the long position in value winner stocks is leveraged by 30 percent while simultaneously selling short glamour loser stocks by the same amount. The average return of the long-short position proved to be more than double the stock market average, coupled with a decrease in volatility. The fifth paper offers a new approach to combining value and momentum indicators into a single portfolio-formation criterion using different variants of DEA models. The results throughout the 1994-2010 sample period show that the top-tercile portfolios outperform both the market portfolio and the corresponding bottom-tercile portfolios. In addition, the middle-tercile portfolios also outperform the comparable bottom-tercile portfolios when DEA models are used as a basis for the stock classification criteria. To my knowledge, such strong performance differences have not been reported in earlier peer-reviewed studies that have employed the comparable quantile approach of dividing stocks into portfolios. Consistent with the previous literature, the division of the full sample period into bullish and bearish periods reveals that the top-quantile DEA portfolios lose far less of their value during bearish conditions than do the corresponding bottom portfolios. The sixth paper extends the sample period employed in the fourth paper by one year (i.e., 1993-2009), also covering the first years of the recent financial crisis. It contributes to the fourth paper by examining the impact of stock market conditions on the main results.
Consistent with the fifth paper, value portfolios lose much less of their value during bearish conditions than do stocks on average. The inclusion of a momentum criterion adds some value for an investor during bullish conditions, but this added value turns negative during bearish conditions. During bear market periods some of the value loser portfolios perform even better than their value winner counterparts. Furthermore, the results show that the recent financial crisis has reduced the added value of using combinations of momentum and value indicators as portfolio formation criteria. However, since the stock markets have historically been bullish more often than bearish, the combination of the value and momentum criteria has paid off for the investor despite the fact that its added value during bearish periods is, on average, negative.
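The quantile-portfolio construction used throughout these papers can be sketched as follows; the tickers and ratio values are made-up inputs, and a composite value score is taken here as the average rank across the individual valuation ratios:

```python
def composite_value_scores(ratio_tables):
    """Average rank across several valuation ratios (higher ratio = cheaper).

    ratio_tables: list of dicts mapping ticker -> ratio (e.g. E/P, B/P,
    EBITDA/EV). All dicts are assumed to share the same tickers.
    """
    tickers = ratio_tables[0].keys()
    scores = {}
    for t in tickers:
        ranks = []
        for table in ratio_tables:
            # rank 0 = cheapest stock under this ratio
            ordering = sorted(table, key=table.get, reverse=True)
            ranks.append(ordering.index(t))
        scores[t] = sum(ranks) / len(ranks)
    return scores  # lower composite rank = deeper value

def tercile_portfolios(scores):
    """Split into value / middle / glamour terciles (assumes >= 3 stocks)."""
    ranked = sorted(scores, key=scores.get)
    cut = len(ranked) // 3
    return ranked[:cut], ranked[cut:-cut], ranked[-cut:]
```

The same split applies to quintiles by changing the cut points; a value-momentum combination would simply average in a momentum rank before sorting.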
Abstract:
This Master’s Thesis deals with the topic of transfer pricing documentation in Finland and China. The goal of the research is to find out what differences exist in a single case company’s transfer pricing documentation when following Chinese or Finnish transfer pricing regulations. The study is carried out as case study research. The theoretical framework consists of information on different transfer pricing topics and the transfer pricing documentation regulations in China and Finland. The main research material was the case company’s transfer pricing documents, supported by open discussion with one of the case company’s employees. The study compared the 2009 and 2010 documents: the 2009 document was prepared following the Finnish method, while the 2010 document was based on the Chinese documentation principles. The conclusion is that the content of the documents was largely similar, with the main differences lying in the way the content is presented and the level of detail used in the documents.
Abstract:
This thesis investigates the pricing of liquidity in the French stock market. The study covers 835 ordinary shares traded on Paris Euronext over the period 1996-2014. The author utilizes the Liquidity-Adjusted Capital Asset Pricing Model (LCAPM) developed by Acharya and Pedersen (2005) to test whether liquidity level and liquidity risks significantly affect stock returns. Three different liquidity measures – Amihud, FHT, and PQS – are incorporated into the model to detect any differences between the results they provide. It appears that the findings largely depend on the liquidity measure used. In general, the results mostly point to an insignificant influence of liquidity level and risks, as well as market risk, on stock returns. A similar conclusion was reported earlier by Lee (2011) for several regions, including France. This finding of the thesis, however, is not consistent across all the liquidity measures. Nevertheless, the difference in the results between these measures provides new insight for the existing literature on this topic. The Amihud-based findings might indicate that market resiliency is not priced in the French stock market. At the same time, the contradicting results from FHT and PQS provide some foundation for the hypothesis that one of the two remaining liquidity dimensions – market depth or breadth – could significantly affect stock returns. The thesis's findings therefore suggest the conjecture that different liquidity dimensions have different impacts on stock returns.
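Of the three liquidity measures, the Amihud proxy is the simplest to state: the average of absolute daily return over dollar trading volume. A minimal sketch (the thesis applies it to windows of Euronext data; the inputs below are illustrative):

```python
def amihud_illiquidity(returns, dollar_volumes):
    """Amihud (2002) illiquidity proxy: mean of |r_t| / dollar volume over
    trading days with positive volume. Larger values indicate a larger price
    impact per unit of trading volume, i.e. a less liquid stock.
    """
    ratios = [abs(r) / v for r, v in zip(returns, dollar_volumes) if v > 0]
    return sum(ratios) / len(ratios)
```

In an LCAPM setting this level measure (and its covariances with market returns and market liquidity) would then enter the cross-sectional pricing regressions.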
Abstract:
Potential impacts of electrical capacity market design on capacity mobility and end use customer pricing are analyzed. Market rules and historical evolution are summarized to provide a background for the analysis. The summarized rules are then examined for impacts on capacity mobility. A summary of the aspects of successful capacity markets is provided. Two United States market regions are chosen for analysis based upon their market history and proximity to each other. The MISO region is chosen due to recent developments in capacity market mechanisms. The PJM region neighbors the MISO region and is similar in size and makeup. The PJM region has had a capacity market mechanism for over a decade and allows for a controlled comparison of the MISO region’s developments. Capacity rules are found to have an impact on the mobility of capacity between regions. Regulatory restrictions and financial penalties for the movement of capacity between regions are found which effectively hinder such mobility. Capacity market evolution timelines are formed from the historical evolution previously summarized and compared to historical pricing to inspect for a correlation. No direct and immediate impact on end use customer pricing was found due to capacity market design. The components of end use customer pricing are briefly examined.
Abstract:
Nitric oxide (NO) donors produce NO-related activity when applied to biological systems. Among its diverse functions, NO has been implicated in vascular smooth muscle relaxation. Despite the great importance of NO in biological systems, its pharmacological and physiological study has been limited by its high reactivity and short half-life. In this review we focus on our recent investigations of nitrosyl ruthenium complexes as NO-delivery agents and their effects on vascular smooth muscle cell relaxation. The high affinity of ruthenium for NO is a marked feature of its chemistry. The main signaling pathway responsible for the vascular relaxation induced by NO involves the activation of soluble guanylyl cyclase, with subsequent accumulation of cGMP and activation of cGMP-dependent protein kinase. This in turn can activate several proteins such as K+ channels, as well as induce vasodilatation through a decrease in cytosolic Ca2+. Oxidative stress and the associated oxidative damage are mediators of vascular damage in several cardiovascular diseases, including hypertension. Increased production of the superoxide anion (O2-) by the vascular wall has been observed in different animal models of hypertension. Vascular relaxation to the endogenous NO-related response, or to NO released from NO deliverers, is impaired in vessels from renal hypertensive (2K-1C) rats. A growing body of evidence supports the possibility that increased NO inactivation by excess O2- may account for the decreased NO bioavailability and vascular dysfunction in hypertension.
Abstract:
To investigate signal regulation models of gastric cancer, databases and the literature were used to construct the human signaling network. Topological characteristics of the network were analyzed with Cytoscape. After marking gastric cancer-related genes extracted from the CancerResource, GeneRIF, and COSMIC databases, the FANMOD software was used to mine gastric cancer-related three-vertex motifs in the network. The significant motif difference method was adopted to identify significantly different motifs between the normal and cancer states. Finally, we conducted a series of analyses of the significantly different motifs, including gene ontology, functional annotation of genes, and model classification. A human signaling network was constructed, with 1643 nodes and 5089 regulating interactions. The network was found to have the characteristics of other biological networks. There were 57,942 motifs marked with gastric cancer-related genes out of a total of 69,492 motifs, and 264 motifs were selected as significantly different motifs by calculating significant motif difference (SMD) scores. Genes in significantly different motifs were mainly enriched in functions associated with cancer genesis, such as regulation of cell death, protein amino acid phosphorylation, and intracellular signaling cascades. The top five significantly different motifs were mainly of the cascade and positive feedback types. Almost all genes in the five motifs were cancer related, including EPOR, MAPK14, BCL2L1, KRT18, PTPN6, CASP3, TGFBR2, AR, and CASP7. The development of cancer might be curbed by inhibiting signal transduction upstream and downstream of the selected motifs.
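A three-vertex motif search of the kind FANMOD performs can be illustrated with one classic pattern, the feed-forward loop (A→B, B→C, A→C); the edge list below is a toy example, not the actual signaling network:

```python
def feed_forward_loops(edges):
    """Count feed-forward loop motifs (a->b, b->c, a->c) in a directed graph
    given as a list of (source, target) edges."""
    succ = {}
    for a, b in edges:
        succ.setdefault(a, set()).add(b)
    count = 0
    for a, bs in succ.items():
        for b in bs:
            for c in succ.get(b, ()):
                # require a shortcut edge a->c and three distinct nodes
                if c in succ[a] and len({a, b, c}) == 3:
                    count += 1
    return count
```

A full motif analysis would count every three-node subgraph type and compare the counts against randomized networks; an SMD-style score then contrasts motif frequencies between the normal-marked and cancer-marked networks.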
Abstract:
The aim of this thesis is to price options on equity index futures with an application to standard options on S&P 500 futures traded on the Chicago Mercantile Exchange. Our methodology is based on stochastic dynamic programming, which can accommodate European as well as American options. The model accommodates dividends from the underlying asset. It also captures the optimal exercise strategy and the fair value of the option. This approach is an alternative to available numerical pricing methods such as binomial trees, finite differences, and ad-hoc numerical approximation techniques. Our numerical and empirical investigations demonstrate convergence, robustness, and efficiency. We use this methodology to value exchange-listed options. The European option premiums thus obtained are compared to Black's closed-form formula. They are accurate to four digits. The American option premiums also have a similar level of accuracy compared to premiums obtained using finite differences and binomial trees with a large number of time steps. The proposed model accounts for deterministic, seasonally varying dividend yield. In pricing futures options, we discover that what matters is the sum of the dividend yields over the life of the futures contract and not their distribution.
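The backward-recursion idea behind dynamic-programming option valuation can be illustrated with a plain binomial lattice for an American put on a futures price. This is one of the benchmark methods the work compares against, not its stochastic dynamic programming model; the CRR-style parameters and zero risk-neutral drift for a futures underlying are standard, and the inputs below are illustrative:

```python
import math

def american_futures_put(F, K, r, sigma, T, steps):
    """Price an American put on a futures price by backward recursion on a
    binomial lattice. At each node the value is the max of the discounted
    continuation value and immediate exercise.
    """
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (1.0 - d) / (u - d)          # zero drift for a futures underlying
    disc = math.exp(-r * dt)
    # terminal payoffs at expiry
    values = [max(K - F * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]
    for n in range(steps - 1, -1, -1):
        values = [
            max(disc * (p * values[j + 1] + (1 - p) * values[j]),  # continue
                K - F * u**j * d**(n - j))                          # exercise
            for j in range(n + 1)
        ]
    return values[0]
```

With F = K, the European limit of this lattice converges to Black's closed-form futures-option price as the number of steps grows, which is the comparison the abstract describes.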
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which includes normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to shed more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is rejected less frequently once we allow for the possibility of non-normal errors.
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH, and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist of combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since non-Gaussian-based tests are not pivotal, we apply the "maximized MC" (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test's significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926-1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
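The core of the MC test procedure can be sketched in a few lines: simulate the test statistic N times under the null hypothesis and place the observed value within that reference distribution, with N+1 in the denominator so the test has exact level when (N+1)·α is an integer. The statistic-simulator below is a placeholder, not any of the paper's actual statistics:

```python
import random

def mc_pvalue(observed, simulate_stat, n_rep=99, seed=0):
    """Monte Carlo p-value: p = (1 + #{simulated >= observed}) / (n_rep + 1).

    simulate_stat: callable taking an rng and returning one draw of the test
    statistic under the null (a placeholder interface for this sketch).
    """
    rng = random.Random(seed)
    exceed = sum(1 for _ in range(n_rep) if simulate_stat(rng) >= observed)
    return (1 + exceed) / (n_rep + 1)
```

The MMC extension the paper applies would maximize this p-value over the nuisance parameters of the null distribution rather than fixing them.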
Abstract:
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared to a simulation-based estimate of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926-1995.
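The simulation-based reference idea, comparing an empirical moment statistic to its simulated expected value under the hypothesized distribution, can be sketched in the univariate Gaussian case. The paper works with multivariate standardized residuals; everything below is a simplified univariate stand-in:

```python
import random
import statistics

def excess_kurtosis(xs):
    """Sample excess kurtosis: fourth standardized moment minus 3."""
    m = statistics.fmean(xs)
    s2 = statistics.fmean([(x - m) ** 2 for x in xs])
    return statistics.fmean([(x - m) ** 4 for x in xs]) / s2 ** 2 - 3.0

def null_reference(n, n_rep=200, seed=1):
    """Simulated expected value of the kurtosis statistic for samples of
    size n under i.i.d. Gaussian errors (the hypothesized distribution)."""
    rng = random.Random(seed)
    draws = [excess_kurtosis([rng.gauss(0.0, 1.0) for _ in range(n)])
             for _ in range(n_rep)]
    return statistics.fmean(draws)
```

A test would then compare the observed statistic against the spread of the simulated draws rather than against an asymptotic approximation, which is what gives the procedures their finite-sample validity.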