Abstract:
The aim of this thesis is to examine whether pricing anomalies exist in the Finnish stock market by comparing the performance of quantile portfolios formed on the basis of individual valuation ratios, composite value measures, or combined value and momentum indicators. All the research papers included in the thesis show evidence of value anomalies in the Finnish stock market. In the first paper, the sample of stocks over the 1991-2006 period is divided into quintile portfolios based on four individual valuation ratios (i.e., E/P, EBITDA/EV, B/P, and S/P) and three hybrids of them (i.e., composite value measures). The results show the superiority of composite value measures as a selection criterion for value stocks, particularly when EBITDA/EV is employed as the earnings multiple. The main focus of the second paper is on the impact of holding period length on the performance of value strategies. As an extension to the first paper, two more individual ratios (i.e., CF/P and D/P) are included in the comparative analysis. The sample of stocks over the 1993-2008 period is divided into tercile portfolios based on six individual valuation ratios and three hybrids of them. The use of either the dividend yield criterion or one of the three composite value measures examined results in the best value portfolio performance according to all performance metrics used. Parallel to the findings of many international studies, our results from performance comparisons indicate that, for the sample data employed, annual reformation of portfolios is not necessarily optimal for maximizing the gain from the value premium. Instead, the value investor may extend the holding period up to five years without any decrease in long-term portfolio performance. The same holds for the results of the third paper, which examines the applicability of the data envelopment analysis (DEA) method in discriminating undervalued stocks from overvalued ones. The fourth paper examines the added value of combining price momentum with various value strategies. Taking account of price momentum improves the performance of value portfolios in most cases. The performance improvement is greatest for value portfolios formed on the basis of the 3-composite value measure that consists of the D/P, B/P, and EBITDA/EV ratios. The risk-adjusted performance can be enhanced further by following a 130/30 long-short strategy in which the long position in value winner stocks is leveraged by 30 percent while glamour loser stocks are simultaneously sold short by the same amount. The average return of the long-short position proved to be more than double the stock market average, coupled with a decrease in volatility. The fifth paper offers a new approach to combining value and momentum indicators into a single portfolio-formation criterion using different variants of DEA models. The results throughout the 1994-2010 sample period show that the top-tercile portfolios outperform both the market portfolio and the corresponding bottom-tercile portfolios. In addition, the middle-tercile portfolios also outperform the comparable bottom-tercile portfolios when DEA models are used as a basis for stock classification criteria. To my knowledge, such strong performance differences have not been reported in earlier peer-reviewed studies that have employed a comparable quantile approach to dividing stocks into portfolios.
Consistent with the previous literature, the division of the full sample period into bullish and bearish periods reveals that the top-quantile DEA portfolios lose far less of their value during bearish conditions than do the corresponding bottom portfolios. The sixth paper extends the sample period employed in the fourth paper by one year (i.e., 1993-2009), covering also the first years of the recent financial crisis. It contributes to the fourth paper by examining the impact of stock market conditions on the main results. Consistent with the fifth paper, value portfolios lose much less of their value during bearish conditions than do stocks on average. The inclusion of a momentum criterion adds some value for an investor during bullish conditions, but this added value turns negative during bearish conditions. During bear market periods, some of the value loser portfolios perform even better than their value winner counterparts. Furthermore, the results show that the recent financial crisis has reduced the added value of using combinations of momentum and value indicators as portfolio formation criteria. However, since stock markets have historically been bullish more often than bearish, the combination of the value and momentum criteria has paid off for the investor despite the fact that its added value during bearish periods is, on average, negative.
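To make the portfolio-formation step concrete, the following is a minimal sketch of sorting stocks into terciles on the 3-composite value measure (D/P, B/P, and EBITDA/EV combined); the pandas DataFrame and its column names are illustrative assumptions, not the actual data or code used in the thesis.

# Hedged sketch: tercile portfolios on a 3-composite value measure.
# Assumes a hypothetical DataFrame `df` with one row per stock and columns
# 'dp', 'bp', 'ebitda_ev' (D/P, B/P, EBITDA/EV); names are illustrative.
import pandas as pd

def composite_value_terciles(df: pd.DataFrame) -> pd.DataFrame:
    # Percentile-rank each ratio so the three scales are comparable;
    # a higher ratio means a cheaper (more "value") stock.
    ranked = df[["dp", "bp", "ebitda_ev"]].rank(pct=True)
    out = df.copy()
    out["composite"] = ranked.mean(axis=1)  # equal-weighted composite score
    out["portfolio"] = pd.qcut(out["composite"], 3,
                               labels=["glamour", "middle", "value"])
    return out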
Abstract:
The thesis examines the profitability of dual moving average crossover (DMAC) trading rules in the Finnish stock market over the 1996-2012 period. It contributes to the existing technical analysis literature by comparing, for the first time, the performance of DMAC strategies based on trading portfolios of individual stocks to the performance of index trading strategies based on trading the index (OMX Helsinki 25) that consists of the same stocks. In addition, market frictions, including transaction costs and taxes, are taken into account, and the results are reported from both the institutional and the individual investor's perspective. The performance characteristics of DMAC rules are evaluated by simulating 19,900 different trading strategies in total for two non-overlapping 8-year sub-periods and by decomposing the full-sample-period performance of DMAC trading strategies into distinct bullish- and bearish-period performances. The results show that the best DMAC rules have predictive power over future price trends and are able to outperform the buy-and-hold strategy. Although the performance of the DMAC strategies is highly dependent on the combination of moving average lengths, the best DMAC rules of the first sub-period also performed well during the latter sub-period in the case of individual stock trading strategies. According to the results, the outperformance of DMAC trading rules over the buy-and-hold strategy is mostly attributable to their superiority during bearish periods and, particularly, during stock market crashes.
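As an illustration of the trading rule itself, here is a minimal DMAC sketch under stated assumptions: a pandas Series of daily closing prices, and placeholder moving-average lengths standing in for the 19,900 combinations simulated in the thesis.

# Hedged sketch of a dual moving average crossover (DMAC) rule.
import pandas as pd

def dmac_signal(close: pd.Series, short: int = 50, long: int = 200) -> pd.Series:
    fast = close.rolling(short).mean()
    slow = close.rolling(long).mean()
    # 1 = in the market while the short MA is above the long MA, else 0
    return (fast > slow).astype(int)

def strategy_returns(close: pd.Series, short: int, long: int) -> pd.Series:
    # Act on the signal with a one-day lag to avoid look-ahead bias
    position = dmac_signal(close, short, long).shift(1)
    return close.pct_change() * position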
Abstract:
The study investigates the organisational learning and knowledge acquisition of wood-based prefabricated building manufacturers. This particular group of case companies was chosen because their management and employees generally have a strong manufacturing and engineering background, while the housing sector is characterised by national norms and regulations as well as local building styles. Against this setting, the study investigated how the case companies develop organisational learning capabilities and acquire and transfer knowledge for their internationalisation. The theoretical framework of the study is the knowledge-based conceptualisation of internationalisation, which combines the traditional internationalisation process perspective and the international new venture perspective on the basis of their commonalities in the knowledge-based view of the firm. Different theories of internationalisation, including the network perspective, were outlined, and a framework on organisational learning and knowledge acquisition was established. The empirical research followed a qualitative approach, deploying a multiple-case study with five case companies from Austria, Finland and Germany. The study describes the development of the wood-based prefabricated building industry and of the case companies, and compares the motives, facilitators and challenges for foreign expansion as well as the companies' internationalisation approaches. Different methods by which companies facilitate knowledge exchange or learn about new markets are also outlined. Experience, market knowledge and personal contacts are considered essential for the internationalisation process. The major finding of the study is that it is not necessary to acquire market knowledge internally in a slow process, as proposed by the Uppsala model. In four cases, the companies acquired knowledge through symbiotic relations with local business partners: the building manufacturers contribute their design and production capabilities, and in return their local partners provide them with knowledge about the market and local regulations while managing the sales and construction operations. Thus, the study provides strong evidence for the propositions of the network perspective. One case company developed the knowledge internally in a gradual process: it entered the market sequentially with several business lines of increasing complexity. In both of the observed strategies, single-loop and double-loop learning processes occurred.
Abstract:
The behavioural finance literature expects systematic and significant deviations from efficiency to persist in securities markets due to the behavioural and cognitive biases of investors. Behavioural models attempt to explain the coexistence of intermediate-term momentum and long-term reversals in stock returns through systematic violations of rational investor behaviour. The study investigates the anchoring bias of investors and the profitability of the 52-week high momentum strategy (henceforth GH). The relatively highly volatile OMX Helsinki stock exchange is a suitable market for examining the momentum effect, since in times of market turbulence international investors tend to be the first to liquidate their positions in the most distant securities markets. Empirical data is collected from Thomson Reuters Datastream and the OMX Nordic website. The objective of the study is to provide a thorough analysis by formulating a self-financing GH momentum portfolio. First, the seasonality of the strategy is examined by taking the January effect into account and investigating long-term abnormal returns. The results indicate that the GH strategy suffers significantly negative returns in January but is not prone to reversals in the long term. Then, the predictive proxies of momentum returns are investigated in terms of acquisition prices and 52-week high statistics as anchors. The results show that acquisition prices do not have explanatory power over the GH strategy's abnormal returns. Finally, the efficacy of the GH strategy is examined after taking transaction costs into account, finding that the robust abnormal returns remain statistically significant despite the transaction costs. In conclusion, the relative distance between a stock's current price and its 52-week high explains the profits of momentum investing to a high degree. The results indicate that intermediate-term momentum and long-term reversals are separate phenomena. This presents a challenge to current behavioural theories, which model these aspects of stock returns as sequential components of how securities markets respond to relevant information.
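For reference, the GH ranking measure is the ratio of a stock's current price to its trailing 52-week high; the sketch below computes it under the assumption of a pandas DataFrame of daily closes, one column per stock (roughly 252 trading days per year).

# Hedged sketch of the 52-week high (GH) ranking measure.
import pandas as pd

def gh_ranking(prices: pd.DataFrame) -> pd.Series:
    """prices: daily closes, one column per stock; names are illustrative."""
    high_52w = prices.rolling(252).max()   # trailing 52-week high
    nearness = prices / high_52w           # 1.0 = at the 52-week high
    return nearness.iloc[-1]               # cross-section on the formation date

# A self-financing GH portfolio would buy the stocks ranked highest on this
# measure and short those ranked lowest.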
Abstract:
In an environment of ever-changing customer needs, technologies and competitors, the survival of a company depends on how well it researches, develops and introduces new products to the market. The need to develop new products stems from many factors: globalization and international competition on a worldwide scale, scientific advances and the development of production, and changes in consumer preferences and behavior. This study focuses on a company in the dairy products industry. It aims to define the role of the product innovation launch strategy within an overall enterprise strategy and to select the optimal combination of its marketing tools. The main purpose of the study is to determine the place and role of innovative marketing in the innovation process, and to determine launch and positioning strategies within the general concept of an innovative product. The object of the study is a Russian enterprise that aims to achieve competitive advantage through the continuous production of new products, upgrading existing ones and improving innovation management practices. The research showed that a differentiation strategy is suitable for launching the dairy product innovation to the market.
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis, including survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data: the Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms nor the classical assumptions about measurement errors turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models; low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers developing methods to correct for non-sampling biases in event history data.
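As a simplified illustration of the IPCW idea (the study's actual implementation is more elaborate, with time-varying censoring models), the sketch below weights a Kaplan-Meier estimator by the inverse estimated probability of remaining in the panel; the DataFrame columns and covariates are assumptions.

# Hedged sketch: IPCW-weighted Kaplan-Meier estimation.
# Assumes a hypothetical DataFrame with columns 'duration' (spell length),
# 'event' (1 = spell ended, 0 = censored), 'attrited' (1 = left the panel),
# and covariate columns. Requires scikit-learn and lifelines.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import KaplanMeierFitter

def ipcw_kaplan_meier(df: pd.DataFrame, covariates: list) -> KaplanMeierFitter:
    # Model the probability of remaining under observation (not attriting)
    stay = 1 - df["attrited"]
    model = LogisticRegression().fit(df[covariates], stay)
    p_stay = model.predict_proba(df[covariates])[:, 1]
    weights = 1.0 / p_stay                  # inverse probability of censoring
    kmf = KaplanMeierFitter()
    kmf.fit(df["duration"], event_observed=df["event"], weights=weights)
    return kmf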
Abstract:
Electricity price forecasting has become an important area of research in the aftermath of the worldwide deregulation of the power industry, which launched competitive electricity markets that now embrace all market participants, including generation and retail companies, transmission network providers, and market managers. Based on the needs of the market, a variety of approaches to forecasting day-ahead electricity prices have been proposed over the last decades. However, most of the existing approaches are reasonably effective for prices in the normal range but disregard price spike events, which are caused by a number of complex factors and occur during periods of market stress. In early research, price spikes were truncated before the forecasting model was applied in order to reduce the influence of such observations on the estimation of the model parameters; otherwise, very large forecast errors would be generated on price spike occasions. Electricity price spikes, however, are significant for energy market participants seeking to stay competitive. Accurate price spike forecasting is important for generation companies, to bid strategically into the market and manage their assets optimally; for retail companies, since they cannot pass the spikes on to final customers; and for market managers, to provide better management and planning of the energy market. This doctoral thesis aims at deriving a methodology able to accurately predict not only day-ahead electricity prices within the normal range but also price spikes. The Finnish day-ahead energy market of Nord Pool Spot is selected as the case market, and its structure is studied in detail. It is almost universally agreed in the forecasting literature that no single method is best in every situation; since real-world problems are often complex in nature, no single model is able to capture all the different patterns equally well. Therefore, a hybrid methodology that enhances the modeling capabilities appears to be a productive strategy for practical use when electricity prices are predicted. The proposed price forecasting methodology is a hybrid model applied to price forecasting in the Finnish day-ahead energy market. An iterative search procedure is developed within the methodology to tune the model parameters and select the optimal input set of explanatory variables. The numerical studies show that the proposed methodology is more accurate than all other examined methods recently applied in case studies of energy markets in different countries. The results provide extensive and useful information for participants in the day-ahead energy market, who have limited and uncertain information for price prediction when setting up an optimal short-term operation portfolio. Although the focus of this work is primarily on the Finnish price area of Nord Pool Spot, given the results of this work, it is very likely that the same methodology will give good results when forecasting prices in the energy markets of other countries.
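The thesis's hybrid model is not reproduced here, but the general spike-aware idea can be sketched as a two-stage scheme: a classifier estimates the probability of a spike hour, and separate regressors forecast prices in the normal and spike regimes; all feature and threshold choices below are assumptions.

# Hedged sketch of a generic spike-aware day-ahead price forecaster.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

def fit_hybrid(X: np.ndarray, y: np.ndarray, spike_threshold: float):
    is_spike = y > spike_threshold                     # label spike hours
    clf = GradientBoostingClassifier().fit(X, is_spike)
    reg_normal = GradientBoostingRegressor().fit(X[~is_spike], y[~is_spike])
    reg_spike = GradientBoostingRegressor().fit(X[is_spike], y[is_spike])
    return clf, reg_normal, reg_spike

def predict_hybrid(model, X: np.ndarray) -> np.ndarray:
    clf, reg_normal, reg_spike = model
    p_spike = clf.predict_proba(X)[:, 1]
    # Probability-weighted combination of the two regime forecasts
    return p_spike * reg_spike.predict(X) + (1 - p_spike) * reg_normal.predict(X)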
Abstract:
The capabilities, and thus the design complexity, of VLSI-based embedded systems have increased tremendously in recent years, riding the wave of Moore's law. Time-to-market requirements are also shrinking, posing challenges to designers, who in turn seek to adopt new design methods to increase their productivity. In response to these pressures, modern systems have moved towards on-chip multiprocessing technologies, and new on-chip multiprocessing architectures have emerged to exploit the tremendous advances in fabrication technology. Platform-based design is a possible solution to these challenges. The principle behind the approach is to separate the functionality of an application from the organization and communication architecture of the hardware platform at several levels of abstraction. Existing design methodologies for platform-based design do not provide full automation at every level of the design process, and the co-design of platform-based systems sometimes leads to sub-optimal systems. In addition, the design productivity gap in multiprocessor systems remains a key challenge under existing design methodologies. This thesis addresses these challenges and discusses the creation of a development framework for platform-based system design in the context of the SegBus platform, a distributed communication architecture. The research aims to provide automated procedures for platform design and application mapping. Structural verification support is also featured, ensuring correct-by-design platforms. The solution is based on a model-based process: both the platform and the application are modeled using the Unified Modeling Language. The thesis develops a Domain Specific Language to support platform modeling based on a corresponding UML profile, and Object Constraint Language constraints are used to support structurally correct platform construction. An emulator is introduced to allow performance estimation of the solution that is as accurate as possible at high abstraction levels. VHDL code is automatically generated in the form of "snippets" to be employed in the arbiter modules of the platform, as required by the application. The resulting framework is applied in building an actual design solution for an MP3 stereo audio decoder application.
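Purely as an illustration of template-driven snippet generation (not the framework's actual code generator), the sketch below emits a toy VHDL fragment for an arbiter from a hypothetical device-priority mapping.

# Hedged sketch: template-based generation of a VHDL arbiter "snippet".
# The template, constant, and device names are hypothetical.
ARBITER_TEMPLATE = """\
-- auto-generated priority table for bus segment {segment}
constant PRIORITIES : priority_array := ({priorities});
"""

def emit_arbiter_snippet(segment: int, device_priority: dict) -> str:
    ordered = sorted(device_priority.items(), key=lambda kv: kv[1])
    body = ", ".join(f"{dev} => {prio}" for dev, prio in ordered)
    return ARBITER_TEMPLATE.format(segment=segment, priorities=body)

print(emit_arbiter_snippet(0, {"CTRL": 0, "DEC_LEFT": 1, "DEC_RIGHT": 2}))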
Abstract:
This thesis studies the possibility of using information on insiders' transactions to forecast future stock returns after the implementation of the Sarbanes-Oxley Act in July 2003. Insider transactions between July 2003 and August 2009 are analysed with regression tests to identify the relationships between insiders' transactions and future stock returns. The analysis is complemented with rudimentary bootstrapping procedures to verify the robustness of the findings. The underlying assumption of the thesis is that insiders constantly receive pieces of information that indicate the future performance of the company. They may not be allowed to trade on large and tangible pieces of information, but they can trade on an accumulation of smaller, intangible pieces of information. Based on the analysis, insiders' profits were found not to differ from the returns of a broad stock index. However, their individual transactions were found to be linked to future stock returns. The initial model was found to be unstable, but some of its predictive power could be sacrificed to achieve greater stability. Even after sacrificing some predictive power, the relationship was significant enough to allow external investors to achieve abnormal profits after transaction costs and taxes. The thesis does not go into great detail about the timing of transactions: the delay in publishing insiders' transactions is not taken into account in the calculations, and closed windows are not studied in detail. The potential effects of these phenomena are examined, and they do not cause great changes in the findings. Additionally, the remuneration policy of an insider or a company is not taken into account, even though it most likely affects the trading patterns of insiders. Even with these limitations, the findings offer promising opportunities for investors to improve their investment processes by incorporating additional information from insiders' transactions into their decisions. The findings also raise questions about how insider trading should be regulated. Insiders achieve greater returns than other investors based on superior information; on the other hand, more efficient information transfer could warrant more lenient regulation. The fact that insiders' returns are dominated by the large investment stake they maintain at all times in their own companies also speaks for more leniency. As the Sarbanes-Oxley Act considerably modified the insider trading landscape, this analysis provides information that has not been available before. The thesis also constitutes a thorough analysis of the insider trading phenomenon, which has previously been fragmented across several studies.
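To give a flavour of the regression-plus-bootstrap combination, the sketch below regresses future returns on hypothetical insider-trading measures and bootstraps confidence intervals for the coefficients; the variable names and specification are illustrative assumptions, not the thesis's model.

# Hedged sketch: insider-trading regression with a simple bootstrap.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def insider_regression(df: pd.DataFrame):
    X = sm.add_constant(df[["net_buy_ratio", "trade_size"]])  # assumed columns
    return sm.OLS(df["future_return"], X).fit()

def bootstrap_coefs(df: pd.DataFrame, n_boot: int = 1000, seed: int = 0):
    rng = np.random.default_rng(seed)
    draws = [insider_regression(
                 df.sample(len(df), replace=True,
                           random_state=int(rng.integers(1_000_000_000)))
                   .reset_index(drop=True)
             ).params for _ in range(n_boot)]
    # Rough 95% percentile intervals for each coefficient
    return pd.DataFrame(draws).quantile([0.025, 0.975])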
Abstract:
In recent decades, business intelligence (BI) has gained momentum in real-world practice. At the same time, business intelligence has evolved into an important research subject of Information Systems (IS) within the decision support domain. Today's growing competitive pressure in business has increased the need for real-time analytics, i.e., so-called real-time BI or operational BI. This is especially true of the electricity production, transmission, distribution, and retail business, since the laws of physics dictate that electricity as a commodity is nearly impossible to store economically, and demand and supply therefore need to be constantly in balance. The current power sector is subject to complex changes, innovation opportunities, and technical and regulatory constraints, ranging from the low-carbon transition, the development of renewable energy sources (RES), and market design to new technologies (e.g., smart metering, smart grids, and electric vehicles) and new independent power producers (e.g., commercial buildings or households with rooftop solar panel installations, a.k.a. distributed generation). Among these, the ongoing deployment of Advanced Metering Infrastructure (AMI) has a profound impact on the electricity retail market: from the viewpoint of BI research, AMI is enabling real-time or near-real-time analytics in the electricity retail business. Following the Design Science Research (DSR) paradigm in the IS field, this research presents four aspects of BI for efficient pricing in a competitive electricity retail market: (i) visual-data-mining-based descriptive analytics, namely electricity consumption profiling, for pricing decision-making support; (ii) a real-time BI enterprise architecture for enhancing management's capacity for real-time decision-making; (iii) prescriptive analytics through agent-based modeling for price-responsive demand simulation; and (iv) a visual data-mining application for electricity distribution benchmarking. Even though this study is written from the perspective of the European electricity industry, particularly Finland and Estonia, the BI approaches investigated can: (i) provide managerial implications to support the utility's pricing decision-making; (ii) add empirical knowledge to the landscape of BI research; and (iii) be transferred to a wide body of practice in the power sector and the BI research community.
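As a small illustration of aspect (i), consumption profiling can be sketched as clustering customers' normalized daily AMI load curves; the input matrix and the number of profiles below are assumptions.

# Hedged sketch: electricity consumption profiling via k-means clustering.
# `load_curves` is an assumed (customers x 24) matrix of average hourly loads.
import numpy as np
from sklearn.cluster import KMeans

def profile_customers(load_curves: np.ndarray, n_profiles: int = 4):
    # Normalize each curve so clusters capture shape rather than volume
    totals = load_curves.sum(axis=1, keepdims=True)
    shapes = load_curves / np.where(totals == 0, 1.0, totals)
    km = KMeans(n_clusters=n_profiles, n_init=10, random_state=0).fit(shapes)
    return km.labels_, km.cluster_centers_  # profile per customer + typical curves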
Abstract:
This thesis investigates the pricing of liquidity in the French stock market. The study covers 835 ordinary shares traded on Euronext Paris over the 1996-2014 period. The author utilizes the Liquidity-Adjusted Capital Asset Pricing Model (LCAPM) developed by Acharya and Pedersen (2005) to test whether the liquidity level and liquidity risks significantly affect stock returns. Three different liquidity measures – Amihud, FHT, and PQS – are incorporated into the model to examine whether the results differ between them. It appears that the findings largely depend on the liquidity measure used. In general, the results provide more evidence of an insignificant influence of the liquidity level and liquidity risks, as well as market risk, on stock returns. A similar conclusion was reported earlier by Lee (2011) for several regions, including France. This finding of the thesis, however, is not consistent across all the liquidity measures. Nevertheless, the differences in results between the measures provide new insight for the existing literature on this topic. The Amihud-based findings might indicate that market resiliency is not priced in the French stock market, while the contradicting results from FHT and PQS provide some foundation for the hypothesis that one of the two remaining liquidity dimensions – market depth or breadth – could significantly affect stock returns. The thesis's findings therefore suggest the conjecture that different liquidity dimensions have different impacts on stock returns.
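For concreteness, one of the three proxies, the Amihud (2002) measure, is the time-series average of absolute daily return over trading volume; here is a minimal sketch under assumed column names.

# Hedged sketch of the Amihud illiquidity proxy for one stock.
import pandas as pd

def amihud_illiquidity(daily: pd.DataFrame) -> float:
    """daily: assumed columns 'ret' (daily return) and 'volume' (trading
    volume in currency units); the measure is often scaled by 1e6."""
    traded = daily[daily["volume"] > 0]          # skip zero-volume days
    return (traded["ret"].abs() / traded["volume"]).mean()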
Abstract:
The objective of this project was to introduce a new software product to the pulp industry, a new market for the case company. An optimization-based scheduling tool was developed to allow pulp operations to better control their production processes and improve both production efficiency and stability. Both the work here and earlier research indicate a potential for savings of around 1-5%. All the supporting data is already available from distributed control systems, data historians and other existing sources. The pulp mill model, together with the scheduler, allows what-if analyses of the impacts and timely feasibility of various external actions, such as the planned maintenance of any particular mill operation. The visibility gained from the model also proves to be a real benefit. The aim is to satisfy demand and gain extra profit while achieving the required customer service level. Research effort was put into understanding both the minimum features needed to satisfy the scheduling requirements of the industry and whether a market exists at all. A qualitative study was constructed to identify both the competitive situation and the requirements versus gaps in the market. It becomes clear that there is no such system on the market today and that there is room to improve the target market's overall process efficiency through such a planning tool. The thesis also provides a better overall understanding of the different processes in this particular industry for the case company.
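The tool's actual model is far richer, but the core scheduling idea can be sketched as a small linear program: choose per-period production to meet demand at minimum cost under a capacity limit. All numbers below are invented for illustration.

# Hedged toy sketch of optimization-based production scheduling.
import numpy as np
from scipy.optimize import linprog

cost = np.array([10.0, 12.0, 9.0])   # assumed cost per tonne in three periods
capacity = 100.0                     # assumed tonnes per period
total_demand = 250.0                 # assumed tonnes over the horizon

res = linprog(c=cost,
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[total_demand],  # meet demand exactly
              bounds=[(0.0, capacity)] * 3)                 # capacity limits
print(res.x)  # optimal tonnes per period; most is produced in the cheap period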
Abstract:
The aim of this research is to examine pricing anomalies in the U.S. stock market over the 1986-2011 period. The sample of stocks is divided into decile portfolios based on seven individual valuation ratios (E/P, B/P, S/P, EBIT/EV, EBITDA/EV, D/P, and CE/P) and price momentum in order to investigate the efficiency of individual valuation ratios and their combinations as portfolio formation criteria. This is the first time in the financial literature that CE/P is employed as a constituent of a composite value measure. The combinations are based on median-scaled composite value measures and the TOPSIS method. During the sample period, value portfolios significantly outperform both the market portfolio and the comparable glamour portfolios. The results show the highest return for the value portfolio based on the combination of the S/P and CE/P ratios. The outcome of this research increases understanding of the suitability of different methodologies for portfolio selection and helps managers take advantage of the results of different methodologies in order to earn above-market returns.
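For reference, TOPSIS ranks alternatives by their relative closeness to an ideal point; the sketch below scores stocks on an assumed (stocks x criteria) matrix in which higher values of every criterion (e.g., S/P, CE/P) are treated as better.

# Hedged sketch of TOPSIS scoring for portfolio formation.
import numpy as np

def topsis_scores(X: np.ndarray, weights: np.ndarray) -> np.ndarray:
    V = X / np.linalg.norm(X, axis=0) * weights   # vector-normalize, then weight
    ideal, anti = V.max(axis=0), V.min(axis=0)    # best and worst on each criterion
    d_pos = np.linalg.norm(V - ideal, axis=1)     # distance to the ideal point
    d_neg = np.linalg.norm(V - anti, axis=1)      # distance to the anti-ideal
    return d_neg / (d_pos + d_neg)                # closer to 1 = stronger value stock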
Abstract:
In the past decade, customer loyalty programs have become very popular, and almost every retail chain seems to have one. Through loyalty programs, companies are able to collect information about customer behavior and use this information in business and marketing management to guide decision-making and resource allocation. The benefits for the loyalty program member are often monetary, which affects the profitability of the loyalty program. Not all loyalty program members are equally profitable: some purchase products at the recommended retail price and some buy only discounted products. If the company spends a similar amount of resources on all members, the customer margin is lower for a customer who buys only discounted products. It is vital for a company to measure the profitability of its members in order to be able to calculate customer value. Several different customer value metrics can be used for this purpose. In recent years, customer lifetime value in particular has received a lot of attention and is seen as superior to other customer value metrics. In this master's thesis, customer lifetime value is applied to the case company's customer loyalty program. The data was collected from the loyalty program's database and represents the year 2012 in the Finnish market. The data was not complete enough to take full advantage of customer lifetime value, and it can be concluded that customer margin should be adopted as a new key performance indicator in order to run the loyalty program's business profitably. Through the customer margin, the company would be able to compute customer lifetime value on a regular basis, enabling efficient resource allocation in marketing.
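A common textbook formulation of customer lifetime value (not necessarily the exact specification used in the thesis) discounts the expected per-period margin by a retention probability and a discount rate, as sketched below with invented numbers.

# Hedged sketch of a standard discounted-margin CLV formula.
def customer_lifetime_value(margin: float, retention: float,
                            discount: float, horizon: int = 10) -> float:
    """margin: expected margin per period; retention: per-period retention
    probability; discount: per-period discount rate. All inputs are assumed."""
    return sum(margin * retention ** t / (1 + discount) ** t
               for t in range(1, horizon + 1))

# Example: 50 EUR annual margin, 80% retention, 10% discount rate
print(round(customer_lifetime_value(50.0, 0.8, 0.1), 2))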
Abstract:
Potential impacts of electrical capacity market design on capacity mobility and end-use customer pricing are analyzed. Market rules and their historical evolution are summarized to provide a background for the analysis, and the summarized rules are then examined for their impacts on capacity mobility. A summary of the aspects of successful capacity markets is provided. Two United States market regions are chosen for analysis based on their market history and proximity to each other. The MISO region is chosen due to recent developments in its capacity market mechanisms. The neighboring PJM region is similar in size and makeup and has had a capacity market mechanism for over a decade, which allows a controlled comparison with the MISO region's developments. Capacity rules are found to have an impact on the mobility of capacity between regions: regulatory restrictions and financial penalties for the movement of capacity between regions are found to effectively hinder such mobility. Capacity market evolution timelines are constructed from the historical evolution previously summarized and compared to historical pricing to check for a correlation. No direct and immediate impact of capacity market design on end-use customer pricing is found. The components of end-use customer pricing are briefly examined.