901 results for G14 - Information and market efficiency
Abstract:
Energy saving has been a stated policy objective of the EU since the 1970s. At present, the 2020 target is a 20% reduction in EU energy consumption compared with current projections for 2020. This is one of the headline targets of the European Energy Strategy 2020, but efforts to achieve it remain slow and insufficient. The aim of this paper is to understand why. Firstly, the paper examines why public measures promoting energy efficiency are needed and what form these measures should optimally take (§ 1). Fortunately, over the last 20 years much research has been done on the famous 'energy efficiency gap' (or 'energy efficiency paradox'), even if more remains to be done. Multiple explanations have been given: market failures, modelling flaws and behavioural obstacles, each encompassing many complex aspects. Several types of instruments can be adopted to encourage energy efficiency: measures guaranteeing the correct pricing of energy are preferred, followed by taxes or tradable white certificates, which in turn are preferred to standards or subsidies. Information programmes are also necessary. Secondly, the paper analyzes the evolution of the different programmes from 2000 onwards (§ 2). This reveals the extreme complexity of the subject, which covers quite diverse topics: buildings, appliances, the public sector, industry and transport. The market for energy efficiency is as diffuse as energy consumption patterns themselves. It is composed of many market actors who demand more efficient provision of energy services and who expect suppliers of the necessary goods and know-how to deliver this greater efficiency. Consumers in this market include individuals, businesses and governments, and market activities cover all energy-consuming sectors of the economy. Additionally, energy efficiency is the perfect example of a competence shared between the EU and the Member States. Lastly, the legal framework has steadily increased in complexity, and despite the successive energy efficiency programmes used to build it, the gap between the target and the results remains. The paper then examines whether Directive 2012/27/EU, adopted to improve the situation, could bring better results. It briefly describes the content of this framework Directive, which accompanies and implements the latest energy efficiency programme (§ 3). Although the Directive is technically complex and maintains non-binding energy efficiency targets, it certainly represents an improvement in several respects. However, it is also saddled with a multiplicity of exemption clauses and interpretative documents (with no binding value) which weaken its provisions. Furthermore, on its own it will achieve only about 17.7% of final energy savings by 2020. The implementation process, which is essential, also remains fairly weak. The paper also gives a glimpse of the various EU instruments for financing energy efficiency projects (§ 4). Though useful, they do not indicate a strong priority. Fourthly, the paper tries to analyze the EU's limited progress so far and gathers a few suggestions for improvement. One thing seems to remain useful: targets, which can be defined in various ways (§ 5). Basically, all this indicates that the EU energy efficiency strategy has so far failed to reach its targets, lacks coherence and remains ambiguous.
In the new Commission's proposals of 22 January 2014, intended to define a new climate/energy package for the period from 2020 to 2030, the approach to energy efficiency remains unclear. This is regrettable. Energy efficiency is the only instrument that allows the EU to reach its three targets simultaneously: sustainability, competitiveness and security. The final conclusion thus appears paradoxical. On the one hand, all existing studies indicate that the decarbonization of the EU economy will be absolutely impossible without some very serious improvements in energy efficiency. On the other hand, in reality energy efficiency has always been treated as a second-tier priority. It is imperative to eliminate this contradiction.
Abstract:
This paper examines the impact of multinational trade accords on the degree of stock market linkage using NAFTA as a case study. Besides liberalizing trade among the U.S., Canada and Mexico, NAFTA has also sought to strengthen linkage among stock markets of these countries. If successful, this could lessen the appeal of asset diversification across the North American region and promote a higher degree of market efficiency. We assess the possible impact of NAFTA on market linkage using cross-correlations, multivariate price cointegrating systems, speed of convergence, and generalized variance decompositions of unexpected stock returns. The evidence proves robust and consistently indicates intensified equity market linkage since the NAFTA accord. The results also suggest that interdependent goods markets in the region are a primary reason behind the stronger equity market linkage observed in the post-NAFTA period. (c) 2005 Elsevier Ltd. All rights reserved.
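A minimal sketch of the kind of linkage diagnostics described above, combining return cross-correlations with a Johansen cointegration test on log price levels. The index column names, the NAFTA cut-off date and the lag choice are illustrative assumptions, not the paper's exact specification.

```python
# Sketch: stock-index linkage diagnostics before and after a policy event,
# in the spirit of the cross-correlations and multivariate price cointegrating
# systems described above. Column names and dates are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def linkage_summary(log_prices: pd.DataFrame) -> dict:
    """Return cross-correlations plus Johansen trace statistics for log price levels."""
    returns = log_prices.diff().dropna()
    cross_corr = returns.corr()                       # pairwise return correlations
    johansen = coint_johansen(log_prices.dropna(), det_order=0, k_ar_diff=2)
    return {
        "cross_correlations": cross_corr,
        "trace_stats": johansen.lr1,                  # trace statistics for r = 0, 1, ...
        "crit_values_95": johansen.cvt[:, 1],         # 95% critical values
    }

# Hypothetical usage: compare pre- and post-NAFTA subsamples.
# prices = pd.read_csv("north_american_indices.csv", index_col=0, parse_dates=True)
# log_p = np.log(prices[["us", "canada", "mexico"]])
# pre = linkage_summary(log_p.loc[:"1993-12-31"])
# post = linkage_summary(log_p.loc["1994-01-01":])
```

Comparing the two subsamples in this way mirrors the before/after logic of the paper: stronger cross-correlations and more significant trace statistics after the accord would point to intensified equity market linkage.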
Abstract:
Non-technical loss (NTL) identification and prediction are important tasks for many utilities. Data from the customer information system (CIS) can be used for NTL analysis. However, the original CIS data must be pre-processed before any detailed NTL analysis can be carried out accurately and efficiently. In this paper, we propose a feature-selection-based method for CIS data pre-processing that extracts the most relevant information for further analysis such as clustering and classification. By removing irrelevant and redundant features, feature selection is an essential step in the data mining process: it finds an optimal subset of features that improves the quality of results through faster processing, higher accuracy and simpler models with fewer features. A detailed feature selection analysis is presented in the paper. Both time-domain and load-shape data are compared in terms of accuracy, consistency and the statistical dependencies between features.
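A hedged sketch of a filter-style feature selection step of the sort proposed here: rank candidate CIS and load-shape features by relevance to the NTL label, then drop near-duplicates. The column names, the mutual-information criterion and the thresholds are assumptions for illustration only.

```python
# Sketch: relevance/redundancy feature selection on pre-processed CIS data.
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

def select_features(X: pd.DataFrame, y: pd.Series, top_k: int = 10,
                    redundancy_cutoff: float = 0.95) -> list[str]:
    # Relevance: mutual information between each feature and the NTL label.
    relevance = pd.Series(mutual_info_classif(X, y, random_state=0), index=X.columns)
    ranked = relevance.sort_values(ascending=False).index

    # Redundancy: greedily skip features nearly collinear with an already-kept one.
    kept: list[str] = []
    corr = X.corr().abs()
    for feature in ranked:
        if all(corr.loc[feature, k] < redundancy_cutoff for k in kept):
            kept.append(feature)
        if len(kept) == top_k:
            break
    return kept

# Hypothetical usage:
# X = pd.read_csv("cis_features.csv")   # e.g. consumption statistics, load-shape descriptors
# y = X.pop("ntl_flag")
# print(select_features(X, y))
```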
Abstract:
This empirical study employs a different methodology to examine the change in wealth associated with mergers and acquisitions (M&As) for US firms. Specifically, we employ the standard CAPM, the Fama-French three-factor model and the Carhart four-factor model within the OLS and GJR-GARCH estimation methods to test the behaviour of cumulative abnormal returns (CARs). Whilst the standard CAPM captures the variability of stock returns with the overall market, the Fama-French factors capture the risk factors that are important to investors. Additionally, augmenting the Fama-French three-factor model with the Carhart momentum factor to generate the four-factor model captures additional pricing elements that may affect stock returns. Traditionally, estimates of abnormal returns (ARs) in M&A situations rely on the standard OLS estimation method. However, standard OLS will provide inefficient estimates of the ARs if the data contain ARCH and asymmetric effects. To mitigate this estimation-efficiency problem, we re-estimated the ARs using the GJR-GARCH estimation method. We find that the results vary with both the choice of model and the estimation method. Beyond these variations, we also tested whether the ARs are affected by the degree of liquidity of the stocks and the size of the firm. We document significant positive post-announcement CARs for target-firm shareholders under both the OLS and GJR-GARCH methods across all three model specifications. However, post-event CARs for acquiring-firm shareholders were insignificant under both estimation methods across the three specifications. The GJR-GARCH method seems to generate larger CARs than the OLS method. Using both market capitalization and trading volume as measures of liquidity and firm size, we observed strong return continuations in medium firms relative to small and large firms for target shareholders. We consistently observed market efficiency in small and large firms. This implies that small and large target firms overreact to new information, resulting in a more efficient market. For acquirer firms, our liquidity measure captures strong return continuations for small firms under the OLS estimates for both the CAPM and the Fama-French three-factor model, whilst under the GJR-GARCH estimates only for the Carhart model. Post-announcement bootstrapped simulated CARs confirmed our earlier results.
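As a rough illustration of the event-study machinery described above, the sketch below estimates Fama-French three-factor loadings by OLS over a pre-event window and cumulates abnormal returns over the event window; a GJR-GARCH variant could be fitted to the same return series with the `arch` package. The window lengths and column names are assumptions, not the authors' exact design.

```python
# Minimal event-study sketch: OLS Fama-French three-factor loadings from an
# estimation window, then cumulative abnormal returns over the event window.
import pandas as pd
import statsmodels.api as sm

def car_ff3(returns: pd.Series, factors: pd.DataFrame,
            estimation: slice, event: slice) -> float:
    """CAR over `event`, using factor loadings estimated over `estimation`."""
    y = returns.loc[estimation] - factors.loc[estimation, "rf"]
    X = sm.add_constant(factors.loc[estimation, ["mkt_rf", "smb", "hml"]])
    fit = sm.OLS(y, X).fit()

    Xe = sm.add_constant(factors.loc[event, ["mkt_rf", "smb", "hml"]], has_constant="add")
    expected = Xe @ fit.params
    abnormal = (returns.loc[event] - factors.loc[event, "rf"]) - expected
    return abnormal.sum()

# Hypothetical usage around an announcement date:
# car = car_ff3(target_ret, ff_factors,
#               estimation=slice("2004-01-02", "2004-12-31"),
#               event=slice("2005-01-10", "2005-01-14"))
```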
Abstract:
Continuing advances in digital image capture and storage are resulting in a proliferation of imagery and associated problems of information overload in image domains. In this work we present a framework that supports image management using an interactive approach that captures and reuses task-based contextual information. Our framework models the relationship between images and domain tasks they support by monitoring the interactive manipulation and annotation of task-relevant imagery. During image analysis, interactions are captured and a task context is dynamically constructed so that human expertise, proficiency and knowledge can be leveraged to support other users in carrying out similar domain tasks using case-based reasoning techniques. In this article we present our framework for capturing task context and describe how we have implemented the framework as two image retrieval applications in the geo-spatial and medical domains. We present an evaluation that tests the efficiency of our algorithms for retrieving image context information and the effectiveness of the framework for carrying out goal-directed image tasks. © 2010 Springer Science+Business Media, LLC.
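A speculative sketch of the case-based retrieval step such a framework might rely on: each captured task context is encoded as a feature vector, and the most similar stored cases are returned to support the current task. The encoding and the cosine-similarity metric are assumptions rather than the authors' implementation.

```python
# Sketch: retrieve the stored task contexts most similar to the current one,
# as a stand-in for the case-based reasoning step described above.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def retrieve_similar_cases(case_vectors: np.ndarray, query: np.ndarray, k: int = 5):
    """Return indices of the k stored task-context vectors most similar to the query."""
    nn = NearestNeighbors(n_neighbors=k, metric="cosine").fit(case_vectors)
    _, idx = nn.kneighbors(query.reshape(1, -1))
    return idx[0]

# Hypothetical usage: vectors derived from annotations and interaction logs.
# similar = retrieve_similar_cases(stored_contexts, current_context)
```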
Abstract:
In this paper we examine the impact that the new trading system SETSmm had on market quality measures such as firm value, liquidity and pricing efficiency. This system was introduced for mid-cap securities on the London Stock Exchange in 2003. We show that there is a small SETSmm return premium associated with the announcement that securities are to migrate to the new trading system. We find that migration to SETSmm also improves liquidity and pricing efficiency, and that these changes are related to the return premium. We also find that these gains are stronger for firms with high pre-SETSmm liquidity and weaker for firms with low pre-SETSmm liquidity.
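Two commonly used proxies for the market-quality dimensions discussed here are sketched below: the Amihud illiquidity ratio for liquidity and a simple variance ratio for pricing efficiency. These stand in for, and may differ from, the paper's own measures; the dates and column names are illustrative.

```python
# Sketch: illustrative liquidity and pricing-efficiency proxies for a SETSmm stock.
import numpy as np
import pandas as pd

def amihud_illiquidity(returns: pd.Series, traded_value: pd.Series) -> float:
    """Average |return| per unit of traded value; higher means less liquid."""
    return (returns.abs() / traded_value).mean()

def variance_ratio(returns: pd.Series, q: int = 5) -> float:
    """Var of q-period returns over q times var of 1-period returns; 1 under a random walk."""
    rq = returns.rolling(q).sum().dropna()
    return rq.var() / (q * returns.var())

# Hypothetical usage: compare pre- and post-migration windows.
# pre, post = daily.loc[:"2003-10-31"], daily.loc["2003-11-03":]
# print(amihud_illiquidity(pre["ret"], pre["value"]), amihud_illiquidity(post["ret"], post["value"]))
# print(variance_ratio(pre["ret"]), variance_ratio(post["ret"]))
```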
Abstract:
This paper studies the impact that a change from a dealer system to a market-maker supported auction system has on market quality. We study the impact that the introduction of SETSmm at the London Stock Exchange had on firm value, price efficiency and liquidity. We discover a small SETSmm return premium associated with the announcement that securities are to migrate to the new trading system. Moreover, securities that migrate to SETSmm are characterized by improvements to liquidity and pricing efficiency. We find that these changes are related to the return premium.
Abstract:
In this paper we examine the impact that the new trading system SETSmm had on market quality measures such as firm value, liquidity and pricing efficiency. This system was introduced for mid-cap securities on the London Stock Exchange in 2003. We show that there is a small SETSmm return premium associated with the announcement that securities are to migrate to the new trading system. We find that migration to SETSmm also improves liquidity and pricing efficiency, and that these changes are related to the return premium. We also find that these gains are stronger for firms with high pre-SETSmm liquidity and weaker for firms with low pre-SETSmm liquidity. © 2013 John Wiley & Sons Ltd.
Abstract:
The UK government aims to achieve an 80% reduction in CO2 emissions by 2050, which requires collective effort across all UK industry sectors. In particular, the housing sector has large potential to contribute to this aim: it alone accounts for 27% of total UK CO2 emissions, and 87% of the housing responsible for those emissions will still be standing in 2050. It is therefore essential to improve the energy efficiency of the existing housing stock built to low energy efficiency standards. To do so, whole houses need to be refurbished in a sustainable way that considers the lifetime financial and environmental impacts of the refurbished house. However, the current refurbishment process struggles to generate financially and environmentally affordable solutions because refurbishment practice is highly fragmented and the construction industry lacks knowledge and skills in whole-house refurbishment. To generate an affordable refurbishment solution, diverse information on the costs and environmental impacts of refurbishment measures and materials must be collected and integrated in the right sequence throughout the refurbishment project life cycle and shared among key project stakeholders. Consequently, researchers are increasingly studying how Building Information Modelling (BIM) can tackle these problems, because BIM can help construction professionals manage projects collaboratively by integrating diverse information, and determine the best refurbishment solution among alternatives by calculating its life cycle costs and lifetime CO2 performance. Despite these capabilities, BIM adoption in the housing sector is low, at 25%, and the use of BIM for housing refurbishment projects has rarely been studied. This research therefore aims to develop a BIM framework for formulating a financially and environmentally affordable whole-house refurbishment solution based simultaneously on the Life Cycle Costing (LCC) and Life Cycle Assessment (LCA) methods. To achieve this aim, a BIM feasibility study was conducted as a pilot to examine whether BIM is suitable for housing refurbishment, and a BIM framework was developed on the basis of grounded theory because there was no precedent research. The framework was then examined through a hypothetical case study using BIM input data collected from a questionnaire survey of homeowners' preferences for housing refurbishment. Finally, the BIM framework was validated with academics and professionals, who were given the framework and a refurbishment solution formulated through it on the basis of the LCC and LCA studies. As a result, BIM was identified as suitable for housing refurbishment as a management tool, and the development of the BIM framework was found to be timely. The BIM framework, with seven project stages, was developed to formulate an affordable refurbishment solution. Through the case study, the Building Regulations standard was identified as the most affordable energy efficiency standard, giving the best LCC and LCA results when applied to a whole-house refurbishment solution.
In addition, the Fabric Energy Efficiency Standard (FEES) is recommended when customers are willing to adopt a higher energy standard, and up to 60% of CO2 emissions can be cut through whole-house fabric refurbishment to the FEES. Furthermore, limitations and challenges to fully utilising the BIM framework for housing refurbishment were revealed, such as a lack of BIM objects with proper cost and environmental information, limited interoperability between different BIM software packages, and limited LCC and LCA datasets within BIM systems. Finally, the BIM framework was validated as suitable for housing refurbishment projects, and reviewers commented that the framework could be more practical if a dedicated BIM library for housing refurbishment with proper LCC and LCA datasets were developed. This research is expected to provide a systematic way of formulating a refurbishment solution using BIM, and to become a basis for further research on BIM for the housing sector addressing the current limitations and challenges. Future research should enhance the BIM framework by developing a more detailed process map and BIM objects with proper LCC and LCA information.
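As a rough illustration of the LCC calculation underpinning the framework, the sketch below discounts the capital, energy and maintenance cash flows of a refurbishment option to a present value so that alternatives (for example, a Building Regulations package versus a FEES package) can be ranked. The discount rate, lifetime and cost figures are placeholders, not values from the thesis.

```python
# Sketch: simple life cycle cost (LCC) of a whole-house refurbishment option,
# expressed as capital cost plus discounted annual running costs.
def life_cycle_cost(capital: float, annual_energy: float, annual_maintenance: float,
                    years: int = 30, discount_rate: float = 0.035) -> float:
    running = sum((annual_energy + annual_maintenance) / (1 + discount_rate) ** t
                  for t in range(1, years + 1))
    return capital + running

# Hypothetical comparison of two refurbishment standards:
# print(life_cycle_cost(capital=28_000, annual_energy=900, annual_maintenance=250))   # Building Regs
# print(life_cycle_cost(capital=41_000, annual_energy=550, annual_maintenance=250))   # FEES
```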
Abstract:
The information domain is a recognised sphere for the influence, ownership, and control of information and its specifications, format, exploitation and explanation (Thompson, 1967). The article presents a description of the financial information domain issues related to the organisation and operation of a stock market. We review the strategic, institutional and standards dimensions of the stock market information domain in relation to current semantic web knowledge, and consider how and whether this could be used in modern web-based stock market information systems to provide the quality of information that their stakeholders want. The analysis is based on the FINE model (Blanas, 2003) and leads to a number of questions for future research.
Abstract:
Performance analysis has become a vital part of management practice in the banking industry. There are numerous applications using DEA models to estimate efficiency in banking, and most of them assume that inputs and outputs are known with absolute precision. Here, we propose new Fuzzy-DEA α-level models to assess the underlying uncertainty. Further, bootstrap truncated regressions with fixed factors are used to measure the impact of each model on the efficiency scores and to identify the most relevant contextual variables affecting efficiency. The proposed models are demonstrated with an application to Mozambican banks. Findings reveal that fuzziness is predominant over randomness in interpreting the results. In addition, fuzziness can be used by decision-makers to identify missing variables and to help interpret the results. The price of labor, the price of capital and market share were found to be significant factors in measuring bank efficiency. Managerial implications are addressed.
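For orientation, the sketch below solves the crisp input-oriented CCR programme that the fuzzy α-level models generalise; an α-level implementation would solve programmes of this form at the lower and upper bounds of each α-cut of the fuzzy inputs and outputs. The data shapes and variable names are illustrative.

```python
# Sketch: crisp input-oriented CCR DEA efficiency via linear programming.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X: np.ndarray, Y: np.ndarray, j0: int) -> float:
    """Input-oriented CCR efficiency of unit j0. X: (m, n) inputs, Y: (s, n) outputs."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimise theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]                       # sum_j lambda_j x_ij <= theta * x_i,j0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                             # sum_j lambda_j y_rj >= y_r,j0
    b_ub[m:] = -Y[:, j0]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Hypothetical usage: X rows = price of labor, price of capital, ...; Y rows = loans, deposits.
# scores = [ccr_efficiency(X, Y, j) for j in range(X.shape[1])]
```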
Abstract:
Using examples from physics and mathematical modeling, the paper shows that increasing efficiency in systems can lead to instability. The question thus arises whether the development of information and telecommunication technology can lead to instability in the economic system. The policy tools used to maintain stability are also discussed.
Abstract:
Bankruptcy prediction has been a fruitful area of research. Univariate analysis and discriminant analysis were the first methodologies used. While they perform relatively well at correctly classifying bankrupt and non-bankrupt firms, their predictive ability has come into question over time. Univariate analysis misses the big picture that financial distress entails. Multivariate discriminant analysis requires stringent assumptions that are violated when dealing with accounting ratios and market variables. This has led to the use of more complex models such as neural networks. While the accuracy of predictions has improved with the use of more technical models, an important point is still missing. Accounting ratios are the usual discriminating variables used in bankruptcy prediction. However, accounting ratios are backward-looking variables; at best, they are a current snapshot of the firm. Market variables are forward-looking variables, determined by discounting future outcomes. Microstructure variables, such as the bid-ask spread, also contain important information. Insiders are privy to more information than the retail investor, so if financial distress is looming, the insiders should know before the general public. Therefore, any bankruptcy prediction model should include market and microstructure variables. That is the focus of this dissertation. The traditional models and the newer, more technical models were tested and compared with the previous literature by employing accounting ratios, market variables, and microstructure variables. Our findings suggest that the more technical models are preferable, and that a mix of accounting and market variables is best at correctly classifying and predicting bankrupt firms. Based on the results, the multi-layer perceptron appears to be the most accurate model. The set of best discriminating variables includes price, the standard deviation of price, the bid-ask spread, net income to sales, working capital to total assets, and current liabilities to total assets.
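A minimal sketch of the multi-layer perceptron setup the dissertation favours, using the accounting, market and microstructure variables listed above as features. The file name, train/test split and network size are assumptions.

```python
# Sketch: MLP bankruptcy classifier on accounting, market and microstructure variables.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

FEATURES = ["price", "price_std", "bid_ask_spread",
            "net_income_to_sales", "working_capital_to_assets",
            "current_liabilities_to_assets"]

# df = pd.read_csv("firm_years.csv")            # hypothetical panel with a bankruptcy flag
# X_train, X_test, y_train, y_test = train_test_split(
#     df[FEATURES], df["bankrupt"], test_size=0.3, stratify=df["bankrupt"], random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
# model.fit(X_train, y_train)
# print(model.score(X_test, y_test))            # classification accuracy on held-out firms
```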
Abstract:
I demonstrate a powerful tension between acquiring information and incorporating it into asset prices, the two core elements of price discovery. As a salient case, I focus on the transformative rise of algorithmic trading (AT) typically associated with improved price efficiency. Using a measure of the relative information content of prices and a comprehensive panel of 37,325 stock-quarters of SEC market data, I establish instead that algorithmic trading strongly decreases the net amount of information in prices. The increase in price distortions associated with the AT “information gap” is roughly $42.6 billion/year for U.S. common stocks around earnings announcement events alone. Information losses are concentrated among stocks with high shares of algorithmic liquidity takers relative to algorithmic liquidity makers, suggesting that aggressive AT powerfully deters fundamental information acquisition despite its importance for translating available information into prices.
Abstract:
This thesis investigates the design of optimal tax systems in dynamic environments. The first essay characterizes the optimal tax system when wages depend on stochastic shocks and work experience. In addition to redistributive and efficiency motives, the taxation of inexperienced workers depends on a second-best requirement that encourages work experience, a social insurance motive and incentive effects. Calibrations using U.S. data yield expected optimal marginal income tax rates that are higher for experienced workers than for most inexperienced workers. They confirm that the average marginal income tax rate increases (decreases) with age when shocks and work experience are substitutes (complements). Finally, more variability in experienced workers' earnings prospects leads to increasing tax rates, since income taxation acts as a social insurance mechanism. In the second essay, the properties of an optimal tax system are investigated in a dynamic private-information economy where labor market frictions create unemployment that destroys workers' human capital. A two-skill-type model is considered in which wages and employment are endogenous. I find that the optimal tax system distorts the first-period wages of all workers below their efficient levels, which leads to more employment. The standard no-distortion-at-the-top result no longer holds, owing to the combination of private information and the destruction of human capital. I show this result analytically under the Maximin social welfare function and confirm it numerically for a general social welfare function. I also investigate the use of a training program and job-creation subsidies. The final essay analyzes the optimal linear tax system when there is a population of individuals whose perceptions of savings are linked to their disposable income and their family background through family cultural transmission. Aside from the standard equity/efficiency trade-off, taxes account for the endogeneity of perceptions through two channels. First, taxing labor decreases income, which decreases the perception of savings over time. Second, taxation of savings corrects the misperceptions of workers and thus their savings and labor decisions. Numerical simulations confirm that behavioural issues push labor income taxes upward to finance saving subsidies. Government transfers to individuals are also decreased to finance those same subsidies.