948 results for Market dynamics
Abstract:
Commercialization efforts to diffuse sustainable energy technologies (SETs) have so far remained the biggest challenge in the field of renewable energy and energy efficiency. The limited success of diffusion through government-driven pathways underscores the need for market-based approaches. This paper reviews the existing state of commercialization of SETs against the backdrop of the basic theory of technology diffusion. The different SETs in India are positioned on the technology diffusion map to reflect their slow state of commercialization. The dynamics of the SET market are analysed to identify the issues, barriers and stakeholders in the process of SET commercialization. By upgrading ‘potential adopters’ to ‘techno-entrepreneurs’, the study presents mechanisms for adopting a private-sector-driven ‘business model’ approach for the successful diffusion of SETs. This is expected to integrate the processes of market transformation and entrepreneurship development with innovative regulatory, marketing, financing, incentive and delivery mechanisms, leading to SET commercialization.
Abstract:
Ultrathin films at fluid interfaces are not only important from a fundamental point of view as 2D complex fluids but have also become increasingly relevant to the development of novel functional materials. There has been an explosion of synthesis work in this area over the last decade, giving rise to many exotic nanostructures at fluid interfaces. However, the factors controlling particle nucleation, growth and self-assembly at interfaces are poorly understood at a quantitative level. We outline some of the recent attempts in this direction. Selected investigations examining the macroscopic mechanical properties of molecular and particulate films at fluid interfaces are reviewed. We conclude with a discussion of the electronic properties of these films, which have potential technological and biological applications.
Abstract:
Nucleation is the first step in a phase transition, where small nuclei of the new phase start appearing in the metastable old phase, such as the appearance of small liquid clusters in a supersaturated vapor. Nucleation is important in various industrial and natural processes, including atmospheric new particle formation: between 20% and 80% of the atmospheric particle concentration is due to nucleation. These atmospheric aerosol particles have a significant effect on both climate and human health. Simulation methods are often applied when studying phenomena that are difficult or even impossible to measure, or when trying to distinguish between the merits of various theoretical approaches. Such methods include, among others, molecular dynamics and Monte Carlo simulations. In this work, molecular dynamics simulations of the homogeneous nucleation of Lennard-Jones argon have been performed; homogeneous means that the nucleation does not occur on a pre-existing surface. The simulations include runs where the starting configuration is a supersaturated vapor and the nucleation event is observed during the simulation (direct simulations), as well as simulations of a cluster in equilibrium with a surrounding vapor (indirect simulations). The latter type is a necessity when the conditions prevent the occurrence of a nucleation event within a reasonable timeframe in the direct simulations. The effect of various temperature control schemes on the nucleation rate (the rate of appearance of clusters that are equally likely to grow to macroscopic size and to evaporate) was studied and found to be relatively small. The method used to extract the nucleation rate was also found to be of minor importance. The cluster sizes from direct and indirect simulations were used in conjunction with the nucleation theorem to calculate formation free energies for the clusters in the indirect simulations.
The results agreed with density functional theory but were higher than values from Monte Carlo simulations. The formation free energies were also used to calculate the surface tension of the clusters. The sizes of the clusters in the direct and indirect simulations were compared, showing that the direct-simulation clusters have more atoms between the liquid-like core of the cluster and the surrounding vapor. Finally, the performance of various nucleation theories in predicting simulated nucleation rates was investigated; among other things, the results once again highlighted the inadequacy of the classical nucleation theory commonly employed in nucleation studies.
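In direct simulations of this kind, one common way to estimate the nucleation rate is to record, for each independent run, the time at which the largest cluster first exceeds a threshold size, and then take the inverse of the mean waiting time per unit volume. The sketch below is a minimal illustration of that idea, not the thesis's actual analysis code; the threshold, trajectories and volume are invented for the example:

```python
import numpy as np

def first_crossing_time(largest_cluster_size, threshold, dt):
    """Time at which the largest cluster first reaches `threshold` atoms.

    largest_cluster_size: per-frame size of the biggest cluster in one run
    dt: time between saved frames
    """
    sizes = np.asarray(largest_cluster_size)
    idx = np.argmax(sizes >= threshold)  # first frame where the condition holds
    return idx * dt

def nucleation_rate(waiting_times, volume):
    """J = 1 / (<t> V), assuming one nucleation event per run."""
    return 1.0 / (np.mean(waiting_times) * volume)

# Two hypothetical runs: largest-cluster sizes sampled every 0.5 time units
t1 = first_crossing_time([1, 2, 5, 10, 20], threshold=10, dt=0.5)      # 1.5
t2 = first_crossing_time([1, 1, 3, 4, 12, 30], threshold=10, dt=0.5)   # 2.0
J = nucleation_rate([t1, t2], volume=2.0)
```

With more runs, the scatter of the individual waiting times also gives an error estimate for J, which is one reason direct simulations use many independent starting configurations.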
Abstract:
Triggered by the very rapid proliferation of Internet connectivity, electronic document management (EDM) systems are now being rapidly adopted for managing the documentation produced and exchanged in construction projects. Nevertheless, there are still substantial barriers to the efficient use of such systems, mainly of a psychological nature and related to insufficient training. This paper presents the results of empirical studies carried out during 2002 on the current usage of EDM systems in the Finnish construction industry. The studies employed three different methods in order to provide a multifaceted view of the problem area, at both the industry and the individual project level. In order to provide an accurate measurement of overall usage volume in the industry as a whole, telephone interviews were conducted with key personnel from 100 randomly chosen construction projects. The interviews showed that while around one third of big projects had already adopted EDM, very few small projects had adopted this technology. The barriers to introduction were investigated through interviews with representatives of half a dozen providers of systems and ASP services. These interviews shed considerable light on the dynamics of the market for this type of service and illustrated the diversity of business strategies adopted by vendors. In the final study, log files from a project that had used an EDM system were analysed in order to determine usage patterns. The results illustrated that use is still incomplete in coverage and that only some of the individuals involved in the project used the system efficiently, either as information producers or as consumers. The study also provided feedback on the usefulness of the log files.
Abstract:
Critical organization scholars have focused increasing attention on industrial and organizational restructurings such as shutdown decisions. However, we know little about the rhetorical strategies used to legitimate or resist plant closures in organizational negotiations. In this paper, we draw on New Rhetoric to analyze rhetorical struggles, strategies and dynamics in unfolding organizational negotiations. We focus on the shutdown of the bus body unit of the Swedish company Volvo in Finland. We distinguish five types of rhetorical legitimation strategies and dynamics. These include the three classical dynamics of logos (rational arguments), pathos (emotional moral arguments), and ethos (authority-based arguments), but also autopoiesis (autopoietic narratives) and cosmos (cosmological constructions). Our analysis adds to previous studies by explaining how organizational restructuring as a phenomenon is legitimated, how this legitimation has changed over time, and how contemporary industrial closures are legitimated in the media. This study also deepens our theoretical understanding of the role of rhetoric in legitimation more generally.
Abstract:
Presented here is the two-phase thermodynamic (2PT) model for calculating the energy and entropy of molecular fluids from the trajectory of molecular dynamics (MD) simulations. In this method, the density of states (DoS) functions (including the normal modes of translational, rotational, and intramolecular vibrational motion) are determined from the Fourier transform of the corresponding velocity autocorrelation functions. A fluidicity parameter (f), extracted from the thermodynamic state of the system derived from the same MD, is used to partition the translational and rotational modes into a diffusive, gas-like component (with 3Nf degrees of freedom) and a nondiffusive, solid-like component. The thermodynamic properties, including the absolute value of the entropy, are then obtained by applying quantum statistics to the solid component and hard-sphere/rigid-rotor thermodynamics to the gas component. The 2PT method produces exact thermodynamic properties of the system in two limiting states: the nondiffusive solid state (where the fluidicity is zero) and the ideal gas state (where the fluidicity becomes unity). We examine the 2PT entropy for various water models (F3C, SPC, SPC/E, TIP3P, and TIP4P-Ew) at ambient conditions and find good agreement with literature results obtained with other simulation techniques. We also validate the entropy of water in the liquid and vapor phases along the vapor-liquid equilibrium curve from the triple point to the critical point. We show that this method produces a converged liquid-phase entropy in tens of picoseconds, making it an efficient means of extracting thermodynamic properties from MD simulations.
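The first step of the 2PT procedure, obtaining a density of states as the Fourier transform of the velocity autocorrelation function, can be sketched as follows. This is a minimal illustration on a synthetic single-frequency velocity signal, not the authors' code; mass weighting and physical normalization constants are omitted:

```python
import numpy as np

def velocity_autocorrelation(v):
    # v: (n_frames, n_atoms, 3) velocities from an MD trajectory
    n = v.shape[0]
    vacf = np.array([np.mean(np.sum(v[:n - t] * v[t:], axis=2))
                     for t in range(n)])
    return vacf / vacf[0]  # normalized VACF

def density_of_states(vacf, dt):
    # DoS as the magnitude of the discrete Fourier transform of the VACF
    spectrum = np.abs(np.fft.rfft(vacf)) * dt
    freqs = np.fft.rfftfreq(len(vacf), d=dt)
    return freqs, spectrum

# Synthetic check: one oscillator at frequency 5 (inverse time units)
dt, n = 0.01, 1000
t = np.arange(n) * dt
v = np.zeros((n, 1, 3))
v[:, 0, 0] = np.cos(2 * np.pi * 5.0 * t)
freqs, dos = density_of_states(velocity_autocorrelation(v), dt)
peak = freqs[np.argmax(dos)]  # DoS peaks near the oscillator frequency
```

In the 2PT scheme, the diffusive part of the fluid shows up as a nonzero DoS at zero frequency, which is what the fluidicity parameter is used to partition off.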
Abstract:
The liquidity crisis that swept through the financial markets in 2007 triggered multi-billion losses and forced buyouts of some large banks. The resulting credit crunch is sometimes compared to the great recession of the early twentieth century. But the crisis also serves as a reminder of the significance of the interbank market and of proper central bank policy in this market. This thesis deals with the implementation of monetary policy in the interbank market and examines how central bank tools affect commercial banks' decisions. I answer the following questions:
• What is the relationship between the policy setup and interbank interest rate volatility? (an averaging reserve requirement reduces the volatility)
• What can explain a weak relationship between market liquidity and the interest rate? (a high reserve requirement buffer)
• What determines banks' decisions on when to satisfy the reserve requirement? (market frictions)
• How did the liquidity crisis that began in 2007 affect interbank market behaviour? (it resulted in higher credit risk and trading frictions as well as an expected liquidity shortage)
Abstract:
This thesis analyzes how matching takes place in the Finnish labor market from three different angles. The Finnish labor market has undergone severe structural changes following the economic crisis of the early 1990s. The labor market has had problems adjusting to these changes, and hence high and persistent unemployment has followed. In this thesis I analyze whether matching problems, and in particular changes in matching, can explain some of this persistence. The thesis consists of three essays. In the first essay, Finnish Evidence of Changes in the Labor Market Matching Process, the matching process in the Finnish labor market is analyzed. The key finding is that the matching process changed thoroughly between the booming 1980s and the post-crisis period. The importance of the number of unemployed, and in particular of the long-term unemployed, for the matching process has vanished. More unemployed do not increase matching as theory predicts, but rather the opposite. In the second essay, The Aggregate Matching Function and Directed Search – Finnish Evidence, stock-flow matching is studied as a potential micro foundation of the aggregate matching function. In the essay I show that the newly unemployed match mainly with the stock of vacancies, while the longer-term unemployed match with the inflow of vacancies. When aggregating, I still find evidence of the traditional aggregate matching function. This could explain the huge support the aggregate matching function has received despite its odd randomness assumption. The third essay, How do Registered Job Seekers really match? – Finnish Occupational Level Evidence, studies matching for nine occupational groups and finds that very different matching problems exist for different occupations. This essay also deals with misspecification stemming from non-corresponding variables through the introduction of a completely new set of variables. The new outflow measure used is vacancies filled with registered job seekers, and it is matched by the supply-side measure of registered job seekers.
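The aggregate matching function discussed above is commonly specified as Cobb-Douglas, M = A·U^α·V^β, and estimated by least squares in logs. A minimal sketch of such an estimation on synthetic data (the data-generating parameters and sample sizes are invented for illustration, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                               # months of (synthetic) data
U = rng.uniform(1e4, 1e5, n)          # stock of unemployed job seekers
V = rng.uniform(1e3, 1e4, n)          # stock of vacancies
alpha, beta, lnA = 0.6, 0.4, -1.0     # "true" matching elasticities
M = np.exp(lnA) * U**alpha * V**beta * np.exp(rng.normal(0, 0.05, n))

# ln M = ln A + alpha ln U + beta ln V + noise, estimated by OLS
X = np.column_stack([np.ones(n), np.log(U), np.log(V)])
coef, *_ = np.linalg.lstsq(X, np.log(M), rcond=None)
# coef = [lnA_hat, alpha_hat, beta_hat]
```

Stock-flow matching, as studied in the second essay, would instead interact each unemployment cohort with the stock versus the inflow of vacancies, but the log-linear estimation step is the same.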
Abstract:
A functioning stock market is an essential component of a competitive economy, since it provides a mechanism for allocating the economy’s capital stock. In an ideal situation, the stock market steers capital in a manner that maximizes the total utility of the economy. As the prices of traded stocks depend on and vary with the information available to investors, it is apparent that information plays a crucial role in a functioning stock market. However, even though information indisputably matters, several questions about how stock markets process and react to new information remain unanswered. The purpose of this thesis is to explore the link between new information and stock market reactions. The first essay utilizes new methodological tools to investigate the average reaction of investors to new financial statement information. The second essay explores the behavior of different types of investors when new financial statement information is disclosed to the market. The third essay looks into the interrelation between investor size, behavior and overconfidence. The fourth essay approaches the puzzle of negative skewness in stock returns from an altogether different angle than previous studies. The first essay presents evidence that the second derivatives of some financial statement signals contain more information than the first derivatives. Further, the empirical evidence also indicates that some of the investigated signals proxy for risk while others contain information priced with a delay. The second essay documents systematic differences in the behavior of different categories of investors when new financial statement information reaches the market. In addition, a theoretical model building on differences in investor overconfidence is put forward to explain the observed behavior. The third essay shows that investor size describes investor behavior very well. This finding is predicted by the model proposed in the second essay, and hence strengthens the model. The behavioral differences between investors of different sizes furthermore have significant economic implications. Finally, the fourth essay finds strong evidence that management news disclosure practices cause negative skewness in stock returns.
Abstract:
Modeling and forecasting implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging, and risk management activities, all of which require an accurate volatility estimate. However, this has become challenging since the 1987 stock market crash, as implied volatilities (IVs) recovered from stock index options present two patterns: the volatility smirk (skew) and the volatility term structure, which together form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are influenced by many underlying risk factors. This thesis, consisting of four essays, models and forecasts implied volatility in the presence of these empirical regularities of the options markets. The first essay models the dynamics of the IVS, extending the Dumas, Fleming and Whaley (DFW) (1998) framework: using moneyness in the implied forward price and OTM put-call options on the FTSE100 index, nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. Here, the constant-volatility model fails to explain the variation in the rich IVS. Next, it is found that three factors can explain about 69-88% of the variance in the IVS: on average, 56% is explained by the level factor, 15% by the term-structure factor, and an additional 7% by the jump-fear factor. The second essay proposes a quantile regression model of the contemporaneous asymmetric return-volatility relationship, generalizing the model of Hibbert et al. (2008). The results show a strong negative asymmetric return-volatility relationship at various quantiles of the IV distributions, one that increases monotonically when moving from the median quantile to the uppermost quantile (95%); OLS therefore underestimates this relationship at the upper quantiles. Additionally, the asymmetric relationship is more pronounced for the smirk-adjusted (skew-adjusted) volatility index measure than for the old volatility index measure. The volatility indices are ranked in terms of asymmetric volatility as follows: VIX, VSTOXX, VDAX, and VXN. The third essay examines the information content of the new VDAX volatility index for forecasting daily Value-at-Risk (VaR) estimates and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are backtested over 1992-2009 using unconditional coverage, independence, conditional coverage, and quadratic-score tests. It is found that the VDAX subsumes almost all the information required for daily VaR forecasts for a portfolio on the DAX30 index; the implied-VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. It is found that three factors can explain 94-97% of the variation in each of the EUR, USD, and GBP swaption IVs. There are significant linkages across factors, and bi-directional causality is at work between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other's shocks; surprisingly, however, GBP does not affect them. Finally, the string market model calibration results show that it can efficiently reproduce (or forecast) the volatility surface for each of the swaption markets.
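The unconditional coverage backtest mentioned above is typically Kupiec's likelihood-ratio test, which compares the observed fraction of VaR violations with the nominal coverage level. A minimal sketch with illustrative numbers (not the thesis's data):

```python
import math

def kupiec_lr(n_obs, n_violations, p):
    """Kupiec unconditional coverage LR statistic (chi-square, 1 df).

    n_obs: number of daily VaR forecasts
    n_violations: days on which the loss exceeded the VaR (assumed 0 < x < n)
    p: nominal violation probability, e.g. 0.05 for a 95% VaR
    """
    x, n = n_violations, n_obs
    phat = x / n
    log_null = (n - x) * math.log(1 - p) + x * math.log(p)       # H0: rate = p
    log_alt = (n - x) * math.log(1 - phat) + x * math.log(phat)  # MLE rate
    return -2.0 * (log_null - log_alt)

# 250 trading days with 5 violations of a 95% VaR (12.5 expected)
lr = kupiec_lr(250, 5, 0.05)  # ~6.07 > 3.84, so correct coverage is rejected at 5%
```

The independence and conditional coverage tests extend this statistic by also modeling the clustering of violations, which is why they are reported alongside the unconditional test.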
Abstract:
Market microstructure is “the study of the trading mechanisms used for financial securities” (Hasbrouck (2007)). It seeks to understand the sources of value and the reasons for trade, in a setting with different types of traders and different private and public information sets. The actual mechanisms of trade are a continually changing object of study. These include continuous markets, auctions, limit order books, dealer markets, and combinations of these operating as a hybrid market. Microstructure also has to allow for the possibility of multiple prices. At any given time an investor may be faced with a multitude of different prices, depending on whether he or she is buying or selling, the quantity he or she wishes to trade, and the required speed of the trade. The price may also depend on the relationship that the trader has with potential counterparties. In this research, I touch upon all of the above issues. I do so by studying three specific areas, all of which have both practical and policy implications. First, I study the role of information in trading and pricing securities in markets with a heterogeneous population of traders, some of whom are informed and some not, and who trade for different private or public reasons. Second, I study the price discovery of stocks in a setting where they are simultaneously traded in more than one market. Third, I contribute to the ongoing discussion about market design, i.e. the question of which trading systems and ways of organizing trading are most efficient. A common characteristic throughout my thesis is the use of high-frequency datasets, i.e. tick data. These datasets include all trades and quotes in a given security, rather than just the daily closing prices as in the traditional asset pricing literature. This thesis consists of four separate essays. In the first essay I study price discovery for European companies cross-listed in the United States. I also study explanatory variables for differences in price discovery. In my second essay I contribute to earlier research on two issues of broad interest in market microstructure: market transparency and informed trading. I examine the effects of a change to an anonymous market at the OMX Helsinki Stock Exchange. In the third essay I broaden my focus slightly to include releases of macroeconomic data in the United States, analyzing the effect of these releases on European cross-listed stocks. The fourth and last essay applies standard methodologies of price discovery analysis in a novel way: specifically, I study price discovery within one market, between local and foreign traders.
Abstract:
The integrated European debt capital market has undoubtedly broadened the possibilities for companies to access funding from the public and challenged investors to cope with the ever-increasing complexity of its market participants. Well into the Euro era, it is clear that the unified market has created potential for all involved parties, where investment opportunities are able to meet a supply of funds from a broad geographical area now summoned under a single currency. Europe’s traditionally heavy dependence on bank lending as a source of debt capital has thus been easing, as corporate residents are able to tap a deep and liquid capital market to satisfy their funding needs. As national barriers eroded with the inauguration of the Euro and interest rates for the EMU members converged towards overall lower yields, a new source of debt capital emerged for the vast majority of corporate residents under the new currency and offered an alternative to the traditionally more maturity-restricted bank debt. With increased sophistication came an improved knowledge and understanding of the market and its participants. Further, investors became more willing to bear credit risk, which opened the market to firms of ever lower creditworthiness. In the process, the market as a whole saw a change in the profile of issuers, as non-financial firms increasingly sought their funding directly from the bond market. This thesis consists of three separate empirical studies of how corporates fund themselves on the European debt capital markets. The analysis focuses on a firm’s access to and behaviour on the capital market subsequent to the decision to raise capital through the issuance of arm’s-length debt on the bond market. The specific areas examined contribute to our knowledge in the fields of corporate finance and financial markets by explicitly considering firms’ primary market activities within the new market area.
The first essay explores how an issuer's reputation affects its debt issuance. The second essay examines the choice of interest rate exposure on newly issued debt, and the third and final essay explores pricing anomalies in corporate debt issues.
Abstract:
This doctoral dissertation takes a buy-side perspective on third-party logistics (3PL) providers’ service tiering by applying a linear serial dyadic view to transactions. It takes its point of departure not only from the focus on dyads as units of analysis and how to manage them, but also from the characteristics that both create and determine purposeful conditions of longer duration. A conceptual framework is proposed and evaluated on its ability to capture logistics service buyers’ perceptions of service tiering. The problem discussed lies in the theoretical context of logistics and reflects value appropriation, power dependencies, visibility in linear serial dyads, a movement towards more market-governed modes of transactions (i.e. service tiering), and buyers’ risk perception of broader utilisation of the logistics services market. Service tiering in a supply chain setting, given the lack of multilateral agreements between supply chain members, is new. The deductive research approach applied, in which theoretically based propositions are empirically tested with quantitative and qualitative data, provides new insight into (contractual) transactions in 3PL. The study findings imply that the understanding of power dependencies and supply chain dynamics in a 3PL context is still in its infancy. The issues identified include the separation of service responsibilities, supply chain visibility, price-making behaviour, and supply chain strategies under changing circumstances or the influence of non-immediate supply chain actors. Understanding (or failing to understand) these issues may have remarkable implications for the industry. Thus, the contingencies may trigger more open-book policies, a larger liability scope for 3PL service providers, or insourcing of critical logistics activities from the first-tier buyer's core business and customer service perspectives.
In addition, a sufficient understanding of the issues surrounding service tiering enables proactive responses and the devising of appropriate supply chain strategies. The author concludes that qualitative research designs facilitating data collection on multiple supply chain actors may capture and increase understanding of the impact of broader supply chain strategies. This would enable pattern-matching through an examination of two or more sides of exchange transactions, so as to measure relational symmetries across linear serial dyads. Indeed, the performance of the firm depends not only on how efficiently it cooperates with its partners, but also on how well exchange partners cooperate with the organisation's own business.
Abstract:
In this paper I investigate the exercise policy of executive stock option holders in Finland, and the market reaction to it. The empirical tests are conducted with aggregated firm-level data from 34 firms and 41 stock option programs. I find some evidence of an inverse relation between the exercise intensity of the option holders and the future abnormal return of the company share price. This finding is supported by the view that information about future company prospects seems to be the only theoretical attribute that could delay the exercise of the options. Moreover, a high concentration of exercises at the beginning of the exercise window is predicted, and the market is expected to react to deviations from this. The empirical findings, however, show that the market does not react homogeneously to the information revealed by late exercises.