846 results for Efficient market theory


Relevance: 30.00%

Abstract:

This thesis analyzes matching in the Finnish labor market from three different angles. The Finnish labor market underwent severe structural changes following the economic crisis of the early 1990s. It has had problems adjusting to these changes, and high, persistent unemployment has followed. In this thesis I analyze whether matching problems, and in particular changes in matching, can explain some of this persistence. The thesis consists of three essays. The first essay, "Finnish Evidence of Changes in the Labor Market Matching Process", analyzes the matching process in the Finnish labor market. The key finding is that the matching process changed fundamentally between the booming 1980s and the post-crisis period. The importance of the number of unemployed, and in particular of the long-term unemployed, for the matching process has vanished. A larger pool of unemployed does not increase matching as theory predicts, but rather the opposite. The second essay, "The Aggregate Matching Function and Directed Search: Finnish Evidence", studies stock-flow matching as a potential microfoundation of the aggregate matching function. In the essay I show that the newly unemployed match mainly with the stock of vacancies, while the longer-term unemployed match with the inflow of vacancies. When aggregating, I still find evidence of the traditional aggregate matching function. This could explain the strong support the aggregate matching function has received despite its odd randomness assumption. The third essay, "How do Registered Job Seekers Really Match? Finnish Occupational Level Evidence", studies matching for nine occupational groups and finds that very different matching problems exist for different occupations. This essay also deals with misspecification stemming from non-corresponding variables by introducing a completely new set of variables: the new outflow measure is vacancies filled by registered job seekers, matched on the supply side by the number of registered job seekers.
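The aggregate matching function discussed above is typically a Cobb-Douglas relation M = A·U^α·V^β between matches M, unemployed U and vacancies V, estimated in logs. A minimal sketch on synthetic data (the model form is standard in this literature, but the numbers here are invented, not the thesis's Finnish data):

```python
import numpy as np

# Illustrative only: estimate a Cobb-Douglas matching function
# M = A * U**alpha * V**beta by OLS in logs on synthetic data.
rng = np.random.default_rng(0)
n = 200
U = rng.uniform(50, 500, n)            # unemployed job seekers
V = rng.uniform(10, 100, n)            # open vacancies
alpha_true, beta_true, A_true = 0.6, 0.4, 0.05
M = A_true * U**alpha_true * V**beta_true * np.exp(rng.normal(0, 0.05, n))

# log M = log A + alpha * log U + beta * log V + error
X = np.column_stack([np.ones(n), np.log(U), np.log(V)])
coef, *_ = np.linalg.lstsq(X, np.log(M), rcond=None)
log_A, alpha_hat, beta_hat = coef
print(round(alpha_hat, 2), round(beta_hat, 2))  # near 0.6 and 0.4
```

A vanishing coefficient on log U in post-crisis data would correspond to the first essay's finding that the stock of unemployed no longer contributes to matching.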

Relevance: 30.00%

Abstract:

Market microstructure is "the study of the trading mechanisms used for financial securities" (Hasbrouck, 2007). It seeks to understand the sources of value and reasons for trade, in a setting with different types of traders and different private and public information sets. The actual mechanisms of trade are a continually changing object of study. These include continuous markets, auctions, limit order books, dealer markets, or combinations of these operating as a hybrid market. Microstructure also has to allow for the possibility of multiple prices. At any given time an investor may be faced with a multitude of different prices, depending on whether he or she is buying or selling, the quantity he or she wishes to trade, and the required speed of the trade. The price may also depend on the relationship the trader has with potential counterparties. In this research, I touch upon all of the above issues. I do so by studying three specific areas, all of which have both practical and policy implications. First, I study the role of information in trading and pricing securities in markets with a heterogeneous population of traders, some of whom are informed and some not, and who trade for different private or public reasons. Second, I study the price discovery of stocks in a setting where they are simultaneously traded in more than one market. Third, I contribute to the ongoing discussion about market design, i.e., the question of which trading systems and ways of organizing trading are most efficient. A common characteristic throughout my thesis is the use of high-frequency datasets, i.e., tick data. These datasets include all trades and quotes in a given security, rather than just the daily closing prices used in the traditional asset-pricing literature. This thesis consists of four separate essays. In the first essay I study price discovery for European companies cross-listed in the United States, as well as explanatory variables for differences in price discovery. In my second essay I contribute to earlier research on two issues of broad interest in market microstructure: market transparency and informed trading. I examine the effects of a change to an anonymous market at the OMX Helsinki Stock Exchange. I broaden my focus slightly in the third essay to include releases of macroeconomic data in the United States, analyzing the effect of these releases on European cross-listed stocks. The fourth and last essay applies standard methodologies of price discovery analysis in a novel way: I study price discovery within one market, between local and foreign traders.

Relevance: 30.00%

Abstract:

Wealthy individuals - business angels who invest a share of their net worth in entrepreneurial ventures - form an essential part of an informal venture capital market that can secure funding for entrepreneurial ventures. In Finland, business angels represent an untapped pool of capital that can contribute to fostering entrepreneurial development. In addition, business angels can bridge knowledge gaps in new business ventures by making their human capital available. This study has two objectives. The first is to gain an understanding of the characteristics and investment behaviour of Finnish business angels, with the strongest focus on due diligence procedures and involvement post-investment. The second is to assess whether agency theory and incomplete contracting theory are useful theoretical lenses in the arena of business angels. To achieve the second objective, this study investigates i) how risk is mitigated in the investment process, ii) how uncertainty influences the comprehensiveness of due diligence, and iii) how control is allocated post-investment. Research hypotheses are derived from assumptions underlying agency theory and incomplete contracting theory. The data for this study comprise interviews with 53 business angels; in terms of sample size this is the largest study of Finnish business angels to date. The research hypotheses are tested using regression analysis. This study suggests that the Finnish informal venture capital market comprises a limited number of business angels whose style of investing closely resembles that of their formal counterparts. Much focus is placed on managing risks prior to making the investment, through strong selectivity and relatively comprehensive due diligence. Involvement is rarely on a day-to-day basis, and many business angels seem to see board membership as a more suitable alternative to involvement in the operations of an entrepreneurial venture. The uncertainty involved does not seem to drive an increase in due diligence. On the contrary, due diligence appears more rigorous in safer later-stage investments and when the business angels have considerable previous experience as investors. Finnish business angels' involvement post-investment is best explained by their degree of ownership in the entrepreneurial venture. It seems that when investors feel they are sufficiently rewarded, in terms of an adequate equity stake, they are willing to involve themselves actively in their investments. The lack of support for a relationship between increased uncertainty and the comprehensiveness of due diligence may partly be explained by an increasing trend towards portfolio diversification, triggered by a taxation system that favours investments through investment companies rather than direct investments. Many business angels appear to have replaced a specialization strategy that builds on reducing uncertainty with a diversification strategy that builds on reducing firm-specific (idiosyncratic) risk by holding shares in ventures whose returns are not expected to exhibit a strong positive correlation.

Relevance: 30.00%

Abstract:

This report presents a new theory of internal marketing. The thesis developed as a case study in retrospective action research, which began with the author's personal involvement in an action research project for customer service improvement at a large Australian retail bank. In other words, much of the theory-generating 'research' took place after the original project 'action' had wound down. The key theoretical proposition is that internal marketing is a relationship development strategy for the purpose of knowledge renewal. In the banking case, exchanges of value between employee participants emerged as the basis for relationship development, with synergistic benefits for customers, employees and the bank. Relationship development turned out to be the mediating variable between the learning activity of employee participants at the project level and success in knowledge renewal at the organisational level. Relationship development was also a pivotal factor in the motivation and customer consciousness of employees. The conclusion reached is that the strength of relationship-mediated internal marketing lies in combining a market-focused commitment with employee freedom in project work to achieve knowledge renewal. The forgotten truth is that organisational knowledge can be renewed through dialogue and learning, through being trustworthy, and by gaining the trust of employees in return.

Relevance: 30.00%

Abstract:

Combining the advanced techniques of optimal dynamic inversion and model-following neuro-adaptive control design, an efficient technique is presented for the effective treatment of chronic myelogenous leukemia (CML). A recently developed nonlinear mathematical model of cell dynamics is used for the control (medication) synthesis. First, taking a set of nominal parameters, a nominal controller is designed based on the principle of optimal dynamic inversion. This controller can effectively treat nominal patients (patients having the same nominal parameters as used for the control design). However, since the parameters of an actual patient can differ from those of the ideal patient, to make the treatment strategy more effective and efficient, a model-following neuro-adaptive controller is augmented to the nominal controller. In this approach, a neural network trained online (based on Lyapunov stability theory) facilitates a new adaptive controller, computed online. Simulation studies show this adaptive control design approach (treatment strategy) to be very effective in treating CML in actual patients. Sufficient generality is retained in the theoretical developments so that the techniques presented can be applied to other similar problems as well. Note that the technique presented is computationally non-intensive, and all computations can be carried out online.
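The core of dynamic inversion is to cancel the known plant dynamics and impose desired error dynamics. A minimal sketch on a scalar system (the drift and control-effectiveness functions below are invented for illustration; the paper's CML cell-dynamics model is far richer):

```python
import numpy as np

# Illustrative dynamic inversion on a scalar plant x_dot = f(x) + g(x)*u.
# We invert the dynamics so the tracking error e = x - x_ref decays
# as e_dot = -k * e.
f = lambda x: -0.5 * x + 0.1 * x**2       # assumed nonlinear drift
g = lambda x: 1.0 + 0.2 * np.sin(x)       # assumed control effectiveness
x_ref, k, dt = 1.0, 2.0, 0.01             # target state, gain, time step

x = 5.0                                    # initial "diseased" state
for _ in range(2000):
    u = (-f(x) - k * (x - x_ref)) / g(x)  # inversion-based control law
    x += dt * (f(x) + g(x) * u)           # Euler integration of plant

print(round(x, 3))  # state driven to the reference x_ref = 1.0
```

In the paper's setting, the neuro-adaptive layer would additionally compensate online for the mismatch between the nominal f, g and the actual patient's dynamics.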

Relevance: 30.00%

Abstract:

This research investigates the impacts of agricultural market liberalization on food security in developing countries and evaluates the supply perspective of food security. This theme is applied to the agricultural sectors of Kenya and Zambia by studying the role policies played in the maize sub-sector. Selected policies introduced at the beginning of the 1980s are evaluated, along with whether those policies influenced maize output. A theoretical model of agricultural production is then formulated to reflect cereal production in a developing-country setting. This study begins with a review of the general framework and aims of the structural adjustment programs and proceeds to their application in the maize sector in Kenya and Zambia. A literature review of the supply and demand synthesis of food security is presented, with examples from various developing countries. Contrary to previous studies of food security, this study assesses two countries with divergent economic orientations and evaluates the agricultural sector's response to economic and institutional policies in these different settings. Finally, a dynamic time-series econometric model is applied to assess the effects of policy on maize output. The empirical findings suggest a weak policy influence on maize output, while the precipitation and acreage variables stand out as the core determinants of maize output. The policy dimension of acreage, and how markets influence it, is not discussed at length in this study: owing to weak land rights and tenure structures in these countries, the direct impact of policy change on land markets cannot be precisely measured. Recurring government intervention during the structural policy implementation period impeded the efficient functioning of input and output markets, particularly in Zambia. Input and output prices of maize and fertilizer responded more strongly in Kenya than in Zambia, where the state often yielded to public pressure by revoking pertinent policy measures. These interpretations are based on the policy variables, which are more responsive in Kenya than in Zambia. According to the regression results obtained, agricultural markets in general, and the maize sub-sector in particular, responded more positively to the implemented policies in Kenya than in Zambia, which supported a more socialist economic system. The results show that, for policies to be effective, sector and regional dimensions need to be considered; these dimensions were not taken into account in the formulation and implementation of the structural adjustment policies of the 1980s. It can be noted that countries with vibrant economic structures and institutions fared better than those with a firm, socially founded system.
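The kind of output regression described above can be sketched as an OLS fit of maize output on precipitation, acreage, and a policy dummy. The data and coefficients below are synthetic placeholders, not the study's; the point is only the mechanics of isolating rainfall and acreage effects from a policy effect:

```python
import numpy as np

# Synthetic illustration of a maize-output regression: output on
# precipitation, acreage, and a liberalization dummy.
rng = np.random.default_rng(1)
n = 40                                        # e.g. 40 annual observations
rain = rng.uniform(400, 1200, n)              # mm per season
acreage = rng.uniform(0.8, 1.6, n)            # million hectares
policy = (np.arange(n) >= 20).astype(float)   # post-liberalization dummy
output = 0.004 * rain + 2.0 * acreage + 0.05 * policy + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), rain, acreage, policy])
b, *_ = np.linalg.lstsq(X, output, rcond=None)
print(b[1] > 0, b[2] > 0)   # rainfall and acreage coefficients
```

A weak policy coefficient relative to the rainfall and acreage coefficients would mirror the study's empirical finding.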

Relevance: 30.00%

Abstract:

The domination and Hamilton circuit problems are of interest both in algorithm design and in complexity theory. The domination problem has applications in facility location, and the Hamilton circuit problem has applications in routing problems in communications and operations research. The problem of deciding whether G has a dominating set of cardinality at most k, and the problem of determining whether G has a Hamilton circuit, are NP-complete. Polynomial-time algorithms are, however, available for a large number of restricted classes. A motivation for the study of these algorithms is that they not only give insight into the characterization of these classes but also require a variety of algorithmic techniques and data structures, so the search for efficient algorithms for these problems in many classes still continues. A class of perfect graphs that is practically important and mathematically interesting is the class of permutation graphs. The domination problem is polynomial-time solvable on permutation graphs, but the algorithms already available have time complexity O(n²) or more, and space complexity O(n²), on these graphs; the Hamilton circuit problem is open for this class. We present a simple O(n)-time and O(n)-space algorithm for the domination problem on permutation graphs. Unlike the existing algorithms, we use the concept of a geometric representation of permutation graphs. Further, exploiting this geometric notion, we develop an O(n²)-time and O(n)-space algorithm for the Hamilton circuit problem.
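The geometric representation referred to above can be pictured as a matching diagram: vertex i is a segment from position i on a top line to the position of i in the permutation on a bottom line, and two vertices are adjacent exactly when their segments cross. A small sketch (brute force, purely to make the definitions concrete; it is not the paper's O(n) algorithm):

```python
from itertools import combinations

# Permutation graph from the matching-diagram view: i and j are
# adjacent iff the permutation inverts their order (segments cross).
def permutation_graph(pi):
    pos = {v: i for i, v in enumerate(pi)}        # position of v in pi
    n = len(pi)
    return {frozenset((i, j))
            for i, j in combinations(range(1, n + 1), 2)
            if (i - j) * (pos[i] - pos[j]) < 0}   # segments cross

def is_dominating(edges, n, S):
    covered = set(S)
    for e in edges:
        a, b = tuple(e)
        if a in S: covered.add(b)
        if b in S: covered.add(a)
    return covered == set(range(1, n + 1))

pi = [4, 3, 6, 1, 5, 2]                           # example permutation
E = permutation_graph(pi)
# Brute-force minimum dominating set, useful for sanity-checking
# faster algorithms on small instances.
best = min((set(S) for r in range(1, 7)
            for S in combinations(range(1, 7), r)
            if is_dominating(E, 6, set(S))), key=len)
print(len(best))
```

The crossing condition is exactly what a linear-time algorithm can exploit by sweeping the diagram left to right.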

Relevance: 30.00%

Abstract:

Competition is an immensely important area of study in economic theory, business and strategy. It is known to be vital in meeting consumers' growing expectations, stimulating growth in the size of the market, pushing innovation, reducing cost and consequently generating better value for end users, among other things. That said, it is important to recognize that supply chains, as we know them, have changed the way companies deal with each other, in both confrontational and conciliatory terms. With the rise of global markets and outsourcing destinations, increased technological development in transportation, communication and telecommunications has meant that geographical barriers of distance with regard to competition are a thing of the past in an increasingly flat world. Even though the dominant articulation of competition within the management and business literature rests mostly within economic competition theory, this thesis draws attention to the implicit shift in the recognition of other forms of competition in today's business environment, especially those involving supply chain structures. There is broad agreement in the business arena that competition between companies is set to take place along their supply chains, and management's attention has accordingly been focused on how supply chains could become more aggressive, making each firm in a supply chain more efficient. However, there is much disagreement on the mechanism through which such competition, pitching supply chain against supply chain, will take place. The purpose of this thesis, therefore, is to develop and conceptualize the notion of supply chain vs. supply chain competition within the discipline of supply chain management. The thesis proposes that competition between supply chains may be carried forward via competition theories that emphasize interaction and dimensionality, with supply chains encountering friction from a number of sources in their search for critical resources and services. The thesis demonstrates how supply chain vs. supply chain competition may be carried out theoretically, using generated data for illustration, and practically, using logistics centers as a link between theory and the corresponding practice of this evolving competition mode. The thesis concludes that supply chain vs. supply chain competition, whatever conceptualization is taken, is complex, novel, and can very easily be distorted and abused. It therefore calls for the joint development of regulatory measures by practitioners and policymakers alike to guide this developing mode of competition.

Relevance: 30.00%

Abstract:

Our study concerns an important current problem: the diffusion of information in social networks. This problem has received significant attention from the Internet research community in recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) the top-k nodes problem and 2) the lambda-coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The lambda-coverage problem is concerned with finding a minimum-size set of key nodes that can influence a given percentage lambda of the nodes in the entire network. We propose a new way of solving these problems using the concept of the Shapley value, a well-known solution concept in cooperative game theory. Our approach leads to algorithms, which we call the ShaPley value-based Influential Nodes (SPIN) algorithms, for solving the top-k nodes problem and the lambda-coverage problem. We compare the performance of the proposed SPIN algorithms with well-known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, the NIPS coauthorship data set, the Netscience data set, the High-Energy Physics data set, and the Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient. Note to Practitioners - In recent times, social networks have received a high level of attention due to their proven ability to improve the performance of web search, recommendations in collaborative filtering systems, the spread of a technology in the market through viral marketing techniques, etc. It is well known that the interpersonal relationships (or ties or links) between individuals cause change or improvement in the social system, because the decisions made by individuals are heavily influenced by the behavior of their neighbors. An interesting and key problem in social networks is to discover the nodes that can influence other nodes in a strong and deep way. This problem is called the target set selection problem and has two variants: 1) the top-k nodes problem, where we are required to identify a set of k influential nodes that maximize the number of nodes being influenced in the network, and 2) the lambda-coverage problem, which involves finding a minimum-size set of influential nodes that can influence a given percentage lambda of the nodes in the entire network. There are many existing algorithms in the literature for solving these problems. In this paper, we propose a new algorithm based on a novel interpretation of information diffusion in a social network as a cooperative game. Using this analogy, we develop an algorithm based on the Shapley value of the underlying cooperative game. The proposed algorithm outperforms the existing algorithms in generality, computational complexity, or both. Our results are validated through extensive experimentation on both synthetically generated and real-world data sets.
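The Shapley-value idea behind this approach can be sketched as follows: treat "nodes influenced" as the characteristic function of a cooperative game and estimate each node's Shapley value by Monte Carlo permutation sampling. This is our reading of the general technique, not the paper's exact SPIN algorithm; the toy graph and the one-hop coverage proxy for influence are invented for illustration:

```python
import random

# Toy undirected graph as an adjacency list.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}

def coverage(S):
    """v(S): nodes in S plus their direct neighbors, a simple
    stand-in for influence spread."""
    out = set(S)
    for u in S:
        out.update(graph[u])
    return len(out)

def shapley(num_samples=2000, seed=7):
    """Monte Carlo Shapley values: average marginal contribution of
    each node over random orderings."""
    random.seed(seed)
    nodes, phi = list(graph), {u: 0.0 for u in graph}
    for _ in range(num_samples):
        random.shuffle(nodes)
        S, prev = [], 0
        for u in nodes:
            S.append(u)
            val = coverage(S)
            phi[u] += val - prev          # marginal contribution of u
            prev = val
    return {u: p / num_samples for u, p in phi.items()}

phi = shapley()
top2 = sorted(phi, key=phi.get, reverse=True)[:2]
print(sorted(top2))
```

Ranking nodes by estimated Shapley value and taking the k best is the essence of a top-k selection; the lambda-coverage variant would instead grow the set until a target fraction of nodes is covered.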

Relevance: 30.00%

Abstract:

Electronic exchanges are double-sided marketplaces that allow multiple buyers to trade with multiple sellers, with aggregation of demand and supply across the bids to maximize the revenue in the market. In this paper, we propose a new design approach for a one-shot exchange that collects bids from buyers and sellers and clears the market at the end of the bidding period. The main principle of the approach is to decouple allocation from pricing. It is well known that it is impossible for an exchange with voluntary participation to be both efficient and budget-balanced, and budget balance is a mandatory requirement for an exchange to operate at a profit. Our approach is to allocate the trade so as to maximize the reported values of the agents. The pricing is then posed as a payoff determination problem that distributes the total payoff fairly to all agents, with budget balance imposed as a constraint. We devise an arbitration scheme via an axiomatic approach to solve the payoff determination problem using the added-value concept of game theory.
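The allocation step can be sketched for the simplest case of single-unit bids on one good (an illustration of the "maximize reported value" principle, not the paper's actual mechanism): match the highest buy bids with the lowest sell asks while each pair still adds value, and leave the division of the resulting surplus to a separate pricing rule.

```python
# One-shot call-exchange clearing sketch: allocation only, with the
# total reported surplus left for a separate pricing/arbitration step.
def clear(buy_bids, sell_asks):
    buys = sorted(buy_bids, reverse=True)
    sells = sorted(sell_asks)
    trades, surplus = 0, 0.0
    for b, s in zip(buys, sells):
        if b < s:                 # further pairs would reduce value
            break
        trades += 1
        surplus += b - s          # reported gain from this trade
    return trades, surplus

q, v = clear([10, 8, 6, 3], [2, 5, 7, 9])
print(q, v)  # pairs (10,2) and (8,5) trade; (6,7) does not
```

Decoupling means the pricing rule may divide `v` among the four participating agents in any budget-balanced way without altering which trades happen.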

Relevance: 30.00%

Abstract:

In data mining, an important goal is to generate an abstraction of the data. Such an abstraction helps in reducing the space and search-time requirements of the overall decision-making process. Further, it is important that the abstraction be generated from the data with a small number of disk scans. We propose a novel data structure, the pattern count tree (PC-tree), that can be built by scanning the database only once. The PC-tree is a minimal-size complete representation of the data, and it can be used to represent dynamic databases with the help of knowledge that is either static or changing. We show that further compactness can be achieved by constructing the PC-tree on segmented patterns. We exploit the flexibility offered by rough sets to realize a rough PC-tree and use it for efficient and effective rough classification. To be consistent with the sizes of the branches of the PC-tree, we use upper and lower approximations of feature sets in a manner different from conventional rough set theory. We conducted experiments using the proposed classification scheme on a large-scale handwritten-digit data set and use the experimental results to establish the efficacy of the proposed approach. (C) 2002 Elsevier Science B.V. All rights reserved.
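The single-scan idea can be made concrete with a small prefix-count tree: transactions sharing a prefix (under a canonical item order) share a path, and each node counts how many transactions pass through it. This is a minimal sketch of the general pattern-count-tree idea, not the paper's exact PC-tree or its rough-set extension:

```python
# Minimal prefix-count tree built in a single scan of the database.
class Node:
    def __init__(self):
        self.count = 0
        self.children = {}

def build(transactions):
    root = Node()
    for t in transactions:            # one pass over the database
        node = root
        for item in sorted(t):        # canonical item order
            node = node.children.setdefault(item, Node())
            node.count += 1
    return root

def prefix_count(root, prefix):
    """Number of transactions whose sorted form starts with `prefix`."""
    node = root
    for item in prefix:
        if item not in node.children:
            return 0
        node = node.children[item]
    return node.count

db = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}]
root = build(db)
print(prefix_count(root, ['a', 'b']))  # two transactions start with a, b
```

Because identical prefixes collapse into one path, the tree is typically much smaller than the raw transaction list while remaining a complete representation of it.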

Relevance: 30.00%

Abstract:

We present external-memory data structures for efficiently answering range-aggregate queries. The range-aggregate problem is defined as follows: given a set of weighted points in R^d, compute the aggregate of the weights of the points that lie inside a d-dimensional orthogonal query rectangle. The aggregates we consider in this paper include COUNT, SUM, and MAX. First, we develop a structure for answering two-dimensional range-COUNT queries that uses O(N/B) disk blocks and answers a query in O(log_B N) I/Os, where N is the number of input points and B is the disk block size. The structure can be extended to obtain a near-linear-size structure for answering range-SUM queries using O(log_B N) I/Os, and a linear-size structure for answering range-MAX queries in O(log_B^2 N) I/Os. Our structures can be made dynamic and extended to higher dimensions. (C) 2012 Elsevier B.V. All rights reserved.
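The query being solved can be illustrated with a simple in-memory version: a 2D prefix-sum grid answers an orthogonal range-COUNT with four lookups via inclusion-exclusion. This shows only the query semantics; the paper's contribution is achieving this I/O-efficiently in external memory, which this sketch does not attempt:

```python
import numpy as np

# In-memory range-COUNT via an inclusive 2D prefix-sum grid.
pts = [(1, 2), (3, 3), (2, 5), (4, 1), (3, 4)]
n = 6                                          # grid side (small demo)
grid = np.zeros((n, n), dtype=int)
for x, y in pts:
    grid[x, y] += 1
P = grid.cumsum(axis=0).cumsum(axis=1)         # inclusive prefix sums

def range_count(x1, y1, x2, y2):
    """Points with x1 <= x <= x2 and y1 <= y <= y2 (inclusion-exclusion)."""
    total = P[x2, y2]
    if x1: total -= P[x1 - 1, y2]
    if y1: total -= P[x2, y1 - 1]
    if x1 and y1: total += P[x1 - 1, y1 - 1]
    return int(total)

print(range_count(2, 2, 4, 5))  # counts (3,3), (2,5), (3,4)
```

Replacing the counts in `grid` with point weights turns the same structure into a range-SUM answerer; range-MAX does not decompose this way, which is why it needs a different (and costlier) structure.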

Relevance: 30.00%

Abstract:

We address the problem of mining targeted association rules over multidimensional market-basket data. Here, each transaction has, in addition to the set of purchased items, ancillary dimension attributes associated with it. Based on these dimensions, transactions can be visualized as distributed over the cells of an n-dimensional cube. In this framework, a targeted association rule is of the form {X -> Y}_R, where R is a convex region in the cube and X -> Y is a traditional association rule within region R. We first describe the TOARM algorithm, based on classical techniques, for identifying targeted association rules. Then, we discuss the concepts of bottom-up aggregation and cubing, leading to the CellUnion technique. This approach is further extended, using notions of cube-count interleaving and credit-based pruning, to derive the IceCube algorithm. Our experiments demonstrate that IceCube consistently provides the best execution-time performance, especially for large and complex data cubes.
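The semantics of a targeted rule {X -> Y}_R can be sketched directly: restrict the database to the transactions whose dimension attributes fall in region R, then measure support and confidence of X -> Y within that region only. The tiny dataset and dimension names below are invented for illustration; this is the naive evaluation, not the TOARM/CellUnion/IceCube machinery:

```python
# Transactions as (itemset, dimension-attributes) pairs.
txns = [
    ({'milk', 'bread'}, {'city': 'A', 'month': 1}),
    ({'milk', 'bread'}, {'city': 'A', 'month': 2}),
    ({'milk'},          {'city': 'A', 'month': 1}),
    ({'bread'},         {'city': 'B', 'month': 1}),
]

def rule_stats(X, Y, region):
    """Support and confidence of X -> Y among transactions in `region`."""
    in_region = [items for items, dims in txns
                 if all(dims[k] in v for k, v in region.items())]
    both = sum(1 for t in in_region if X <= t and Y <= t)
    ante = sum(1 for t in in_region if X <= t)
    support = both / len(in_region)
    confidence = both / ante if ante else 0.0
    return support, confidence

R = {'city': {'A'}}                    # region: city A, any month
s, c = rule_stats({'milk'}, {'bread'}, R)
print(round(s, 2), round(c, 2))
```

The algorithms in the paper exist precisely because re-scanning the region for every candidate rule, as done here, does not scale to large cubes.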

Relevance: 30.00%

Abstract:

Bentonite clays have proven attractive as buffer and backfill material in high-level nuclear waste repositories around the world. A quick estimation of the swelling pressures of compacted bentonites for different clay-water-electrolyte interactions is essential in the design of buffer and backfill materials. Theoretical studies of the swelling behavior of bentonites are based on diffuse double layer (DDL) theory. To establish the theoretical relationship between void ratio and swelling pressure (e versus P), evaluation of an elliptic integral and an inverse analysis are unavoidable. In this paper, a novel procedure is presented to establish the theoretical e versus P relationship based on the Gouy-Chapman method. The proposed procedure establishes a unique relationship between the electric potentials of interacting and non-interacting diffuse clay-water-electrolyte systems, from which the relation between swelling pressure and void ratio is deduced. This approach is simple and alleviates the need for elliptic integral evaluation and for the inverse analysis. Further, application of the proposed approach to estimate the swelling pressures of four compacted bentonites (MX 80, Febex, Montigel and Kunigel V1) at different dry densities shows that the method is very simple and predicts solutions with very good accuracy. Moreover, the proposed procedure provides continuous distributions of e versus P and is thus computationally efficient compared with existing techniques.
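In Gouy-Chapman-based DDL models, the swelling pressure follows from the nondimensional midplane potential u between interacting clay platelets via the standard van't Hoff-type relation P = 2·n0·k·T·(cosh u - 1). A sketch of that relation (this is the textbook formula, not the paper's new inverse-free procedure; the electrolyte concentration is an assumed example value):

```python
import math

# Standard DDL swelling-pressure relation: P = 2*n0*k*T*(cosh(u) - 1).
k = 1.380649e-23           # Boltzmann constant, J/K
T = 298.0                  # temperature, K
N_A = 6.02214076e23        # Avogadro's number, 1/mol
conc = 0.01                # bulk electrolyte concentration, mol/L (assumed)
n0 = conc * 1000 * N_A     # bulk ion concentration, ions/m^3

def swelling_pressure(u_mid):
    """Swelling pressure in Pa from nondimensional midplane potential."""
    return 2 * n0 * k * T * (math.cosh(u_mid) - 1)

# Pressure vanishes for non-interacting platelets (u = 0) and rises
# steeply as the midplane potential grows.
print(swelling_pressure(2.0) < swelling_pressure(4.0))
```

The hard part the paper addresses is obtaining u for a given void ratio, which classically requires the elliptic integral and inverse analysis this formula alone does not solve.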

Relevance: 30.00%

Abstract:

We propose a new approach to clustering. Our idea is to map cluster formation to coalition formation in cooperative games, and to use the Shapley value of the patterns to identify clusters and cluster representatives. We show that the underlying game is convex, and this leads to an efficient biobjective clustering algorithm that we call BiGC. The algorithm yields high-quality clustering with respect to average point-to-center distance (potential) as well as average intracluster point-to-point distance (scatter). We demonstrate the superiority of BiGC over state-of-the-art clustering algorithms (including center-based and multiobjective techniques) through detailed experimentation using standard cluster validity criteria on several benchmark data sets. We also show that BiGC satisfies key clustering properties such as order independence, scale invariance, and richness.
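The two objectives named above can be computed directly for any candidate clustering; this is our reading of the abstract's definitions (mean point-to-centroid distance for potential, mean intracluster pairwise distance for scatter), not the authors' code:

```python
import math

def potential(clusters):
    """Mean distance from each point to its cluster centroid."""
    total, n = 0.0, 0
    for pts in clusters:
        cx = [sum(c) / len(pts) for c in zip(*pts)]  # centroid
        total += sum(math.dist(p, cx) for p in pts)
        n += len(pts)
    return total / n

def scatter(clusters):
    """Mean pairwise distance within clusters."""
    total, pairs = 0.0, 0
    for pts in clusters:
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                total += math.dist(pts[i], pts[j])
                pairs += 1
    return total / pairs if pairs else 0.0

good = [[(0, 0), (0, 1)], [(5, 5), (5, 6)]]   # tight clusters
bad  = [[(0, 0), (5, 5)], [(0, 1), (5, 6)]]   # mixed clusters
print(potential(good) < potential(bad), scatter(good) < scatter(bad))
```

A biobjective method like BiGC seeks clusterings that score well on both measures simultaneously rather than optimizing either alone.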