896 results for Multi-market oligopolies, networks, externalities
Abstract:
Business angels are natural persons who provide equity financing to young enterprises and gain ownership stakes in them. They are usually anonymous investors who operate in the background of the companies. Their distinguishing feature is that, beyond funding the enterprises, they can draw on their business experience to contribute to the companies' success with specialized expertise and strategic support. Because of the asymmetric information between angels and companies, matching the two sides is difficult (Becsky-Nagy – Fazekas 2015), and the fact that angel investors prefer anonymity makes it harder for entrepreneurs to obtain informal venture capital. The primary aim of the various types of business angel organizations and networks is to ease this matching process by intermediating between the two parties. The role of these organizations in the informal venture capital market is growing relative to individually operating angels. Recognition of their economic importance has led many governments to support them, and public initiatives aimed at establishing such intermediary organizations have contributed to the institutionalization of business angels. Through a characterization of business angels, this study focuses on the evolution of these informational intermediaries and their development paths, with regard to international trends and the current situation of Hungarian business angels and angel networks.
Abstract:
The Internet has become an integral part of our nation's critical socio-economic infrastructure. With its heightened use and growing complexity, however, organizations are at greater risk of cyber crime. To aid in the investigation of crimes committed on or via the Internet, a network forensics analysis tool pulls together the needed digital evidence. It provides a platform for deep network analysis by capturing, recording and analyzing network events to identify the source of a security attack or other information security incident. Existing network forensics work has focused mostly on the Internet and fixed networks, but the exponential growth and use of wireless technologies, coupled with their unprecedented characteristics, necessitates the development of new network forensic analysis tools. This dissertation fostered the emergence of a new research field in cellular and ad-hoc network forensics. It was one of the first works to identify this problem and to offer fundamental techniques and tools that laid the groundwork for future research. In particular, it introduced novel methods to record network incidents and to report logged incidents. For recording incidents, location is essential to documenting network incidents; however, in network topology spaces, location cannot be measured because there is no distance metric. A novel solution was therefore proposed to label the locations of nodes within network topology spaces and then to authenticate the identity of nodes in ad hoc environments. For reporting logged incidents, a novel technique based on Distributed Hash Tables (DHTs) was adopted. Although the direct use of DHTs for reporting logged incidents would result in uncontrollably recursive traffic, a new mechanism was introduced that overcomes this recursion. These logging and reporting techniques aided forensics over cellular and ad-hoc networks, which in turn increased the ability to track and trace attacks to their source. They were a starting point for further research and development toward equipping future ad hoc networks with forensic components that complement existing security mechanisms.
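To make the DHT-based reporting idea concrete, the following is a minimal sketch (not the dissertation's implementation) of how a logged incident record could be assigned to a storage node by consistent hashing on an identifier ring; the node names and incident-ID format are illustrative assumptions.

```python
# Toy DHT placement for incident records via consistent hashing (illustrative only).
import hashlib
from bisect import bisect_left

def ring_position(key: str, ring_bits: int = 32) -> int:
    """Map an arbitrary key to a position on a 2**ring_bits identifier ring."""
    digest = hashlib.sha1(key.encode()).hexdigest()
    return int(digest, 16) % (2 ** ring_bits)

class IncidentDHT:
    """A key is stored on the first node at or after its ring position (wrapping around)."""
    def __init__(self, node_ids):
        # Sort nodes by ring position so lookups can use binary search.
        self.ring = sorted((ring_position(n), n) for n in node_ids)
        self.positions = [p for p, _ in self.ring]

    def responsible_node(self, incident_id: str) -> str:
        pos = ring_position(incident_id)
        idx = bisect_left(self.positions, pos) % len(self.ring)
        return self.ring[idx][1]

dht = IncidentDHT(["node-a", "node-b", "node-c", "node-d"])
print(dht.responsible_node("incident:2024-05-17T10:31:00Z:gtw-7"))
```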
Abstract:
Bankruptcy prediction has been a fruitful area of research. Univariate analysis and discriminant analysis were the first methodologies used. While they perform relatively well at correctly classifying bankrupt and non-bankrupt firms, their predictive ability has come into question over time. Univariate analysis misses the bigger picture that financial distress entails, and multivariate discriminant analysis requires stringent assumptions that are violated when dealing with accounting ratios and market variables. This has led to the use of more complex models such as neural networks. While prediction accuracy has improved with these more technical models, an important point is still missing. Accounting ratios are the usual discriminating variables in bankruptcy prediction, but they are backward-looking: at best, they are a current snapshot of the firm. Market variables are forward-looking, since they are determined by discounting future outcomes. Microstructure variables, such as the bid-ask spread, also contain important information: insiders are privy to more information than the retail investor, so if financial distress is looming, insiders should know before the general public. Therefore, any bankruptcy prediction model should include market and microstructure variables, and that is the focus of this dissertation. The traditional models and the newer, more technical models were tested and compared with the previous literature using accounting ratios, market variables, and microstructure variables. Our findings suggest that the more technical models are preferable, and that a mix of accounting and market variables is best at correctly classifying and predicting bankrupt firms. Based on the results, the multi-layer perceptron appears to be the most accurate model. The set of best discriminating variables includes price, the standard deviation of price, the bid-ask spread, net income to sales, working capital to total assets, and current liabilities to total assets.
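As an illustration only (not the dissertation's code or data), the sketch below fits a multi-layer perceptron on the discriminating variables named above; it assumes scikit-learn and a hypothetical DataFrame `df` with those columns and a binary `bankrupt` label.

```python
# Illustrative MLP bankruptcy classifier on accounting, market and microstructure variables.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

FEATURES = [
    "price", "price_std", "bid_ask_spread",          # market / microstructure variables
    "net_income_to_sales", "working_capital_to_ta",  # accounting ratios
    "current_liabilities_to_ta",
]

def fit_bankruptcy_mlp(df: pd.DataFrame):
    """Train an MLP on the hypothetical feature columns and report test accuracy."""
    X_train, X_test, y_train, y_test = train_test_split(
        df[FEATURES], df["bankrupt"], test_size=0.3, stratify=df["bankrupt"])
    model = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))
    model.fit(X_train, y_train)
    return model, model.score(X_test, y_test)  # fraction correctly classified
```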
Abstract:
In the finance literature, many economic theories and models have been proposed to explain and estimate the relationship between risk and return. Assuming risk aversion and rational behavior on the part of the investor, models are developed that are supposed to help form efficient portfolios, which either maximize the expected rate of return for a given level of risk or minimize risk for a given rate of return. One of the most widely used models for forming these efficient portfolios is Sharpe's Capital Asset Pricing Model (CAPM). In the development of this model it is assumed that investors have homogeneous expectations about the future probability distribution of the rates of return; that is, every investor assumes the same values of the parameters of that distribution. Homogeneity of financial volatility is likewise commonly assumed, where volatility is taken as investment risk and is usually measured by the variance of the rates of return. Typically the square root of the variance is used to define financial volatility, and it is often further assumed that the data-generating process consists of independent and identically distributed random variables, which again implies that financial volatility is measured from homogeneous time series with stationary parameters. In this dissertation, we examine these homogeneity assumptions. We provide evidence of heterogeneity in market participants' information, objectives, and expectations about the parameters of the probability distribution of prices, as revealed by differences in the empirical distributions corresponding to different time scales, which in this study are associated with different classes of investors. We also demonstrate that the statistical properties of the underlying data-generating processes, including the volatility in the rates of return, are quite heterogeneous. In other words, we provide empirical evidence against the traditional homogeneity assumptions using non-parametric wavelet analysis of trading data. The results show heterogeneity of financial volatility at different time scales, and time scale turns out to be one of the most important dimensions along which trading behavior differs. We conclude that heterogeneity, as posited by the Heterogeneous Markets Hypothesis, is the norm rather than the exception.
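A minimal sketch of the kind of scale-by-scale volatility comparison described above, assuming PyWavelets (pywt) and a one-dimensional array of returns; the wavelet choice, decomposition depth, and synthetic input are illustrative assumptions, not the dissertation's settings.

```python
# Estimate the variance of wavelet detail coefficients at each dyadic scale.
import numpy as np
import pywt

def wavelet_scale_variances(returns: np.ndarray, wavelet: str = "haar",
                            level: int = 6) -> dict:
    """Variance of detail coefficients at each dyadic scale 2**j of the returns series."""
    coeffs = pywt.wavedec(returns, wavelet, level=level)
    details = coeffs[1:]  # coeffs[0] is the coarsest approximation; the rest are details
    # Reverse so that scale index j=1 corresponds to the finest (shortest-horizon) scale.
    return {f"scale_2^{j}": float(np.var(d))
            for j, d in enumerate(reversed(details), start=1)}

rng = np.random.default_rng(0)
print(wavelet_scale_variances(rng.standard_normal(4096)))
```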
Abstract:
Ongoing debates within the professional and academic communities have raised a number of questions specific to the international audit market. This dissertation consists of three related essays that address such issues. First, I examine whether the propensity to switch between auditors of different sizes (i.e., Big 4 versus non-Big 4) changes as adoption of International Financial Reporting Standards (IFRS) becomes a more common phenomenon, arguing that smaller auditors have an opportunity to invest in the skills and training needed to enter this market. Findings suggest that clients are relatively less (more) likely to switch to (away from) a Big 4 auditor if the client's adoption of IFRS occurs in more recent years. In the second essay, I draw on these inferences and test whether the change in audit fees in the year of IFRS adoption varies over time. As the market becomes less concentrated, larger auditors become less able to demand a premium for their services. Consistent with my arguments, results suggest that the change in audit service fees declines over time, although this effect seems concentrated among the Big 4. I also find that this effect is partially attributable to a differential effect of the auditors' experience in pricing IFRS-related audit services, depending on the period in which adoption occurs. The results of these two essays offer important implications for policy debates on the costs and benefits of IFRS adoption. In the third essay, I differentiate Big 4 auditors into three classifications (Parent firms, Brand Name affiliates, and Local affiliates) and test for differences in audit fee premiums (relative to non-Big 4 auditors) and audit quality. Results suggest significant heterogeneity among the three classifications on both characteristics, which is an important consideration for future research. Overall, this dissertation provides additional insights into a variety of aspects of the global audit market.
Abstract:
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Complex network theory is a framework increasingly used in the study of air transport networks, thanks to its ability to describe the structures created by networks of flights and their influence on dynamical processes such as delay propagation. While many works consider only a fraction of the network, for instance the part created by major airports or airlines, it is not clear if and how such a sampling process biases the observed structures and processes. In this contribution, we tackle this problem by studying how some observed topological metrics depend on the way the network is reconstructed, i.e. on the rules used to sample nodes and connections. Both structural and simple dynamical properties are considered, for eight major air networks and different source datasets. Results indicate that using a subset of airports strongly distorts our perception of the network, even when only small ones are discarded; at the same time, considering a subset of airlines yields a better and more stable representation. This allows us to provide some general guidelines on how airports and connections should be sampled.
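A minimal sketch of the kind of comparison described above, assuming networkx and a hypothetical edge list of flight connections; it contrasts a few topological metrics computed on the full network with those computed on the sub-network induced by the busiest airports. The metric choice and the degree-based sampling rule are illustrative assumptions.

```python
# Compare topological metrics on a full air network vs. an airport-sampled sub-network.
import networkx as nx

def topology_summary(g: nx.Graph) -> dict:
    """A few standard structural metrics of an (undirected) airport network."""
    return {
        "nodes": g.number_of_nodes(),
        "edges": g.number_of_edges(),
        "density": nx.density(g),
        "clustering": nx.average_clustering(g),
    }

def compare_airport_sampling(edges, keep_fraction: float = 0.5):
    """Keep only the busiest airports (by degree) and compare metrics with the full graph."""
    full = nx.Graph(edges)
    by_degree = sorted(full.degree, key=lambda kv: kv[1], reverse=True)
    kept = {airport for airport, _ in by_degree[: int(len(by_degree) * keep_fraction)]}
    sampled = full.subgraph(kept).copy()
    return topology_summary(full), topology_summary(sampled)
```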
Abstract:
The article examines developments in the marketisation and privatisation of the English National Health Service, primarily since 1997. It explores the use of competition and contracting out in ancillary services and the levering of private finance into public services for capital developments through the Private Finance Initiative. A substantial part of the article examines the repeated restructuring of the health service as a market in clinical services, initially as an internal market but subsequently as a market increasingly opened up to private sector involvement. Some of the implications of market processes for NHS staff and for increased privatisation are discussed. The article examines one episode of popular resistance to these developments, namely the movement of opposition to the 2011 health and social care legislative proposals. It concludes with a discussion of the implications of these system reforms for the founding principles of the NHS and the sustainability of the service.
Abstract:
Queueing Theory is the mathematical study of queues or waiting lines. Queues abound in everyday life: in computer networks, at traffic islands, in the communication of electromagnetic signals, in telephone exchanges, at bank counters, at supermarket checkouts, in doctors' clinics, at petrol pumps, in offices where paperwork has to be processed, and in many other places. Originating with the published work of A. K. Erlang in 1909 [16] on congestion in telephone traffic, Queueing Theory has grown tremendously over the course of a century. Its wide range of applications includes Operations Research, Computer Science, Telecommunications, Traffic Engineering, Reliability Theory, and more.
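As a worked example of the kind of congestion result that grew out of Erlang's telephone-traffic work, his classical B formula gives the blocking probability of an M/M/m/m loss system with offered load E = λ/μ and m servers; this is standard queueing theory, included here only for illustration.

```latex
% Erlang B: probability that an arriving call finds all m trunks busy,
% for offered load E and m servers, with a small numerical example.
\[
  B(E, m) \;=\; \frac{E^{m}/m!}{\sum_{k=0}^{m} E^{k}/k!},
  \qquad
  \text{e.g. } B(2, 3) \;=\; \frac{2^{3}/3!}{1 + 2 + 2 + \tfrac{4}{3}}
  \;=\; \frac{4/3}{19/3} \;=\; \frac{4}{19} \;\approx\; 0.21 .
\]
```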
Abstract:
Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach substantially improves on the state of the art, making it a valuable solution for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous unreliable networks. The framework is then used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios. Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem for which we design a distributed method inspired by the approach for general MILPs, and show its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
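To fix ideas, a generic "constraint-coupled" problem and the agent subproblem produced by primal decomposition can be written as follows; this is a schematic form with assumed symbols (f_i, g_i, X_i, b, y_i), not the thesis's exact notation.

```latex
% Constraint-coupled problem over N agents: local costs and local sets,
% linked only through a shared coupling constraint on a resource b.
\[
  \min_{x_1,\dots,x_N} \;\sum_{i=1}^{N} f_i(x_i)
  \quad \text{s.t.} \quad
  \sum_{i=1}^{N} g_i(x_i) \le b, \qquad x_i \in X_i, \;\; i = 1,\dots,N .
\]
% Primal decomposition assigns each agent an allocation y_i with \sum_i y_i = b;
% agent i then solves its own local subproblem
\[
  p_i(y_i) \;=\; \min_{x_i \in X_i} \; f_i(x_i)
  \quad \text{s.t.} \quad g_i(x_i) \le y_i ,
\]
% and the allocations y_i are iteratively updated (e.g. by subgradient steps on the p_i).
```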
Abstract:
There is an increasing need to treat phenol-contaminated effluents with advanced oxidation processes (AOPs) to minimize their impact on the environment as well as on the bacteriological populations of other wastewater treatment systems. One of the most promising AOPs is the Fenton process, which relies on the Fenton reaction. Nevertheless, there are no systematic studies on Fenton reactor networks. The objective of this paper is to develop a strategy for the optimal synthesis of Fenton reactor networks. The strategy is based on a superstructure optimization approach formulated as a mixed integer non-linear programming (MINLP) model. Network superstructures with multiple Fenton reactors are optimized with the objective of minimizing the sum of capital, operation and depreciation costs of the effluent treatment system. The optimal solutions provide the reactor volumes and network configuration, as well as the quantities of reactants used in the Fenton process. Examples based on a case study show that multi-reactor networks yield decreases of up to 45% in the overall costs of the treatment plant. (C) 2010 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved.
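Schematically, a superstructure MINLP of the kind described above selects which candidate reactors to build and sizes them; the form below is an illustrative sketch with assumed symbols (binary selection y_r, volume V_r, reagent dose d_r), not the paper's actual model.

```latex
% Schematic superstructure MINLP: minimize capital, operating and depreciation
% costs over the candidate reactor set R, with reactors built only if selected.
\[
  \min_{y,\,V,\,d} \;\; \sum_{r \in \mathcal{R}}
  \Big( c^{\mathrm{cap}}_r(V_r)\, y_r + c^{\mathrm{op}}_r(d_r) + c^{\mathrm{dep}}_r\, y_r \Big)
  \quad \text{s.t.} \quad
  V_r \le V^{\max} y_r, \;\; y_r \in \{0,1\},
\]
% plus mass balances over the network and limits on the phenol concentration of the discharge.
```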
Abstract:
In this paper, we deal with a generalized multi-period mean-variance portfolio selection problem with market parameters subject to Markov random regime switching. Problems of this kind have recently been considered in the literature for control over bankruptcy, for cases in which there are no jumps in market parameters (see [Zhu, S. S., Li, D., & Wang, S. Y. (2004). Risk control over bankruptcy in dynamic portfolio selection: A generalized mean variance formulation. IEEE Transactions on Automatic Control, 49, 447-457]). We present necessary and sufficient conditions for obtaining an optimal control policy for this Markovian generalized multi-period mean-variance problem, based on a set of interconnected Riccati difference equations and on a set of other recursive equations. Some closed formulas are also derived for two special cases, extending previous results in the literature. We apply the results to a numerical example with real data for risk control over bankruptcy in a dynamic portfolio selection problem with Markov jumps. (C) 2008 Elsevier Ltd. All rights reserved.
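For orientation, a standard multi-period mean-variance problem with regime switching can be written schematically as below, where x_k is wealth, u_k the amounts invested in the risky assets, θ_k the Markov regime, r^0 the riskless return and r the vector of risky returns; this is a generic textbook-style form, and the paper's exact generalized formulation may differ.

```latex
% Schematic multi-period mean-variance objective with Markov regime switching.
\[
  \max_{u_0,\dots,u_{T-1}} \;\; \mathbb{E}[x_T] \;-\; \gamma\, \mathrm{Var}(x_T),
  \qquad
  x_{k+1} \;=\; r^{0}(\theta_k)\, x_k
  \;+\; \big( r(\theta_k) - r^{0}(\theta_k)\,\mathbf{1} \big)^{\!\top} u_k ,
\]
% where \gamma > 0 trades off expected terminal wealth against its variance.
```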
Abstract:
Stability of matchings was proved to be a new cooperative equilibrium concept in Sotomayor (Dynamics and equilibrium: essays in honor of D. Gale, 1992). That paper introduced the innovation of treating the payoff of a player with a quota greater than one as multi-dimensional. This is done for the many-to-many matching model with additively separable utilities, for which the stability concept is defined. It is then proved, via linear programming, that the set of stable outcomes is nonempty and may be strictly larger than the set of dual solutions and strictly smaller than the core. The present paper defines a general concept of stability and shows that it is a natural solution concept, stronger than the core, for a much more general coalitional game than a matching game. Instead of mutual agreements inside partnerships, players are allowed to make collective agreements inside coalitions of any size and to distribute their labor among them. A collective agreement determines the level of labor at which the coalition operates and the division, among its members, of the income generated by the coalition. An allocation specifies a set of collective agreements for each player.
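For reference, the linear program typically associated with a many-to-many matching under additively separable values a_{ij} and quotas q_i, s_j can be sketched as below; this is an illustrative, generic form (the symbols are assumptions, not the paper's notation), and per the abstract its dual solutions form only a subset of the stable outcomes.

```latex
% Generic LP relaxation of a many-to-many matching with quotas on both sides.
\[
  \max_{x \ge 0} \;\; \sum_{i,j} a_{ij}\, x_{ij}
  \quad \text{s.t.} \quad
  \sum_{j} x_{ij} \le q_i \;\;\forall i, \qquad
  \sum_{i} x_{ij} \le s_j \;\;\forall j .
\]
```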