107 results for Causal networks methodology
Abstract:
This paper considers a general and informationally efficient approach to determine the optimal access pricing rule for interconnected networks. It shows that there exists a simple rule that achieves the Ramsey outcome as the unique equilibrium when networks compete in linear prices without network-based price discrimination. The approach is informationally efficient in the sense that the regulator is required to know only the marginal cost structure, i.e. the marginal cost of making and terminating a call. The approach is general in that access prices can depend not only on the marginal costs but also on the retail prices, which can be observed by consumers and therefore by the regulator as well. In particular, I consider the set of linear access pricing rules which includes any fixed access price, the Efficient Component Pricing Rule (ECPR) and the Modified ECPR as special cases. I show that in this set, there is a unique rule that implements the Ramsey outcome as the unique equilibrium independently of the underlying demand conditions.
Abstract:
While markets are often centralized, in many other cases agents in one role can only negotiate with a proper subset of the agents in the complementary role. There may be proximity issues or restricted communication flows. For example, information may be transmitted only through word-of-mouth, as is often the case for job openings, business opportunities, and confidential transactions. Bargaining can be considered to occur over a network that summarizes the structure of linkages among people. We conduct an alternating-offer bargaining experiment using separate simple networks, which are then joined during the session by an additional link. The results diverge sharply depending on how this connection is made. Payoffs can be systematically affected even for agents who are not connected by the new link. We use a graph-theoretic analysis to show that any two-sided network can be decomposed into simple networks of three types, so that our result can be generalized to more complex bargaining environments. Participants appear to grasp the essential characteristics of the networks and we observe a rather consistently high level of bargaining efficiency.
Abstract:
We address the performance optimization problem in a single-station multiclass queueing network with changeover times by means of the achievable region approach. This approach seeks to obtain performance bounds and scheduling policies from the solution of a mathematical program over a relaxation of the system's performance region. Relaxed formulations (including linear, convex, nonconvex and positive semidefinite constraints) of this region are developed by formulating equilibrium relations satisfied by the system, with the help of Palm calculus. Our contributions include: (1) new constraints formulating equilibrium relations on server dynamics; (2) a flow conservation interpretation of the constraints previously derived by the potential function method; (3) new positive semidefinite constraints; (4) new work decomposition laws for single-station multiclass queueing networks, which yield new convex constraints; (5) a unified buffer occupancy method of performance analysis obtained from the constraints; (6) heuristic scheduling policies from the solution of the relaxations.
Abstract:
We address the problem of scheduling a multi-station multiclass queueing network (MQNET) with server changeover times to minimize steady-state mean job holding costs. We present new lower bounds on the best achievable cost that emerge as the values of mathematical programming problems (linear, semidefinite, and convex) over relaxed formulations of the system's achievable performance region. The constraints on achievable performance defining these formulations are obtained by formulating the system's equilibrium relations. Our contributions include: (1) a flow conservation interpretation and closed formulae for the constraints previously derived by the potential function method; (2) new work decomposition laws for MQNETs; (3) new constraints (linear, convex, and semidefinite) on the performance region of first and second moments of queue lengths for MQNETs; (4) a fast bound for an MQNET with N customer classes computed in N steps; (5) two heuristic scheduling policies: a priority-index policy, and a policy extracted from the solution of a linear programming relaxation.
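A priority-index policy of the kind mentioned in contribution (5) can be sketched with the classical c-mu rule, which serves the non-empty class with the largest product of holding cost and service rate. The costs, rates and queue lengths below are hypothetical, and the paper's actual indices also account for changeover times, which this sketch ignores:

```python
# Hypothetical holding costs c_i and service rates mu_i for three job classes
c = [3.0, 1.0, 2.0]
mu = [0.5, 2.0, 1.0]

def next_class(queue_lengths, c, mu):
    """Serve the non-empty class with the largest index c_i * mu_i (c-mu rule)."""
    best, best_idx = None, None
    for i, n in enumerate(queue_lengths):
        if n > 0:
            idx = c[i] * mu[i]
            if best is None or idx > best:
                best, best_idx = idx, i
    return best_idx  # None when all queues are empty

chosen = next_class([2, 0, 5], c, mu)  # indices: 1.5, (empty), 2.0
```

Here class 2 is served because its index (2.0) beats class 0's (1.5) and class 1 has no jobs waiting.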
Abstract:
Two finite extensive-form games are empirically equivalent when the empirical distribution on action profiles generated by every behavior strategy in one can also be generated by an appropriately chosen behavior strategy in the other. This paper provides a characterization of empirical equivalence. The central idea is to relate a game's information structure to the conditional independencies in the empirical distributions it generates. We present a new analytical device, the influence opportunity diagram of a game, describe how such a diagram is constructed for a given extensive-form game, and demonstrate that it provides a complete summary of the information needed to test empirical equivalence between two games.
Abstract:
We formulate a knowledge-based model of direct investment through mergers and acquisitions. M&As are realized to create comparative advantages by exploiting international synergies and appropriating local technology spillovers that require geographical proximity, but they can also represent a strategic response to the presence of a multinational rival. The takeover fee paid tends to increase with the strength of local spillovers, which can thus work against multinationalization. The seller's bargaining power increases the takeover fee but does not influence the investment decision. We characterize the losers and winners from multinationalization, and show that foreign investment stimulates research but could result in a synergy trap that reduces multinationals' profits.
Abstract:
Achieving high quality in final products in the pharmaceutical industry is a challenge that requires control and supervision of all the manufacturing steps. This requirement has created the need to develop fast and accurate analytical methods. Near-infrared (NIR) spectroscopy, together with chemometrics, fulfills this growing demand. The speed with which it provides relevant information and the versatility of its application to different types of samples make this combination one of the most appropriate techniques. This study focuses on the development of a calibration model able to determine amounts of API in industrial granulates using NIR spectroscopy, chemometrics and process spectra methodology.
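A rough illustration of what such a calibration model does is given below. The synthetic "spectra", the pure-component profile, and the plain least-squares fit are all assumptions for the sketch; a real NIR calibration would typically use PLS regression to handle the collinearity of spectral channels:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_wavelengths = 60, 50

# Hypothetical training set: each spectrum is the API's pure-component
# signal scaled by its concentration, plus instrument noise
api = rng.uniform(5.0, 15.0, n_samples)            # reference API content (%)
pure = np.sin(np.linspace(0.0, 3.0, n_wavelengths))  # stand-in pure spectrum
spectra = np.outer(api, pure) + 0.01 * rng.normal(size=(n_samples, n_wavelengths))

# Calibration: regress the reference values on the spectra (with intercept)
X = np.hstack([spectra, np.ones((n_samples, 1))])
beta, *_ = np.linalg.lstsq(X, api, rcond=None)

predicted = X @ beta
rmsec = float(np.sqrt(np.mean((predicted - api) ** 2)))  # calibration error
```

A calibrated model like this predicts API content of a new granulate batch from its spectrum alone, replacing a slower wet-chemistry assay.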
Abstract:
This paper describes an optimized model to support QoS by means of congestion minimization on LSPs (Label Switched Paths). To build this model, we start from a CFA (Capacity and Flow Allocation) model. Since that model does not consider the buffer size when calculating the capacity cost, our model, named BCA (Buffer Capacity Allocation), takes this issue into account and improves on CFA's performance. To test our proposal, we ran several simulations; the results show that the BCA model minimizes LSP congestion and distributes flows uniformly over the network.
Abstract:
This work proposes novel network analysis techniques for multivariate time series. We define the network of a multivariate time series as a graph where vertices denote the components of the process and edges denote non-zero long-run partial correlations. We then introduce a two-step LASSO procedure, called NETS, to estimate high-dimensional sparse long-run partial correlation networks. This approach is based on a VAR approximation of the process and allows us to decompose the long-run linkages into the contributions of the dynamic and contemporaneous dependence relations of the system. The large-sample properties of the estimator are analysed and we establish conditions for consistent selection and estimation of the non-zero long-run partial correlations. The methodology is illustrated with an application to a panel of U.S. blue chips.
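The link between a VAR approximation and long-run partial correlations can be sketched as follows. This is not the NETS estimator itself (NETS uses a two-step LASSO for high-dimensional sparsity); the sketch fits a small VAR(1) by plain least squares and then reads the network off the inverse of the implied long-run covariance. The simulated process and its coefficients are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 4, 2000
A = np.diag([0.5, 0.4, 0.3, 0.2])
A[0, 1] = 0.3                      # one true dynamic link: series 1 -> series 0
x = np.zeros((T, n))
for t in range(1, T):
    x[t] = x[t - 1] @ A.T + rng.normal(size=n)

# Step 1: fit the VAR(1) (NETS would use a LASSO here to enforce sparsity)
X, Y = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
U = Y - X @ A_hat.T
Sigma_u = U.T @ U / len(U)          # contemporaneous (residual) covariance

# Step 2: long-run covariance (I - A)^{-1} Sigma_u (I - A)^{-T};
# long-run partial correlations come from its inverse (the precision matrix)
B = np.linalg.inv(np.eye(n) - A_hat)
K = np.linalg.inv(B @ Sigma_u @ B.T)
d = np.sqrt(np.diag(K))
pcorr = -K / np.outer(d, d)
np.fill_diagonal(pcorr, 1.0)
# network edges = pairs (i, j) with non-zero (here: large) pcorr[i, j]
```

The decomposition in step 2 shows how both the dynamic part (A_hat) and the contemporaneous part (Sigma_u) contribute to the long-run linkages.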
Abstract:
During the period 1996-2000, forty-three heavy rainfall events were detected in the Internal Basins of Catalonia (northeastern Spain). Most of these events caused floods and serious damage. This high number points to the need for a methodology to classify them on the basis of their surface rainfall distribution, their internal organization and their physical features. The aim of this paper is to present a methodology for systematically analyzing the convective structures responsible for those heavy rainfall events on the basis of the information supplied by meteorological radar. The proposed methodology is as follows. Firstly, the rainfall intensity and the surface rainfall pattern are analyzed on the basis of the raingauge data. Secondly, the convective structures at the lowest level are identified and characterized using a 2-D algorithm, and the convective cells are identified using a 3-D procedure that looks for the reflectivity cores in every radar volume. Thirdly, the convective cells (3-D) are associated with the 2-D structures (convective rainfall areas). This methodology has been applied to the 43 heavy rainfall events using the meteorological radar located near Barcelona and the SAIH automatic raingauge network.
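The core of the 2-D identification step can be sketched as thresholding a reflectivity scan and labeling connected areas. The toy field and the dBZ threshold below are assumptions; the paper's actual algorithm works on real radar volumes and uses its own criteria:

```python
import numpy as np
from collections import deque

# Toy low-level reflectivity scan in dBZ (stand-in for one radar image)
Z = np.array([
    [10, 45, 48, 10,  5],
    [12, 50, 46,  8,  5],
    [ 5,  8, 10, 44, 47],
    [ 5,  5,  9, 49, 10],
])
THRESH = 43  # hypothetical dBZ threshold marking convective rainfall

def label_structures(field, thresh):
    """Label 4-connected regions of pixels with reflectivity >= thresh."""
    mask = field >= thresh
    labels = np.zeros(field.shape, dtype=int)
    rows, cols = field.shape
    current = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i, j] and labels[i, j] == 0:
                current += 1            # start a new structure, flood-fill it
                labels[i, j] = current
                q = deque([(i, j)])
                while q:
                    a, b = q.popleft()
                    for na, nb in ((a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)):
                        if (0 <= na < rows and 0 <= nb < cols
                                and mask[na, nb] and labels[na, nb] == 0):
                            labels[na, nb] = current
                            q.append((na, nb))
    return labels

labels = label_structures(Z, THRESH)
n_structures = int(labels.max())  # two convective areas in this toy scan
```

The 3-D step would then search for reflectivity cores within each labeled column of the radar volume, and the association step matches those cells to these 2-D areas.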
Abstract:
My research in live drawing and new technologies uses a combination of a live human figure in composition, overlaid with a digital projection of a second human figure. The aim is to explore, amplify and thoroughly analyse the search for distinctive identities and graphic languages of representation for live and projected models.
Abstract:
Capital-intensive industries in specialized niches of production have constituted solid ground for family firms in Spain, as evidenced by the experience of the iron and steel wire industries between 1870 and 2000. The embeddedness of these firms in their local and regional environments has allowed the creation of networks that, together with favourable institutional conditions, significantly explain the dominance of family entrepreneurship in iron and steel wire manufacturing in Spain until the end of the 20th century. The dominance of family firms at the regional level has not been an obstacle to innovation in wire manufacturing in Spain, which has taken place even when institutional conditions blocked innovation and traditional networking. Therefore, economic theories about the difficulties dynastic family firms may have performing appropriately in science-based industries must be questioned.
Abstract:
In this article we address the use and importance of the statistical tools employed mainly in medical studies in the fields of oncology and hematology, but applicable to many other fields, whether medical, experimental or industrial. The objective of this work is to present, in a clear and precise manner, the statistical methodology needed to analyze the data obtained in such studies rigorously and concisely with respect to the working hypotheses posed by the researchers. The measure of response to treatment and the type of study chosen determine the statistical methods to be used during the analysis of the study data, as well as the sample size. Through the correct application of statistical analysis and adequate planning, it can be determined whether the relationship found between exposure to a treatment and an outcome is due to chance or, on the contrary, reflects a non-random association that could establish a causal relationship. We review the main types of design of the most commonly used medical studies, such as clinical trials and observational studies (cohort, case-control, prevalence and ecological studies). We also include a section on the calculation of the sample size of a study and how to compute it, on which statistical test should be used, on measures of effect size such as the odds ratio (OR) and relative risk (RR), and on survival analysis. Examples are presented in most sections of the article, along with the most relevant bibliography.
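The two effect-size measures discussed, the odds ratio (OR) and the relative risk (RR), are both computed from a 2x2 exposure/outcome table. The counts below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical 2x2 table from a cohort study:
#                outcome   no outcome
# exposed           30         70
# unexposed         10         90
a, b, c, d = 30, 70, 10, 90

risk_exposed = a / (a + b)                     # 30/100 = 0.30
risk_unexposed = c / (c + d)                   # 10/100 = 0.10
relative_risk = risk_exposed / risk_unexposed  # RR = 3.0

odds_ratio = (a * d) / (b * c)                 # OR = (30*90)/(70*10) ≈ 3.86
```

Note that the OR overstates the RR when the outcome is common, which is one reason the choice of study design constrains which measure is appropriate: case-control studies can estimate the OR but not the RR directly.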
Abstract:
Two trends that presently exist in relation to the concept of Paleontology are analyzed, pointing out some of the aspects that have a negative influence. Various reflections are made based on examples of some of the principal points of the paleontological method, such as the influence of point sampling, the meaning of size-frequency distributions and subjectivity in the identification of fossils. Topics that have a marked repercussion on diverse aspects of Paleontology are discussed.
Abstract:
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. We focus in this study on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies.
Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures where we suggest that excitatory connections are characterized by an elevated level of clustering compared to a random graph (although not extreme) and can be markedly non-local.
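The basic quantity behind this approach can be sketched with a plug-in Transfer Entropy estimate on binary time series with one-step histories. The simulated coupling below is an assumption, and the sketch omits the paper's refinements (conditioning on the global mean activity, handling of calcium fluorescence): it only shows that TE picks up a directed influence and stays near zero for an independent series:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 20000
y = rng.integers(0, 2, T)          # driver series
z = rng.integers(0, 2, T)          # independent control series
x = np.empty(T, dtype=int)
x[0] = 0
flip = rng.random(T) < 0.2         # x copies y's previous value 80% of the time
x[1:] = np.where(flip[1:], 1 - y[:-1], y[:-1])

def transfer_entropy(src, dst):
    """Plug-in TE(src -> dst) in bits, with one-step binary histories."""
    p_xyz = np.zeros((2, 2, 2))    # joint counts of (dst_{t+1}, dst_t, src_t)
    for a, b, c in zip(dst[1:], dst[:-1], src[:-1]):
        p_xyz[a, b, c] += 1
    p_xyz /= p_xyz.sum()
    p_bc = p_xyz.sum(axis=0)       # p(dst_t, src_t)
    p_ab = p_xyz.sum(axis=2)       # p(dst_{t+1}, dst_t)
    p_b = p_xyz.sum(axis=(0, 2))   # p(dst_t)
    te = 0.0
    for a in range(2):
        for b in range(2):
            for c in range(2):
                if p_xyz[a, b, c] > 0:
                    te += p_xyz[a, b, c] * np.log2(
                        p_xyz[a, b, c] * p_b[b] / (p_bc[b, c] * p_ab[a, b]))
    return te

te_coupled = transfer_entropy(y, x)  # large: y causally drives x
te_control = transfer_entropy(z, x)  # near zero: z is independent of x
```

Because TE is asymmetric in its arguments, it distinguishes the direction of influence, which symmetric measures such as cross-correlation cannot do on their own.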