987 results for REACTION-DIFFUSION PROBLEMS
Abstract:
We study the existence theory for parabolic variational inequalities in weighted L2 spaces with respect to excessive measures associated with a transition semigroup. We characterize the value function of optimal stopping problems for finite- and infinite-dimensional diffusions as a generalized solution of such a variational inequality. The weighted L2 setting allows us to cover some singular cases, such as optimal stopping for stochastic equations with degenerate diffusion coefficient. As an application of the theory, we consider the pricing of American-style contingent claims. Among others, we treat the cases of assets with stochastic volatility and with path-dependent payoffs.
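For orientation, in the classical (unweighted) setting the value function u of a finite-horizon optimal stopping problem with payoff ψ and generator L is typically characterized by a parabolic variational inequality of the following form; the notation is illustrative and the abstract's weighted L2 formulation generalizes it:

```latex
\[
\begin{aligned}
& u \ge \psi, \qquad \partial_t u + L u \le 0, \\
& \big(\partial_t u + L u\big)\,\big(u - \psi\big) = 0, \qquad u(T,\cdot) = \psi .
\end{aligned}
\]
```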
Abstract:
We propose a mixed finite element method for a class of nonlinear diffusion equations, which is based on their interpretation as gradient flows in optimal transportation metrics. We introduce an appropriate linearization of the optimal transport problem, which leads to a mixed symmetric formulation. This formulation preserves the maximum principle for the semi-discrete scheme as well as for the fully discrete scheme for a certain class of problems. In addition, solutions of the mixed formulation maintain exponential convergence in the relative entropy towards the steady state in the case of a nonlinear Fokker-Planck equation with uniformly convex potential. We demonstrate the behavior of the proposed scheme with 2D simulations of the porous medium equations and blow-up questions in the Patlak-Keller-Segel model.
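As background (standard notation, not quoted from the paper): the linear Fokker-Planck equation with potential V can be written as a Wasserstein gradient flow of the relative entropy, and for a uniformly convex potential the relative entropy to the steady state ρ∞ ∝ e^{-V} decays exponentially:

```latex
\[
\partial_t \rho = \nabla \cdot \big( \nabla \rho + \rho\, \nabla V \big),
\qquad
\mathcal{H}\big(\rho_t \,\big|\, \rho_\infty\big)
\;\le\; e^{-2\lambda t}\, \mathcal{H}\big(\rho_0 \,\big|\, \rho_\infty\big)
\quad \text{if } \nabla^2 V \succeq \lambda I,\ \lambda > 0 ,
\]
```

where H(ρ|ρ∞) = ∫ ρ log(ρ/ρ∞) dx is the relative entropy preserved, at the discrete level, by the scheme described in the abstract.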
Abstract:
This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application that runs on top of P2P networks; typical examples are video streaming and file sharing. While attractive because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they run. Indeed, building an application on top of a P2P network usually means building an application in which peers contribute resources in exchange for the ability to use the application. For example, in a P2P file-sharing application, while a user is downloading a file, the application is in parallel serving that file to other users. Such peers may have limited hardware resources (CPU, bandwidth, memory), or the end user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links are subject to message losses and processes are subject to crashes. To support P2P applications, this thesis proposes a set of services that address some of the underlying constraints of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can serve as the basis of several P2P applications. Our data replication solution increases availability and reduces communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. They typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer. Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment, and each protocol is evaluated through a set of simulations. The adaptiveness of our solutions comes from the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that gives us an approximate view of the system or of part of it. This approximate view includes the topology and the reliability of the components, expressed in probabilistic terms. To adapt to the constraints of the underlying system, the proposed broadcast solutions route messages through tree overlays so as to maximize broadcast reliability. Here, broadcast reliability is expressed as a function of the reliability of the selected paths and of the use of the available resources. These resources are modeled as message quotas reflecting the receiving and sending capacities of each node. To allow deployment in a large-scale system, we take the memory available at each process into account by limiting the view it has to maintain of the system. Using this partial view, we propose three scalable broadcast algorithms, based on a propagation overlay that tends towards the global tree overlay and adapts to some of the constraints of the underlying system.
At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, the solution takes the unreliability of the environment into account in order to maximize the reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes the communication cost.
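As an illustration of the reliability-maximizing routing idea described above (a minimal sketch under assumed inputs, namely per-link delivery probabilities; this is not the algorithm developed in the thesis): a maximum-reliability tree can be obtained by running a shortest-path computation on the negative logarithms of the link reliabilities.

```python
import heapq
import math

def max_reliability_tree(links, root):
    """Build a tree rooted at `root` that maximizes the product of link
    reliabilities along each root-to-node path (Dijkstra on -log p).
    `links` maps (u, v) -> delivery probability in (0, 1]; edges are undirected.
    Illustrative sketch only, not the thesis's broadcast algorithm."""
    graph = {}
    for (u, v), p in links.items():
        w = -math.log(p)                      # reliable links get small weights
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))

    best = {root: 0.0}                        # -log of best path reliability
    parent = {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > best.get(u, math.inf):
            continue
        for v, w in graph.get(u, []):
            if d + w < best.get(v, math.inf):
                best[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (d + w, v))
    # `parent` encodes the tree; path reliability to node n is exp(-best[n])
    return parent, {n: math.exp(-d) for n, d in best.items()}

links = {("A", "B"): 0.99, ("B", "C"): 0.90, ("A", "C"): 0.80}
tree, reliability = max_reliability_tree(links, "A")
print(tree, reliability)   # C is reached via B, since 0.99 * 0.90 > 0.80
```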
Abstract:
This article builds on the recent policy diffusion literature and attempts to overcome one of its major problems, namely the lack of a coherent theoretical framework. The literature defines policy diffusion as a process where policy choices are interdependent, and identifies several diffusion mechanisms that specify the link between the policy choices of the various actors. As these mechanisms are grounded in different theories, theoretical accounts of diffusion currently have little internal coherence. In this article we put forward an expected-utility model of policy change that is able to subsume all the diffusion mechanisms. We argue that the expected utility of a policy depends on both its effectiveness and the payoffs it yields, and we show that the various diffusion mechanisms operate by altering these two parameters. Each mechanism affects one of the two parameters, and does so in distinct ways. To account for aggregate patterns of diffusion, we embed our model in a simple threshold model of diffusion. Given the high complexity of the resulting process, strong analytical conclusions on aggregate patterns cannot be drawn without more extensive analysis, which is beyond the scope of this article. However, preliminary considerations indicate that a wide range of diffusion processes may exist and that convergence is only one possible outcome.
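To make the aggregate mechanism concrete, here is a minimal, purely illustrative sketch of a threshold model of diffusion; the parameterization is assumed, not taken from the article. Each government adopts the policy once its expected utility, modeled here as a baseline term plus a term that grows with the share of prior adopters, exceeds a private threshold.

```python
import random

def threshold_diffusion(n=1000, rounds=30, base_utility=0.2, peer_weight=0.8, seed=1):
    """Toy threshold model of policy diffusion: government i adopts once the
    expected utility of the policy (baseline plus a term increasing with the
    share of prior adopters) exceeds its private threshold.
    Illustrative only; not the article's model."""
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n)]
    adopted = [False] * n
    shares = []
    for _ in range(rounds):
        share = sum(adopted) / n
        shares.append(share)
        expected_utility = base_utility + peer_weight * share
        adopted = [a or (expected_utility >= t) for a, t in zip(adopted, thresholds)]
    return shares

print(threshold_diffusion()[:5])   # adoption share rising round by round
```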
Abstract:
An epidemic model is formulated by a reaction-diffusion system where the spatial pattern formation is driven by cross-diffusion. The reaction terms describe the local dynamics of susceptible and infected species, whereas the diffusion terms account for the spatial distribution dynamics. For both self-diffusion and cross-diffusion, nonlinear constitutive assumptions are suggested. To simulate the pattern formation, two finite volume formulations are proposed, which employ a conservative and a non-conservative discretization, respectively. An efficient simulation is obtained by a fully adaptive multiresolution strategy. Numerical examples illustrate the impact of the cross-diffusion on the pattern formation.
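For reference, a cross-diffusion epidemic system of the kind described can be written schematically as follows; this is a generic form with nonlinear self- and cross-diffusion coefficients, and the paper's specific constitutive choices are not reproduced here:

```latex
\[
\begin{aligned}
\partial_t S &= \nabla \cdot \big( d_S(S)\,\nabla S + c_S(S)\,\nabla I \big) + f(S, I),\\
\partial_t I &= \nabla \cdot \big( d_I(I)\,\nabla I + c_I(I)\,\nabla S \big) + g(S, I),
\end{aligned}
\]
```

where S and I denote the susceptible and infected densities, d_S, d_I are self-diffusion coefficients, c_S, c_I are cross-diffusion coefficients, and f, g encode the local epidemic dynamics.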
Abstract:
Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure had been derived only for stochastic volatility models without jumps, and it therefore became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and the variance process. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, and of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of the stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if the S&P500 index is to be modelled by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter proves that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, due to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
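To illustrate the general idea of characteristic-function-based estimation (a minimal sketch, not the thesis's Continuous ECF procedure, which uses the joint unconditional characteristic function of the stochastic volatility jump-diffusion model): the empirical characteristic function of observed returns is matched to a parametric characteristic function by minimizing a weighted integrated squared distance. The toy model below is a plain normal distribution, assumed purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def ecf_objective(params, data, model_cf, u_grid):
    """Weighted distance between the empirical characteristic function of
    `data` and a parametric characteristic function `model_cf(u, params)`.
    Generic sketch of CF-based estimation, not the thesis's estimator."""
    emp_cf = np.array([np.mean(np.exp(1j * u * data)) for u in u_grid])
    mod_cf = np.array([model_cf(u, params) for u in u_grid])
    weights = np.exp(-0.25 * u_grid**2)        # damp high frequencies
    return np.sum(weights * np.abs(emp_cf - mod_cf) ** 2)

def normal_cf(u, params):
    # Characteristic function of N(mu, sigma^2); sigma enters squared.
    mu, sigma = params
    return np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)

rng = np.random.default_rng(0)
returns = rng.normal(0.5, 1.5, size=5000)      # synthetic "returns"
u_grid = np.linspace(-5.0, 5.0, 51)
fit = minimize(ecf_objective, x0=[0.0, 1.0],
               args=(returns, normal_cf, u_grid), method="Nelder-Mead")
print(fit.x)   # should be close to (0.5, 1.5), up to the sign of sigma
```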
Abstract:
We consider an irreversible autocatalytic conversion reaction A+B->2A under subdiffusion described by continuous-time random walks. The reactants' transformations take place independently of their motion and are described by constant rates. The analogue of this reaction in the case of normal diffusion is described by the Fisher-Kolmogorov-Petrovskii-Piskunov equation, which leads to the existence of a nonzero minimal front propagation velocity that is actually attained by the front in its stable motion. We show that for subdiffusion this minimal propagation velocity is zero, which suggests propagation failure.
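For context, in the normal-diffusion case the FKPP description referred to above reads, in its standard textbook form (not quoted verbatim from the paper):

```latex
\[
\partial_t a = D\,\partial_x^2 a + k\,a\,(1 - a),
\qquad
v_{\min} = 2\sqrt{D k},
\]
```

so that the stable front propagates at the minimal velocity v_min; the abstract's point is that the subdiffusive analogue has v_min = 0.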
Abstract:
The Feller process is a one-dimensional diffusion process with linear drift and a state-dependent diffusion coefficient that vanishes at the origin. The process remains positive, and it is this property, along with its linear character, that has made the Feller process a convenient candidate for modeling a number of phenomena ranging from single-neuron firing to the volatility of financial assets. While the general properties of the process have long been well known, less well known are properties related to level crossings, such as the first-passage and escape problems. In this work we thoroughly address these questions.
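In the notation commonly used for this process (a standard form assumed here rather than quoted from the paper), the Feller, or square-root, process solves

```latex
\[
dX_t = \big(\alpha - \beta X_t\big)\,dt + \sigma \sqrt{X_t}\, dW_t,
\qquad \alpha,\ \beta,\ \sigma > 0,
\]
```

where the diffusion coefficient σ√X vanishes at the origin and the drift is linear, matching the description above; the condition 2α ≥ σ² keeps the process strictly positive.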
Abstract:
We analyze the diffusion of a Brownian particle in a fluid under stationary flow. Using the scheme of nonequilibrium thermodynamics in phase space, we obtain the Fokker-Planck equation, which is compared with others derived from kinetic theory and projection operator techniques. This equation exhibits a violation of the fluctuation-dissipation theorem. By implementing the hydrodynamic regime described by the first moments of the nonequilibrium distribution, we find relaxation equations for the diffusion current and the pressure tensor, allowing us to arrive at a complete description of the system in the inertial and diffusion regimes. The simplicity and generality of the method we propose make it applicable to more complex situations, often encountered in problems of soft condensed matter, in which not just one but several degrees of freedom are coupled to a nonequilibrium bath.
Abstract:
The Mathematica system (version 4.0) is employed in the solution of nonlinear diffusion and convection-diffusion problems, formulated as transient one-dimensional partial differential equations with potential-dependent equation coefficients. The Generalized Integral Transform Technique (GITT) is first implemented for the hybrid numerical-analytical solution of such classes of problems, through the symbolic integral transformation and elimination of the space variable, followed by the use of the built-in Mathematica function NDSolve for handling the resulting transformed ODE system. This approach offers an error-controlled final numerical solution, through the simultaneous control of the local errors in this reliable ODE solver and of the truncation order of the proposed eigenfunction expansion. For co-validation purposes, the same built-in function NDSolve is employed in the direct solution of these partial differential equations, as made possible by the algorithms implemented in Mathematica (versions 3.0 and up) based on the method of lines. Various numerical experiments are performed and the relative merits of each approach are critically pointed out.
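To illustrate the transform-then-solve structure described above, here is a minimal sketch for an assumed linear test problem with homogeneous Dirichlet boundaries; the paper itself uses Mathematica, symbolic transforms and nonlinear, potential-dependent coefficients, whereas this Python sketch only mirrors the overall structure, with scipy's solve_ivp standing in for NDSolve: the spatial operator is eliminated by projecting onto sine eigenfunctions, the truncated ODE system is integrated numerically, and the field is reconstructed from the expansion.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Assumed model problem: u_t = u_xx on (0,1), u(0,t)=u(1,t)=0, u(x,0)=x(1-x).
# Eigenfunctions of the spatial operator: sqrt(2) sin(n pi x).
N = 12                                           # truncation order of the expansion

def phi(n, x):
    return np.sqrt(2.0) * np.sin(n * np.pi * x)

# Integral transform of the initial condition: ubar_n(0) = int_0^1 phi_n(x) u0(x) dx
u0 = lambda x: x * (1.0 - x)
ubar0 = np.array([quad(lambda x: phi(n, x) * u0(x), 0.0, 1.0)[0]
                  for n in range(1, N + 1)])

# Transformed ODE system (decoupled here because the test problem is linear;
# a potential-dependent coefficient would couple the modes):
lam = np.array([(n * np.pi) ** 2 for n in range(1, N + 1)])
sol = solve_ivp(lambda t, y: -lam * y, (0.0, 0.1), ubar0,
                t_eval=[0.1], rtol=1e-8, atol=1e-10)

# Inversion formula: u(x,t) = sum_n ubar_n(t) phi_n(x)
x = np.linspace(0.0, 1.0, 5)
u = sum(sol.y[n, -1] * phi(n + 1, x) for n in range(N))
print(u)   # diffused profile at t = 0.1, still symmetric about x = 0.5
```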
Improving the competitiveness of electrolytic Zinc process by chemical reaction engineering approach
Abstract:
This doctoral thesis describes the development work performed on the leaching and purification sections of the electrolytic zinc plant in Kokkola to increase the efficiency of these two stages, and thus the competitiveness of the plant. Since metallic zinc is a typical bulk product, improving the competitiveness of a plant is mostly a matter of decreasing unit costs. The problems in the leaching were the low recovery of valuable metals from raw materials, and the fact that the available technology offered only complicated and expensive processes to overcome this. In the purification, the main problem was the consumption of zinc powder, up to four to six times the stoichiometric demand. This reduced the capacity of the plant, as this zinc is re-circulated through the electrolysis, which is the absolute bottleneck in a zinc plant. Low selectivity gave low-grade, low-value precipitates for further processing to metallic copper, cadmium, cobalt and nickel. Knowledge of the underlying chemistry was poor, and process interruptions causing losses of zinc production were frequent. Studies on leaching comprised the kinetics of ferrite leaching and jarosite precipitation, as well as the stability of jarosite in acidic plant solutions. A breakthrough came with the finding that jarosite could precipitate under conditions where ferrite would leach satisfactorily. Based on this discovery, a one-step process for the treatment of ferrite was developed. In the plant, the new process almost doubled the recovery of zinc from ferrite in the same equipment in which the two-step jarosite process had been operated until then. In a later expansion of the plant, the investment savings were substantial compared to the other technologies available. In the solution purification, the key finding was that Co, Ni and Cu formed specific arsenides in the “hot arsenic zinc dust” step. This was used to develop a three-step purification stage based on fluidized bed technology in all three steps, i.e. the removal of Cu, Co and Cd. Both precipitation rates and selectivity increased, which strongly decreased the zinc powder consumption by substantially suppressing hydrogen gas evolution. Better selectivity improved the value of the precipitates: cadmium, which caused environmental problems in the copper smelter, was reduced from the 1-3% normally reported down to 0.05%, and a cobalt cake with 15% Co was easily produced in laboratory experiments on cobalt removal. The zinc powder consumption in the plant for a solution containing Cu, Co, Ni and Cd (1000, 25, 30 and 350 mg/l, respectively) was around 1.8 g/l, i.e. only 1.4 times the stoichiometric demand, or about a 60% saving in powder consumption. Two processes for direct leaching of the concentrate under atmospheric conditions were developed, one of which was implemented in the Kokkola zinc plant. Compared to the existing pressure leach technology, savings were obtained mostly in investment. The scientific basis for the most important processes and process improvements is given in the doctoral thesis, including mathematical modeling and thermodynamic evaluation of the experimental results and of the hypotheses developed. Five of the processes developed in this research and development program were implemented in the plant and are still in operation.
Even though these processes were developed with a focus on the plant in Kokkola, they can also be implemented at low cost in most zinc plants worldwide, and thus have great significance for the development of the electrolytic zinc process in general.
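As a rough check of the powder-consumption figure quoted above (a back-of-the-envelope sketch using standard molar masses and assuming a simple one-to-one cementation stoichiometry, Zn + Me(2+) -> Zn(2+) + Me):

```python
# Stoichiometric zinc demand for cementing out Cu, Co, Ni and Cd at the
# concentrations quoted in the abstract, assuming Zn + Me(2+) -> Zn(2+) + Me.
molar_mass = {"Zn": 65.4, "Cu": 63.5, "Co": 58.9, "Ni": 58.7, "Cd": 112.4}  # g/mol
feed_mg_per_l = {"Cu": 1000, "Co": 25, "Ni": 30, "Cd": 350}

moles = sum(c / 1000 / molar_mass[m] for m, c in feed_mg_per_l.items())  # mol/l
stoich_zn = moles * molar_mass["Zn"]          # g/l of zinc powder, stoichiometric
actual_zn = 1.8                               # g/l, figure reported in the abstract

print(round(stoich_zn, 2), round(actual_zn / stoich_zn, 2))
# ~1.30 g/l stoichiometric, so 1.8 g/l is ~1.4 times the stoichiometric demand,
# consistent with the abstract; the earlier 4-6x would have been roughly 5-8 g/l.
```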
Abstract:
The advancement of science and technology makes it clear that no single perspective is any longer sufficient to describe the true nature of any phenomenon. That is why interdisciplinary research is gaining more attention over time. An excellent example of this type of research is natural computing, which stands on the borderline between biology and computer science. The contribution of research done in natural computing is twofold: on the one hand, it sheds light on how nature works and how it processes information and, on the other hand, it provides some guidelines on how to design bio-inspired technologies. The first direction in this thesis focuses on a nature-inspired process called gene assembly in ciliates. The second one studies reaction systems, a modelling framework whose rationale is built upon the biochemical interactions happening within a cell. The process of gene assembly in ciliates has attracted a lot of attention as a research topic in the past 15 years. Two main modelling frameworks were initially proposed at the end of the 1990s to capture the gene assembly process in ciliates, namely the intermolecular model and the intramolecular model. They were followed by other model proposals, such as template-based assembly and DNA rearrangement pathway recombination models. In this thesis we are interested in a variation of the intramolecular model called the simple gene assembly model, which focuses on the simplest possible folds in the assembly process. We propose a new framework called directed overlap-inclusion (DOI) graphs to overcome the limitations that previously introduced models faced in capturing all the combinatorial details of the simple gene assembly process. We investigate a number of combinatorial properties of these graphs, including a necessary property in terms of forbidden induced subgraphs. We also introduce DOI graph-based rewriting rules that capture all the operations of the simple gene assembly model and prove that they are equivalent to the string-based formalization of the model. Reaction systems (RS) are another nature-inspired modelling framework studied in this thesis. Their rationale is based upon two main regulation mechanisms, facilitation and inhibition, which control the interactions between biochemical reactions. Reaction systems are a complementary modelling framework to traditional quantitative frameworks, focusing on explicit cause-effect relationships between reactions. The explicit formulation of the facilitation and inhibition mechanisms behind reactions, as well as the focus on interactions between reactions (rather than on the dynamics of concentrations), makes their applicability potentially wide and useful beyond biological case studies. In this thesis, we construct a reaction system model corresponding to the heat shock response mechanism based on a novel concept of dominance graph that captures the competition for resources in the ODE model. We also introduce for RS various concepts inspired by biology, e.g., mass conservation, steady state, periodicity, etc., in order to perform model checking of reaction-system-based models. We prove that the complexity of the decision problems related to these properties ranges from P through NP-completeness and coNP-completeness to PSPACE-completeness. We further focus on the mass conservation relation in an RS, introduce the conservation dependency graph to capture the relation between the species, and propose an algorithm to list the conserved sets of a given reaction system.
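To make the reaction systems formalism concrete, here is a minimal sketch of its standard set-based semantics (as introduced by Ehrenfeucht and Rozenberg), not of the thesis's heat shock response model: a reaction is a triple of reactant, inhibitor and product sets; a reaction is enabled in a state if all its reactants and none of its inhibitors are present, and the next state is the union of the products of the enabled reactions. The species names in the example are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Reaction:
    reactants: frozenset
    inhibitors: frozenset
    products: frozenset

def enabled(reaction, state):
    """A reaction is enabled when all reactants are present and no inhibitor is."""
    return reaction.reactants <= state and not (reaction.inhibitors & state)

def result(reactions, state):
    """Next state: union of the products of all enabled reactions.
    Note the 'no permanency' principle: entities not produced disappear."""
    nxt = set()
    for r in reactions:
        if enabled(r, frozenset(state)):
            nxt |= r.products
    return frozenset(nxt)

# Tiny illustrative example (hypothetical species): 'hsf' sustains itself
# unless 'stress' is present, in which case a second reaction produces 'hsp'.
rs = [
    Reaction(frozenset({"hsf"}), frozenset({"stress"}), frozenset({"hsf"})),
    Reaction(frozenset({"hsf", "stress"}), frozenset(), frozenset({"hsp", "stress"})),
]
state = frozenset({"hsf", "stress"})
for _ in range(3):
    print(sorted(state))          # ['hsf', 'stress'] -> ['hsp', 'stress'] -> []
    state = result(rs, state)
```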
Abstract:
Nowadays, the car has become the most widely used mode of transport, but unfortunately it comes with a number of problems (accidents, pollution, traffic jams, etc.) that will worsen with the expected increase in the number of private cars; despite the very significant efforts made to reduce it, the number of road deaths remains very high. Vehicular wireless networks, called VANETs, which consist of several mobile vehicles communicating without any pre-existing infrastructure, are currently receiving increased attention from manufacturers and researchers, with the aim of improving road safety and the assistance offered to drivers. For example, they can warn other drivers that the roads are slippery or that an accident has just occurred. In VANETs, broadcast protocols play a very important role compared to unicast messages, because they are designed to deliver important safety messages to all nodes. These broadcast protocols are not reliable and suffer from several problems, namely: (1) the broadcast storm; (2) the hidden node; (3) transmission failure. These problems must be solved in order to provide fast and reliable dissemination. The objective of our research is to solve some of these problems while ensuring the best trade-off between reliability, guaranteed delay and guaranteed throughput (Quality of Service, QoS). The research work of this thesis focused on the development of a new technique that can be used to manage medium access (a transmission management protocol), cluster management and communication. This protocol integrates a centralized management approach for stable clusters with data transmission. In this technique, time is divided into cycles; each cycle is shared between the service and control channels and divided into two parts. The first part relies on TDMA (Time Division Multiple Access). The second part relies on CSMA/CA (Carrier Sense Multiple Access / Collision Avoidance) to manage access to the medium. In addition, our protocol adaptively adjusts the time spent broadcasting safety messages, which improves channel capacity. It is implemented in the MAC (Medium Access Control) layer and centralized in the cluster heads (CHs), which continuously adapt to vehicle dynamics. Thus, the use of this centralized protocol ensures efficient consumption of time slots for the exact number of active vehicles, including hidden nodes/vehicles; our protocol also guarantees a bounded delay for safety applications to access the communication channel, and it reduces the overhead by means of directed broadcast propagation.
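As a rough illustration of the cycle structure described above (a minimal sketch with assumed slot and window durations, not the protocol's actual parameters): the cluster head sizes the TDMA part to the exact number of active vehicles and leaves the remainder of the cycle as a CSMA/CA contention window.

```python
def build_cycle_schedule(active_vehicles, cycle_ms=100.0, slot_ms=2.0):
    """Sketch of a cluster-head schedule: one TDMA slot per active vehicle,
    the rest of the cycle left as a CSMA/CA contention window.  The durations
    are illustrative assumptions, not taken from the thesis."""
    tdma_ms = len(active_vehicles) * slot_ms
    if tdma_ms > cycle_ms:
        raise ValueError("too many vehicles for one cycle at this slot size")
    slots = {vid: (i * slot_ms, (i + 1) * slot_ms)
             for i, vid in enumerate(active_vehicles)}
    contention_window = (tdma_ms, cycle_ms)     # CSMA/CA part of the cycle
    return slots, contention_window

slots, cw = build_cycle_schedule(["v12", "v7", "v31"])
print(slots)   # {'v12': (0.0, 2.0), 'v7': (2.0, 4.0), 'v31': (4.0, 6.0)}
print(cw)      # (6.0, 100.0) -> remaining time handled by CSMA/CA
```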
Abstract:
This research focuses on organized employer actors, which are still little studied in North America. Yet these actors are strongly organized in Quebec and exert a recognized influence on public policy and industrial relations. This research aims to better understand the logic of action of employers and the arenas in which they exert their influence. More importantly, the research examines the diffusion mechanisms used by employer associations to transmit to their members the orientations and guidelines to adopt. Just as union actors must develop their representative capacity (Dufour, Hege, Levesque and Murray, 2009), we believe the same holds for employer actors. In short, this study seeks to understand how employer associations ensure that their members adopt practices consistent with the positions defended in labour market institutions and in the public policy sphere. Our research question is the following: What mechanisms do employer associations develop to diffuse their orientations on public policy and labour relations in order to influence the local management practices of their members? At the theoretical level, this study draws on ideas developed by neo-institutionalist approaches to better explain how actors use existing institutions to shape rules in their own interests, which presupposes both a capacity for representation and coherence of action across the levels at which the actor operates. We seek to understand how associations can coordinate employer action in response to changes taking place in the institutional environment. Employer associations are institutional entrepreneurs (Crouch, 2005) actively searching for opportunities and levers of power to use to maximize the interests of their members and, at the same time, to reduce uncertainties coming from the environment (Campbell, 2004; Streeck and Thelen, 2005; Crouch, 2005). Still at the theoretical level, this study builds on ideas advanced by the sociology of logics of action. This theoretical approach allows us to account for the sectoral and local levels in which employer behaviour is rooted. At the sectoral level, there is a plurality of bodies that help shape the logics of action of employer associations. The sociology of logics of action allows us to view the employer association as a group with a life of its own and relative operational autonomy. The association's capacity for influence depends on the mechanisms for coordinating action used to build agreement within the group. These coordination mechanisms should allow a regular and stable connection between the association and its members. This research focuses on employer associations that use a range of means to diffuse their preferred orientations to member firms.
At the empirical level, this research addresses the following three objectives: (1) to better understand the forms of employer organization in the Quebec mining sector; (2) to better grasp the structure and logic of action of employer associations with respect to public policy, labour relations and the labour market; and finally (3) to better understand the mechanisms developed by employer associations to diffuse their orientations in order to influence the local management practices of their members. To reach our research objectives, we used a qualitative research methodology, namely a case study of the mining sector in Quebec. The case study was conducted in three stages: preparation, data collection and interpretation (Merriam, 1998). The data for this study were collected in the winter of 2012 through semi-structured interviews with managers of mining companies and leaders of mining associations. A qualitative content analysis of these interviews was carried out in relation to the literature review and our research propositions. To this end, we used Yin's (1994) pattern-matching technique, which allowed us to compare our observations with our research propositions. In terms of results, we found that employer associations in the Quebec mining sector act more as spokespersons for the industry vis-à-vis the government than as developers of services for their members. Employer associations act at all decision-making levels in order to ensure the best possible promotion of employers' interests. Political representation is the most important field of activity in the logic of action of the employer associations of the Quebec mineral industry. It should also be noted that representing companies' interests to the public and the media is also vital to employer collective action, in a concern for social acceptability. Employer associations mainly try to influence industrial relations practices that help ensure a better image of the industry and that are considered priorities given the prevailing institutional context. The research allowed us to observe a favourable and significant impact on diffusion capacity for five of the seven diffusion mechanisms included in our analytical model. Three of these five mechanisms favour downward diffusion capacity (transposition of the sectoral logic of action onto members' local practices), and the other two rather favour upward diffusion capacity (transposition of local issues considered priorities onto the sectoral logic of action). The mechanisms that best support cohesion within the association are those that involve a dynamic relationship between representatives and members and among the members themselves, hence the relevance of both downward and upward diffusion of orientations. It should be noted that, since this research is a case study, methodological limitations related to the generalization of the results are present. It is not easy to claim that the results of this microanalysis are generalizable, given the specificities of the sector under study. On the other hand, the analyses served to develop a model that could be used in future studies.