980 results for sequential reduction processes


Relevance:

30.00%

Publisher:

Abstract:

After most native ant species are displaced by the Argentine ant invasion, it is probable that some ecological processes carried out by the natives are not replaced. In some cases this could be due to morphological differences between the Argentine ant and the displaced native ants. The significant decrease in ant richness after the invasion (only two species detected in the invaded zones vs. 25 species in surrounding non-invaded zones) implies a drastic reduction in the ant mandible gap range (the mandible gap spectra of all the ant species in a community) in the invaded zones. This reduction could explain why some roles previously carried out by the displaced native species are not performed by the invasive species: a functional inability to carry out these activities. The mandible gap was positively correlated with ant body mass in the 26 ant species considered. The functional inability hypothesis could apply to other invasive ants as well as to the Argentine ant.


We investigated the biological decolourisation of dyes with different molecular structures. The kinetic constants (k1) achieved with the azo dye Reactive Red 120 were 7.6 and 10.1 times higher in the presence of the redox mediators (RM) AQDS and riboflavin, respectively, than in assays lacking RM. The kinetic constant achieved with the azo dye Congo Red was 42 times higher than that obtained with the anthraquinone dye Reactive Blue 4. The effect of RM on dye reduction was most evident for azo dyes resistant to reductive processes, and ineffective for anthraquinone dyes because of the structural stability of the latter.
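As a rough sketch of how such pseudo first-order constants are obtained, the snippet below fits ln C against time by least squares; the concentration series are synthetic illustrations (not data from the study), constructed so the mediated/unmediated ratio mirrors the reported 7.6-fold AQDS enhancement.

```python
import math

def first_order_k(times, concentrations):
    """Estimate a pseudo first-order rate constant k1 by least-squares
    fit of ln(C) = ln(C0) - k1 * t; returns k1 in 1/time units."""
    ys = [math.log(c) for c in concentrations]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -slope  # k1 is the negative of the fitted slope

# Synthetic decay curves with and without a redox mediator (RM)
t = [0.0, 1.0, 2.0, 3.0, 4.0]
c_no_rm = [100.0 * math.exp(-0.05 * ti) for ti in t]  # k1 = 0.05 h^-1
c_rm = [100.0 * math.exp(-0.38 * ti) for ti in t]     # k1 = 0.38 h^-1

k_no_rm = first_order_k(t, c_no_rm)
k_rm = first_order_k(t, c_rm)
enhancement = k_rm / k_no_rm  # 7.6-fold, mirroring the AQDS result
```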


Palm oil is one of the two most important vegetable oils in the world's oils and fats market. The extraction and purification processes generate different kinds of waste, generally known as palm oil mill effluent (POME). Earlier studies had indicated the possibility of using boiler fly ash to adsorb impurities and colour in POME treatment. The adsorption treatment of POME using boiler fly ash was investigated further in this work with regard to the reduction of BOD, colour and TSS. The amounts of BOD, colour and TSS adsorbed increased as the weight of boiler fly ash was increased, and the smaller particle size of 425 µm adsorbed more than the 850 µm size. The experimental data were fitted to the Freundlich, Langmuir and Dubinin-Radushkevich isotherms. The R² values, which ranged over 0.8974-0.9898, 0.8848-0.9824 and 0.6235-0.9101 respectively, showed that the Freundlich isotherm gave the best fit, followed by the Langmuir and then the Dubinin-Radushkevich isotherm. The sorption trend followed the order BOD > colour > TSS. The apparent energy of adsorption was 1.25, 0.58 and 0.97 kJ/mol for BOD, colour and TSS respectively, showing that sorption occurs by physisorption. Boiler fly ash is therefore capable of reducing BOD, colour and TSS in POME and could be used to develop a good adsorbent for POME treatment.
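The isotherm comparison above can be sketched with a linearized Freundlich fit and the Dubinin-Radushkevich energy relation E = 1/√(2K). The equilibrium data below are hypothetical, and the K value of 0.32 mol²/kJ² is simply the one that reproduces the reported 1.25 kJ/mol for BOD; neither is taken from the paper.

```python
import math

def linfit_r2(xs, ys):
    """Ordinary least squares y = a + b*x, returning (a, b, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g)
ce = [10.0, 25.0, 60.0, 120.0, 250.0]
qe = [3.2, 5.1, 8.4, 11.9, 17.6]

# Linearized Freundlich isotherm: ln qe = ln Kf + (1/n) ln Ce
a, n_inv, r2 = linfit_r2([math.log(c) for c in ce],
                         [math.log(q) for q in qe])
kf = math.exp(a)

def dr_energy(k_dr):
    """Dubinin-Radushkevich apparent adsorption energy E = 1/sqrt(2K),
    K in mol^2/kJ^2; E < 8 kJ/mol is taken to indicate physisorption."""
    return 1.0 / math.sqrt(2.0 * k_dr)

e_bod = dr_energy(0.32)  # 0.32 chosen to reproduce the reported 1.25 kJ/mol
```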


Delays in the justice system have been undermining the functioning and performance of court systems all over the world for decades. Despite widespread concern about delays, the solutions have not kept up with the growth of the problem. The delay problem in court processes is a good example of the growing need and pressure in professional public organizations to start improving their business process performance. This study analyses the possibilities and challenges of process improvement in professional public organizations. It is based on experiences gained in two longitudinal action-research improvement projects conducted in two Finnish courts: the Helsinki Court of Appeal and the Insurance Court. The thesis has two objectives. The first is to study what kinds of factors in court system operations cause delays and unmanageable backlogs, and how delays can be reduced and prevented. The second, based on the lessons learned from the case projects, is to give new insights into the critical factors of process improvement conducted in professional public organizations. Four main areas behind the delay problem are identified: 1) goal setting and performance measurement practices, 2) the process control system, 3) production and capacity planning procedures, and 4) process roles and responsibilities. The appropriate improvement solutions include tools to enhance project planning and scheduling and to monitor the agreed time-frames for the different phases of the handling process and the pending inventory. The study introduces the critical factors identified in the different phases of process improvement work carried out in professional public organizations, shows how these factors can be incorporated into the different stages of the projects, and discusses the role of an external facilitator in assisting process improvement work and in enhancing ownership of the solutions.
The study highlights the need to concentrate on critical factors that get employees to challenge their existing ways of working, analyze their own processes, and create procedures for diffusing a process improvement culture, instead of merely concentrating on finding tools, techniques, and solutions borrowed from the manufacturing sector.


A sequential batch reactor with suspended biomass and a working volume of 5 L was used for the removal of nutrients and organic matter at bench scale under optimal conditions obtained by central composite rotational design (CCRD), with a cycle time (CT) of 16 h (10.15 h aerobic phase and 4.35 h anoxic phase) and a carbon:nitrogen ratio (COD/NO2--N+NO3--N) equal to 6. Twenty complete cycles (nitrification followed by denitrification) were evaluated to investigate the degradation kinetics of the organic (COD) and nitrogenous (NH4+-N, NO2--N and NO3--N) matter present in the effluent from a poultry slaughterhouse and industrial processing facility, and to evaluate the stability of the reactor using Shewhart control charts of individual measurements. The results indicate mean removals of total inorganic nitrogen (NH4+-N+NO2--N+NO3--N) of 84.32±1.59% and of organic matter (COD) of 53.65±8.48% in the complete process (nitrification-denitrification), with the process under statistical control. The nitrifying activity during the aerobic phase, estimated from the kinetic parameters, had mean K1 and K2 values of 0.00381±0.00043 min-1. The kinetic behavior of the nitrogen conversion indicated that the CT of the anoxic phase could be reduced, since removals of NO2--N and NO3--N higher than 90% were obtained with only 1 h of denitrification.
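A minimal sketch of the Shewhart individuals chart used to judge statistical control is given below; the per-cycle removal efficiencies are illustrative stand-ins, not the study's cycle data (which report a mean of 84.32 ± 1.59 %).

```python
def shewhart_individuals(values):
    """Limits for a Shewhart individuals (I) chart: centre line at the
    mean, control limits at mean +/- 2.66 * average moving range
    (2.66 = 3/d2, with d2 = 1.128 for moving ranges of span 2)."""
    n = len(values)
    mean = sum(values) / n
    mr_bar = sum(abs(values[i] - values[i - 1]) for i in range(1, n)) / (n - 1)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# Illustrative per-cycle total inorganic nitrogen removals (%)
removals = [84.1, 85.0, 83.2, 84.9, 86.1, 83.8, 84.5, 85.2, 83.9, 84.6]
lcl, cl, ucl = shewhart_individuals(removals)

# The process is "under statistical control" when every point
# falls inside the control limits:
in_control = all(lcl <= v <= ucl for v in removals)
```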


Biogas production has considerable development potential, not only in Finland but all over the world, since it is the easiest way of creating value out of various waste fractions and represents an alternative source of renewable energy. Developing efficient biogas upgrading technology has become an important issue, since upgrading improves the quality of biogas, for example by facilitating its injection into natural gas pipelines. Moreover, such upgrading contributes to resolving the issue of increasing CO2 emissions and addresses growing climate change concerns. Alongside traditional CO2 capture technologies, a recently emerged class of sorbents, ionic liquids, has been claimed to be a promising medium for gas separations. In this thesis, an extensive comparison of the performance of different solvents for CO2 capture has been performed. The focus was on aqueous amine solutions and their mixtures, traditional ionic liquids, ‘switchable’ ionic liquids and poly(ionic liquid)s, in order to reveal the best option for biogas upgrading. The CO2 capture efficiency of the most promising solvents reached values around 50-60 L CO2 / L absorbent, superior to the currently widely applied water-wash biogas upgrading system. Regeneration of the solvent mixtures proved challenging, since the loss of initial efficiency upon CO2 release exceeded 20-40 vol %, especially for the aqueous amine solutions. In contrast, some of the ionic liquids displayed reversible behavior: for selected ‘switchable’ ionic liquids and poly(ionic liquid)s the CO2 absorption/regeneration cycle was performed 3-4 times without any notable decrease in efficiency. The viscosity increase typical of ionic liquids upon CO2 saturation was also addressed, and the information obtained was evaluated and related to the ionic interactions.
The occurrence of volatile organic compounds (VOCs) before and after biogas upgrading was studied for biogas produced through anaerobic digestion of wastewater sludge. The ionic liquid [C4mim][OAc] demonstrated its feasibility as a promising scrubbing medium and exhibited high efficiency in removing VOCs: the amount of identified VOCs was diminished by around 65 wt %, whereas samples treated with an aqueous mixture of 15 wt % N-methyldiethanolamine and 5 wt % piperazine showed only a 32 wt % reduction.


Fast freezing of carcasses is one of the aspects of greatest importance to the final quality of pork. In order to reduce weight loss, two experiments were performed in which carcasses were monitored for 20 hours to evaluate the main variables involved in two different freezing processes (standard and proposed): microbiological quality, storage temperature, relative humidity (RH) and air velocity. In experiment I, the carcasses were submitted to a heat-shock system (2 hours in a static tunnel at -25 °C) and subsequently sent to the equalization chamber. In experiment II, the carcasses were submitted to the heat shock and stored in a chamber with RH between 80-85%. The chambers used in both experiments showed no difference in internal temperature (5 °C) or air velocity (approximately 0.3 m/s). However, significant differences were found in the relative humidity of the three chambers evaluated and, as a consequence, high levels of weight loss were observed in both experiments. In experiment II the increased RH reduced the weight loss of the carcasses.


Sustainability and recycling are core values in today's industrial operations. New materials, products and processes need to be designed in such a way as to consume fewer of the diminishing resources we have available and to put as little strain on the environment as possible. An integral part of this is cleaning and recycling, and new processes must be designed to improve efficiency in this respect. Wastewater, including municipal wastewater, is treated in several steps, including chemical and mechanical cleaning; well-cleaned water can be recycled and reused. Clean water for everyone is one of the greatest challenges we face today. Ferric sulphate, made by oxidation from ferrous sulphate, is used in water purification. The oxidation of ferrous sulphate, FeSO4, to ferric sulphate in acidic aqueous H2SO4 solutions over finely dispersed active carbon particles was studied in a vigorously stirred batch reactor. Molecular oxygen was used as the oxidation agent and several catalysts were screened: active carbon and active carbon impregnated with Pt, Rh, Pd or Ru. Both active carbon and noble metal-active carbon catalysts enhanced the oxidation rate considerably. The order of the noble metals according to their effect was Pt >> Rh > Pd, Ru. By the use of catalysts, the production capacity of existing oxidation units can be considerably increased. Good coagulants carry a high charge on a long polymer chain, effectively capturing dirt particles of the opposite charge. Analysis of the reaction product indicated that it is possible to obtain polymeric iron-based products with good coagulation properties. Systematic kinetic experiments were carried out in the temperature and pressure ranges of 60-100 °C and 4-10 bar, respectively. The results revealed that non-catalytic and catalytic oxidation of Fe2+ to Fe3+ take place simultaneously. 
The experimental data were fitted to rate equations based on a plausible reaction mechanism: adsorption of dissolved oxygen on active carbon, electron transfer from Fe2+ ions to adsorbed oxygen, and formation of surface hydroxyls. A comparison of the Fe2+ concentrations predicted by the kinetic model with those observed experimentally indicated that the mechanistic rate equations were able to describe the intrinsic oxidation kinetics of Fe2+ over active carbon and active carbon-noble metal catalysts. Engineering aspects were closely considered, and effort was directed to utilizing existing equipment in the production of the new coagulant. Ferrous sulphate can be catalytically oxidized to produce a novel long-chained polymeric iron-based flocculant easily and affordably in existing facilities, and the results can be used for reactor modelling and scale-up. Ferric iron (Fe3+) was successfully applied to the dissolution of sphalerite. Sphalerite contains indium, gallium and germanium, among others, and the application can promote their recovery. Understanding the reduction of ferric to ferrous iron can in turn deepen the understanding of the dissolution mechanisms and the oxidation of ferrous sulphate. Indium, gallium and germanium face ever-increasing demand in the electronics industry, among others, while the supply is very limited. Since most of the material is obtained through secondary production, real production quotas depend on primary material production, which also sets the pricing; the primary production material is in most cases zinc or aluminium. Recycling of scrap material and the utilization of industrial waste containing indium, gallium and germanium is therefore a necessity without real alternatives. As part of this study, plausible methods for the recovery of indium, gallium and germanium have been studied. 
The results were encouraging and provided information about the precipitation of these valuable metals from highly acidic solutions. Indium and gallium were separated from sulphuric acid solutions by precipitation with basic sulphates such as alunite, or were precipitated as basic sulphates of their own, as galliunite and indiunite. Germanium may precipitate as a basic sulphate of mixed composition. The precipitation is rapid and the selectivity good. When the solutions contain both indium and gallium, the results show that gallium should be separated before indium to achieve better selectivity. Germanium was separated from highly acidic sulphuric acid solutions, also containing other metals, by precipitation with tannic acid, a highly selective method; according to the study, other metals commonly found in the solution do not affect germanium precipitation. The reduction of ferric iron to ferrous, the precipitation of indium, gallium and germanium, and the dissolution of the raw materials depend strongly on temperature and pH. The effects of temperature and pH were therefore studied, which contributed to the understanding and design of the different process steps; increased temperature and reduced pH improve the reduction rate. Finally, the understanding gained in the studied areas can be employed to develop better industrial processes, not only on a large scale but increasingly also on a smaller scale. The small amounts of indium, gallium and germanium may favour smaller, more locally bound recovery.
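As an illustration of the Langmuir-type mechanism described above (oxygen adsorption followed by electron transfer from Fe2+), the sketch below integrates a plausible rate law numerically. The rate constant and adsorption parameter are invented for illustration, not fitted values from the thesis.

```python
def fe2_profile(c0, k, K_O2, p_O2, t_end, dt=0.01):
    """Integrate d[Fe2+]/dt = -k * theta_O2 * [Fe2+] by explicit Euler,
    where theta_O2 = K_O2*p_O2 / (1 + K_O2*p_O2) is a Langmuir-type
    coverage of dissolved oxygen on the active-carbon surface."""
    theta = K_O2 * p_O2 / (1.0 + K_O2 * p_O2)
    c, t = c0, 0.0
    while t < t_end:
        c -= k * theta * c * dt
        t += dt
    return c

# Invented parameters: higher O2 pressure -> higher surface coverage ->
# faster Fe2+ depletion, consistent with the 4-10 bar range studied.
c_low_p = fe2_profile(1.0, k=0.5, K_O2=0.3, p_O2=4.0, t_end=2.0)
c_high_p = fe2_profile(1.0, k=0.5, K_O2=0.3, p_O2=10.0, t_end=2.0)
```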


The purpose of this study was to determine the effect of increased soil moisture levels on decomposition processes in a peat-extracted bog. Field experiments in which soil moisture levels were manipulated were conducted using 320 microcosms in the Wainfleet Bog from May 2002 to November 2004. Decomposition was measured using litter bags and by monitoring the abundance of macroinvertebrate decomposers known as Collembola. Litter bags containing wooden toothpicks (n=2240), filter paper (n=480) and Betula pendula leaves (n=40) were buried in the soil and removed at regular intervals for up to one year. The results of the litter bag studies demonstrated a significant reduction in the decomposition of toothpicks (p<0.001), filter paper (p<0.001), and Betula pendula leaves. Reductions in decomposition can be obtained by restoring soil moisture levels to near those of undisturbed conditions.
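Litter-bag mass loss of this kind is commonly summarized by a single-exponential decay constant; the sketch below assumes that model with invented one-year masses, merely to illustrate how restored moisture would show up as a smaller k.

```python
import math

def decay_constant(m0, mt, t_years):
    """Annual decomposition constant k from the single-exponential
    litter-bag model M(t) = M0 * exp(-k*t)."""
    return -math.log(mt / m0) / t_years

# Invented one-year litter-bag masses (g); not the study's data
k_dry = decay_constant(2.00, 1.40, 1.0)  # drier, unrestored conditions
k_wet = decay_constant(2.00, 1.70, 1.0)  # restored soil moisture
```

A wetter plot losing less mass over the year yields the smaller decay constant, matching the direction of the reported effect.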


Narrative therapy is a postmodern therapy that takes the position that people create self-narratives to make sense of their experiences. To date, narrative therapy has compiled virtually no quantitative and very little qualitative research, leaving gaps in almost all areas of process and outcome. White (2006a), one of the therapy's founders, has recently utilized Vygotsky's (1934/1987) theories of the zone of proximal development (ZPD) and concept formation to describe the process of change in narrative therapy with children. In collaboration with the child client, the narrative therapist formalizes therapeutic concepts and submits them to increasing levels of generalization to create a ZPD. This study sought to determine whether the child's development proceeds through the stages of concept formation over the course of a session, and whether therapists' utterances scaffold this movement. A sequential analysis was used due to its unique ability to measure dynamic processes in social interactions. Stages of concept formation and scaffolding were coded over time. A hierarchical log-linear analysis was performed on the sequential data to develop a model of therapist scaffolding and child concept development. This was intended to determine what patterns occur and whether the stated intent of narrative therapy matches its actual process. In accordance with narrative therapy theory, the log-linear analysis produced a final model with interactions between therapist and child utterances, and between both therapist and child utterances and time. Specifically, the child and youth participants in therapy tended to respond to therapist scaffolding at the corresponding level of concept formation. Both children and youth and therapists also tended to move away from earlier and toward later stages of White's scaffolding conversations map as the therapy session advanced. 
These findings provide support for White's contention that narrative therapists promote child development by scaffolding child concept formation in therapy.
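A lag-1 transition table is the raw contingency input to a sequential analysis of this kind; the sketch below uses hypothetical utterance codes (not White's actual scaffolding categories) to show how matched therapist-child levels would be counted before log-linear modelling.

```python
from collections import Counter

def transition_counts(codes):
    """Lag-1 transition table for a coded utterance sequence -- the raw
    contingency data fed into a sequential (log-linear) analysis."""
    return Counter(zip(codes, codes[1:]))

# Hypothetical codes: T1/T2 = therapist scaffolding levels,
# C1/C2 = child concept-formation stages.
session = ["T1", "C1", "T1", "C1", "T2", "C2", "T2", "C2", "T2", "C2"]
counts = transition_counts(session)

# How often the child responds at the level just scaffolded:
matched = counts[("T1", "C1")] + counts[("T2", "C2")]
```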


Flow injection analysis (FIA) was applied to the determination of both chloride ion and mercury in water. Conventional FIA was employed for the chloride study. Investigations of the Fe3+/Hg(SCN)2/Cl-, 450 nm spectrophotometric system for chloride determination led to the discovery of an absorbance in the 250-260 nm region when Hg(SCN)2 and Cl- are combined in solution, in the absence of iron(III). Employing an in-house FIA system, the absorbance observed at 254 nm exhibited a linear relation from essentially 0-2000 µg mL-1 injected chloride. This linear range, spanning three orders of magnitude, is superior to the Fe3+/Hg(SCN)2/Cl- system currently employed by laboratories worldwide. The detection limit obtainable with the proposed method was determined to be 0.16 µg mL-1 and the relative standard deviation was determined to be 3.5% over the concentration range of 0-200 µg mL-1. Other halide ions were found to interfere with chloride determination at 254 nm, whereas cations did not interfere. This system was successfully applied to the determination of chloride ion in laboratory water. Sequential injection (SI)-FIA was employed for mercury determination in water with the PSA Galahad mercury amalgamation and Merlin mercury fluorescence detection systems. Initial mercury-in-air determinations involved injections of mercury-saturated air directly into the Galahad, whereas mercury-in-water determinations involved solution delivery via peristaltic pump to a gas/liquid separator after reduction by stannous chloride. A series of changes were made to the internal hardware and valving systems of the Galahad mercury preconcentrator. Sequential injection solution delivery replaced the continuous peristaltic pump system, and computer control was implemented to integrate all aspects of solution delivery, sample preconcentration and signal processing. Detection limits currently obtainable with this system are 0.1 ng mL-1 Hg0.
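The figures of merit quoted above (detection limit and relative standard deviation) follow from standard formulas. A minimal sketch: the blank standard deviation and calibration slope below are assumptions chosen so that the 3s/slope rule reproduces the reported 0.16 µg mL-1, and the replicate readings are invented.

```python
import math

def detection_limit(blank_sd, slope, k=3.0):
    """Detection limit as k * s_blank / calibration slope (k = 3)."""
    return k * blank_sd / slope

def rsd_percent(values):
    """Relative standard deviation (%) of replicate measurements."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return 100.0 * sd / mean

# Assumed blank noise and slope (absorbance units vs ug/mL)
lod = detection_limit(blank_sd=0.004, slope=0.075)  # -> 0.16 ug/mL
replicates = [101.2, 98.7, 103.9, 96.1, 100.4]      # invented readings
rsd = rsd_percent(replicates)
```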


This paper proposes an explanation for why efficient reforms are not carried out when losers have the power to block their implementation, even though compensating them is feasible. We construct a signaling model with two-sided incomplete information in which a government faces the task of sequentially implementing two reforms by bargaining with interest groups. The organization of interest groups is endogenous. Compensations are distortionary and government types differ in the concern about distortions. We show that, when compensations are allowed to be informative about the government’s type, there is a bias against the payment of compensations and the implementation of reforms. This is because paying high compensations today provides incentives for some interest groups to organize and oppose subsequent reforms with the only purpose of receiving a transfer. By paying lower compensations, governments attempt to prevent such interest groups from organizing. However, this comes at the cost of reforms being blocked by interest groups with relatively high losses.


This thesis considers a set of methods that enable statistical learning algorithms to better handle the sequential nature of financial portfolio-management problems. We begin with the general problem of composing learning algorithms that must handle sequential tasks, in particular the efficient updating of training sets in a sequential-validation framework. We enumerate the desiderata that composition primitives should satisfy and highlight the difficulty of meeting them rigorously and efficiently. We then present a set of algorithms that achieve these objectives, together with a case study of a complex financial decision-making system built on these techniques. Next, we describe a general method for transforming a non-Markovian sequential decision problem into a supervised learning problem using a search algorithm based on the K best paths. We treat a portfolio-management application in which a learning algorithm is trained to directly optimize a Sharpe ratio (or another non-additive criterion incorporating risk aversion). We illustrate the approach with a thorough experimental study, proposing a neural-network architecture specialized for portfolio management and comparing it to several alternatives. Finally, we introduce a functional representation of time series that allows forecasts to be made over a variable horizon while using an information set revealed progressively. The approach is based on Gaussian processes, which provide a full covariance matrix between all the points for which a forecast is requested. 
This information is put to good use by an algorithm that actively trades price spreads between commodity futures contracts. Out of sample, the proposed approach produces a significant risk-adjusted return, after transaction costs, on a portfolio of 30 assets.
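The Sharpe ratio that the learner is trained to optimize is a simple but non-additive function of the whole return series, which is what makes it awkward for reward-summing formulations. A minimal sketch with hypothetical daily returns:

```python
import math

def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio of a per-period return series (risk-free
    rate taken as zero). It depends on the mean AND the standard
    deviation of the whole trajectory, so it is not a sum of
    per-period rewards."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)
    return (mean / math.sqrt(var)) * math.sqrt(periods_per_year)

# Hypothetical daily returns of a small portfolio
rets = [0.002, -0.001, 0.003, 0.001, -0.002, 0.004, 0.000, 0.002]
sr = sharpe_ratio(rets)
```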


With advances in information technology, economic and financial time-series data are increasingly available. However, if standard time-series techniques are used, this wealth of information comes with the problem of dimensionality. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has grown increasingly popular in economics since the 1990s. Given the availability of data and computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the main macroeconomic aggregates and help with impulse-response analysis? Finally, can factor analysis be applied to random parameters? For example, is there only a small number of sources of the temporal instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA. 
This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability. The first article analyses the transmission of monetary policy in Canada using the factor-augmented vector autoregressive (FAVAR) model. Previous VAR-based studies found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying monetary transmission, and that it helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse-response functions for every indicator in the data set, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bills, and causes a recession. These shocks have a significant effect on measures of real activity, price indices, leading indicators, and financial indicators. Unlike other studies, our identification procedure for the structural shock requires no timing restrictions between financial and macroeconomic factors. 
Moreover, it yields an interpretation of the factors without restricting their estimation. In the third article we study the relationship between VARMA and factor representations of vector stochastic processes and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two parameter-reduction methods. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA part improves forecasts of the main macroeconomic aggregates relative to standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors yields consistent and precise estimates of the effects and transmission of monetary policy in the United States. Whereas the FAVAR model used in the earlier study required estimating 510 VAR coefficients, we produce similar results with only 84 parameters in the dynamic process of the factors. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment using the structural FAVARMA model. 
Within the theoretical framework of the financial accelerator developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the business-cycle fluctuations of the Canadian economy. Variance-decomposition analysis reveals that this credit shock has a significant effect on different sectors of real activity, price indices, leading indicators, and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the US market. Finally, given the identification procedure for the structural shocks, we obtain economically interpretable factors. The behaviour of economic agents and of the economic environment may vary over time (e.g. changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying-parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of temporal variability in the coefficients is probably very small, and we produce the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach, proposed in Stevanovic (2010), is applied within a standard VAR model with random coefficients (TVP-VAR). 
We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is repeated with data including the recent financial crisis; the procedure now suggests two factors, and the behaviour of the coefficients shows a marked change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only 5 dynamic factors govern the temporal instability in almost 700 coefficients.
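The factor-extraction step underlying FAVAR-type models is, in its static form, the leading principal component of a standardized panel. The sketch below recovers a single common factor from a toy panel by alternating least squares (equivalent to power iteration on the covariance matrix); the panel dimensions and data are invented for illustration.

```python
import math
import random

def standardize(panel):
    """Column-standardize a T x N panel given as a list of rows."""
    T, N = len(panel), len(panel[0])
    out = [[0.0] * N for _ in range(T)]
    for j in range(N):
        col = [row[j] for row in panel]
        m = sum(col) / T
        s = math.sqrt(sum((v - m) ** 2 for v in col) / T)
        for t in range(T):
            out[t][j] = (col[t] - m) / s
    return out

def first_factor(panel, iters=200):
    """Leading principal-component factor via alternating least squares
    -- the standard static first step of FAVAR-type estimation."""
    Z = standardize(panel)
    T, N = len(Z), len(Z[0])
    lam = [1.0] * N  # initial loadings
    f = [0.0] * T
    for _ in range(iters):
        f = [sum(Z[t][j] * lam[j] for j in range(N)) for t in range(T)]
        norm = math.sqrt(sum(v * v for v in f))
        f = [v / norm for v in f]
        lam = [sum(Z[t][j] * f[t] for t in range(T)) for j in range(N)]
    return f

def corr(a, b):
    """Pearson correlation of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a)
                           * sum((y - mb) ** 2 for y in b))

# Toy data-rich panel: 80 "months" of 15 series sharing one common factor
random.seed(1)
true_f = [random.gauss(0, 1) for _ in range(80)]
loads = [random.gauss(0, 1) for _ in range(15)]
panel = [[loads[j] * true_f[t] + 0.1 * random.gauss(0, 1)
          for j in range(15)] for t in range(80)]
f_hat = first_factor(panel)  # tracks true_f up to sign and scale
```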