966 results for Dynamic processes


Relevance:

30.00%

Publisher:

Abstract:

This book is one of the eight IAEG XII Congress volumes, and deals with landslide processes, including: field data and monitoring techniques, prediction and forecasting of landslide occurrence, regional landslide inventories and dating studies, modeling of slope instabilities and secondary hazards (e.g. impulse waves and landslide-induced tsunamis, landslide dam failures and breaching), hazard and risk assessment, earthquake- and rainfall-induced landslides, instabilities of volcanic edifices, remedial works and mitigation measures, development of innovative stabilization techniques and their applicability to specific engineering geological conditions, and use of geophysical techniques for landslide characterization and investigation of triggering mechanisms. Focus is given to innovative techniques, well-documented case studies in different environments, critical components of engineering geological and geotechnical investigations, hydrological and hydrogeological investigations, remote sensing and geophysical techniques, modeling of triggering, collapse, runout and landslide reactivation, geotechnical design and construction procedures in landslide zones, interaction of landslides with structures and infrastructure, and the possibility of domino effects. The Engineering Geology for Society and Territory volumes of the IAEG XII Congress, held in Torino from September 15-19, 2014, analyze the dynamic role of engineering geology in our changing world and build on the four main themes of the congress: environment, processes, issues, and approaches.

Relevance:

30.00%

Publisher:

Abstract:

COD discharges from processes have increased in line with rising brightness demands for mechanical pulp and papers. The share of lignin-like substances in COD discharges is on average 75%. In this thesis, a plant dynamic model was created and validated as a means to predict COD loading and discharges out of a mill. The studies were carried out at an integrated paper mill producing mechanical printing papers. The objective in the modeling of plant dynamics was to predict day averages of COD load and discharges out of the mill. This means that online data, such as 1) the levels of large pulp and white-water storage towers, 2) pulp dosages, 3) production rates and 4) internal white-water flows and discharges, were used to create transients in the balances of solids and white water, referred to as "plant dynamics". A conversion coefficient between TOC and COD was verified. The coefficient was used to convert predicted TOC flows to the wastewater treatment plant into COD. The COD load was modeled with an uncertainty similar to that of the reference TOC sampling. The water balance of the wastewater treatment was validated against the reference concentration of COD. The deviation of COD predictions from the references was within that of the TOC predictions. The modeled yield losses and retention values of TOC in the pulping and bleaching processes, and the modeled fixing of colloidal TOC to solids between the pulping plant and the aeration basin in the wastewater treatment plant, were similar to references presented in the literature. The valid water balances of the wastewater treatment plant and the reduction model for lignin-like substances produced a valid prediction of COD discharges out of the mill. A 30% increase in the release of lignin-like substances during production problems was observed in the pulping and bleaching processes. The same increase was observed in COD discharges out of the wastewater treatment.
In the prediction of annual COD discharge, it was noticed that the reduction of lignin has a wide deviation from year to year and from one mill to another. This made it difficult to compare the parameters of COD discharges validated in the plant dynamic simulation with those of another mill producing mechanical printing papers. However, the trend in COD discharges when moving from unbleached towards high-brightness TMP remained valid.
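The TOC-to-COD conversion described above can be sketched as a simple calculation; the coefficient value and flow figures below are illustrative assumptions, not values from the thesis:

```python
# Sketch: converting an online TOC measurement into a predicted daily COD load,
# assuming a single linear TOC-to-COD conversion coefficient as in the text.
# The coefficient (3.0) and the example figures are illustrative only.

def cod_load(toc_conc_mg_per_l, flow_m3_per_d, toc_to_cod=3.0):
    """Daily COD load (kg/d) from a TOC concentration (mg/l) and a flow (m3/d)."""
    cod_conc = toc_conc_mg_per_l * toc_to_cod    # mg/l of COD
    return cod_conc * flow_m3_per_d / 1000.0     # mg/l * m3/d = g/d -> kg/d

# Example: 50 mg/l TOC in a 20 000 m3/d effluent flow
print(cod_load(50.0, 20000.0))  # -> 3000.0 kg COD per day
```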

Relevance:

30.00%

Publisher:

Abstract:

The objective of the thesis is to enhance understanding of the management of the front-end phases of the innovation process in a networked environment. The thesis approaches the front end of innovation from three perspectives: the strategy, processes and systems of innovation. The purpose of using different perspectives in the thesis is to provide an extensive systemic view of the front end and to uncover the complex nature of innovation management. The context of the research is the networked operating environment of firms. The unit of analysis is the firm itself or its innovation processes, which means that this research approaches innovation networks from the point of view of a firm. The strategy perspective of the thesis emphasises the importance of purposeful innovation management, the innovation strategy of firms. The role of innovation processes is critical in carrying out innovation strategies in practice, supporting the development of organizational routines for innovation, and driving the strategic renewal of companies. From the systems perspective, the primary focus of the thesis is on idea management systems, which are defined as a part of innovation management systems and, for this thesis, as any working combination of methodology and tools (manual or IT-supported) that enhances the management of innovations within their early phases. The main contributions of the thesis are the managerial frameworks developed for managing the front end of innovation, which purposefully "wire" the front end of innovation into the strategy and business processes of a firm. The thesis contributes to modern innovation management by connecting the internal and external collaboration networks as foundational elements for successful management of the early phases of innovation processes in a dynamic environment.
The innovation capability of a firm is largely defined by its ability to rely on and make use of internal and external collaboration already during the front-end activities, which by definition include opportunity identification and analysis, idea generation, proliferation and selection, and concept definition. More specifically, coordination of the interfaces between these activities, and between the internal and external innovation environments of a firm, is emphasised. The role of information systems, in particular idea management systems, is to support and delineate the innovation-oriented behaviour and interaction of individuals and organizations during front-end activities. The findings and frameworks developed in the thesis can be used by companies for the purposeful promotion of their front-end processes. The thesis provides a systemic strategy framework for managing the front end of innovation, not as a separate process, but as an elemental bundle of activities that is closely linked to the overall innovation process and strategy of a firm in a distributed environment. The theoretical contribution of the thesis relies on the advancement of the open innovation paradigm in the strategic context of a firm within its internal and external innovation environments. This thesis applies the constructive research approach and case study methodology to provide theoretically significant results which are also practically beneficial.

Relevance:

30.00%

Publisher:

Abstract:

Evergreen trees in the Mediterranean region must cope with a wide range of environmental stresses, from summer drought to winter cold. The mildness of Mediterranean winters can periodically lead to favourable environmental conditions above the threshold for a positive carbon balance, benefitting evergreen woody species more than deciduous ones. The comparatively lower solar energy input in winter decreases the foliar light saturation point. This leads to a higher susceptibility to photoinhibitory stress, especially when chilly (<12 °C) or freezing (<0 °C) temperatures coincide with clear skies and relatively high solar irradiances. Nonetheless, the advantage of evergreen species of being able to photosynthesize all year round, with a significant fraction of carbon gain attributable to the winter months, compensates for their lower carbon uptake during spring and summer in comparison to deciduous species. We investigated the ecophysiological behaviour of three co-occurring mature evergreen tree species (Quercus ilex L., Pinus halepensis Mill., and Arbutus unedo L.). To this end, we collected twigs from the field during a period of mild winter conditions and after a sudden cold period. After both periods, the state of the photosynthetic machinery was tested in the laboratory by estimating the foliar photosynthetic potential with CO2 response curves in parallel with chlorophyll fluorescence measurements. The studied evergreen tree species benefited strongly from mild winter conditions by exhibiting extraordinarily high photosynthetic potentials. A sudden period of frost, however, negatively affected the photosynthetic apparatus, leading to significant decreases in key physiological parameters such as the maximum carboxylation velocity (Vc,max), the maximum photosynthetic electron transport rate (Jmax), and the optimal fluorometric quantum yield of photosystem II (Fv/Fm). The responses of Vc,max and Jmax were highly species-specific, with Q. ilex exhibiting the highest and P. halepensis the lowest reductions. In contrast, the optimal fluorometric quantum yield of photosystem II (Fv/Fm) was significantly lower in A. unedo after the cold period. Leaf position played an important role in Q. ilex, which showed a stronger winter effect on sunlit leaves in comparison to shaded leaves. Our results generally agreed with the previous classification of photoinhibition-tolerant (P. halepensis) and photoinhibition-avoiding (Q. ilex) species on the basis of their susceptibility to dynamic photoinhibition, whereas A. unedo was the least tolerant to photoinhibition, which was chronic in this species. Q. ilex and P. halepensis seem to follow contrasting photoprotective strategies. However, they seemed equally successful under the prevailing conditions, exhibiting an adaptive advantage over A. unedo. These results show that our understanding of the dynamics of interspecific competition in Mediterranean ecosystems requires consideration of physiological behaviour during winter, which may have important implications for long-term carbon budgets and growth trends.
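The CO2 response curves mentioned above are commonly analysed with a Farquhar-type photosynthesis model, in which the Rubisco-limited assimilation rate is computed from Vc,max. A minimal sketch, using generic textbook 25 °C kinetic constants rather than values from this study:

```python
def rubisco_limited_A(ci, vcmax, gamma_star=42.75, kc=404.9, ko=278.4,
                      o=210.0, rd=1.0):
    """Rubisco-limited net assimilation (umol m-2 s-1), Farquhar-type model.

    ci: intercellular CO2 (umol mol-1); vcmax: maximum carboxylation velocity.
    gamma_star, kc, ko (umol mol-1, mmol mol-1 for O2) are commonly cited
    25 °C constants; rd is a placeholder day respiration. All values are
    illustrative defaults, not parameters fitted in the study.
    """
    return vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko)) - rd

# assimilation rises with intercellular CO2 at a fixed Vc,max
print(rubisco_limited_A(300.0, 50.0))
```

A frost-induced drop in Vc,max, as reported above, scales the whole curve down, which is how the cold-period response would appear in fitted CO2 response data.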

Relevance:

30.00%

Publisher:

Abstract:

Today's organizations must be able to react to rapid changes in the market. These rapid changes create pressure to continuously find new, efficient ways to organize work practices. Increased competition requires businesses to become more effective, to pay attention to the quality of management, and to make people understand their work's impact on the final result. The fundamentals of continuous improvement are the systematic and agile tackling of identified individual process constraints, and the fact that nothing finally improves without changes. Successful continuous improvement requires management commitment, education, implementation, measurement, recognition and regeneration. These ingredients form the foundation both for breakthrough projects and for small-step ongoing improvement activities. One part of an organization's management system is the set of quality tools, which provide systematic methodologies for identifying problems, defining their root causes, finding solutions, gathering and sorting data, supporting decision making, implementing changes, and many other management tasks. Organizational change management includes processes and tools for managing people in an organizational-level change. These tools include a structured approach which can be used for the effective transition of organizations through change. When combined with an understanding of how individuals experience change, these tools provide a framework for managing people in change.

Relevance:

30.00%

Publisher:

Abstract:

The consumption of manganese is increasing, but huge amounts of manganese still end up in waste in hydrometallurgical processes. The recovery of manganese from multi-metal solutions at low concentrations may not be economical. In addition, poor iron control typically prevents the production of high-purity manganese. Separation of iron from manganese can be done with chemical precipitation or solvent extraction methods. Carbonate precipitation combined with air oxidation is a feasible method to separate iron and manganese due to the fast kinetics, good controllability and economical reagents. In addition, the leaching of manganese carbonate is easier and less acid-consuming than that of hydroxide or sulfide precipitates. Selective iron removal with high efficiency from MnSO4 solution is achieved by combined oxygen or air oxidation and CaCO3 precipitation at pH > 5.8 and at a redox potential > 200 mV. In order to avoid gypsum formation, soda ash should be used instead of limestone. In that case, however, extra attention needs to be paid to the reagent mole ratios in order to avoid manganese coprecipitation. After iron removal, pure MnSO4 solution was obtained by solvent extraction using the organophosphorus reagents di-(2-ethylhexyl)phosphoric acid (D2EHPA) and bis(2,4,4-trimethylpentyl)phosphinic acid (CYANEX 272). The Mn/Ca and Mn/Mg selectivities can be increased by decreasing the temperature from the commonly used temperatures (40-60 °C) to 5 °C. The extraction order of D2EHPA (Ca before Mn) at low temperature remains unchanged, but the lowering of temperature causes an increase in viscosity and slower phase separation. Of these reagents, CYANEX 272 is selective for Mn over Ca and would therefore be the better choice if Ca is present in solution. A three-stage Mn extraction followed by two-stage scrubbing and two-stage sulfuric acid stripping is an effective method of producing a very pure MnSO4 intermediate solution for further processing.
From the intermediate MnSO4, some special Mn products for ion exchange applications were synthesized and studied. Three types of octahedrally coordinated manganese oxide materials were chosen for synthesis as alternative final products for manganese: layer-structured Na-birnessite, tunnel-structured Mg-todorokite and K-kryptomelane. As an alternative source of pure MnSO4 intermediate, kryptomelane was synthesized using a synthetic hydrometallurgical tailings material. The results show that the studied OMS materials selectively adsorb Cu, Ni, Cd and K in the presence of Ca and Mg. It was also found that the exchange rates were reasonably high due to the small particle dimensions. The materials are stable in the studied conditions, and their maximum Cu uptake capacity was 1.3 mmol/g. Competitive uptake of metals and acid was studied using equilibrium, batch kinetic and fixed-bed measurements. The experimental data were correlated with a dynamic model, which also accounts for the dissolution of the framework manganese. Manganese oxide micro-crystals were also bound onto silica to prepare a composite material with a particle size large enough to be used in column separation experiments. The MnOx/SiO2 ratio was found to affect the properties of the composite significantly: the higher the ratio, the lower the specific surface area, pore volume and pore size. On the other hand, a higher amount of silica binder gives the composites better mechanical properties. Birnessite and todorokite can be aggregated successfully with colloidal silica at pH 4 and with a MnO2/SiO2 weight ratio of 0.7. The best gelation and drying temperature was 110 °C, and sufficiently strong composites were obtained by additional heat treatment at 250 °C for 2 h. The results show that silica-supported MnO2 materials can be utilized to separate copper from nickel and cadmium.
The behavior of the composites can be explained reasonably well with the presented model and the parameters estimated from the data of the unsupported oxides. The metal uptake capacities of the prepared materials were quite small; for example, the final copper loading was 0.14 mmol/gMnO2. According to the results, the special MnO2 materials show potential for specific environmental applications in the uptake of harmful metal ions.
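As a rough illustration of why a multi-stage extraction yields a very pure intermediate, the fraction of metal left in the aqueous phase after n ideal cross-current stages (a simplification of the circuit described above, with fresh organic at each stage) can be sketched as follows; the distribution ratio and phase ratio are illustrative assumptions, not fitted values:

```python
def raffinate_fraction(D, oa_ratio, n_stages):
    """Fraction of metal remaining in the aqueous phase after n ideal
    cross-current extraction stages with fresh organic in each stage.

    D: distribution ratio (organic/aqueous concentration at equilibrium),
    oa_ratio: organic-to-aqueous phase ratio. Illustrative model only.
    """
    return (1.0 / (1.0 + D * oa_ratio)) ** n_stages

# e.g. with D = 9 and O/A = 1, three stages leave 0.1% of the metal behind
print(raffinate_fraction(9.0, 1.0, 3))  # -> 0.001
```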

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this work is to analyze the importance of the gas-solid interface transfer of the kinetic energy of turbulent motion for the accuracy of prediction of the fluid dynamics of circulating fluidized bed (CFB) reactors. CFB reactors are used in a variety of industrial applications related to combustion, incineration and catalytic cracking. In this work a two-dimensional fluid dynamic model for gas-particle flow has been used to compute the porosity, pressure and velocity fields of both phases in 2-D axisymmetric cylindrical coordinates. The fluid dynamic model is based on the two-fluid model approach, in which both phases are considered to be continuous and fully interpenetrating. CFB processes are essentially turbulent. The model of effective stress on each phase is that of a Newtonian fluid, where the effective gas viscosity was calculated from the standard k-epsilon turbulence model and the transport coefficients of the particulate phase were calculated from the kinetic theory of granular flow (KTGF). This work shows that the turbulence transfer between the phases is very important for a better representation of the fluid dynamics of CFB reactors, especially for systems with internal recirculation and high gradients of particle concentration. Two systems with different characteristics were analyzed, and the results were compared with experimental data available in the literature. The results were obtained using a computer code developed by the authors. The finite volume method with a collocated grid, the hybrid interpolation scheme, the false time step strategy and the SIMPLEC (Semi-Implicit Method for Pressure-Linked Equations - Consistent) algorithm were used to obtain the numerical solution.
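In the standard k-epsilon model named above, the effective (eddy) gas viscosity follows from the turbulent kinetic energy k and its dissipation rate epsilon. A minimal sketch of that closure relation, with illustrative flow values rather than data from the paper:

```python
def turbulent_viscosity(rho, k, eps, c_mu=0.09):
    """Eddy viscosity of the standard k-epsilon model:
    mu_t = rho * C_mu * k^2 / eps, with the standard constant C_mu = 0.09.

    rho: gas density (kg/m3), k: turbulent kinetic energy (m2/s2),
    eps: dissipation rate (m2/s3). Returns mu_t in Pa*s.
    """
    return rho * c_mu * k**2 / eps

# air-like gas phase with illustrative turbulence values
print(turbulent_viscosity(rho=1.2, k=0.5, eps=2.0))  # -> 0.0135 Pa*s
```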

Relevance:

30.00%

Publisher:

Abstract:

The increasing amount of renewable-energy-based electricity production has set high load-control requirements for power grid balance markets. The essential grid balance between electricity consumption and generation is currently hard to achieve economically with new generation solutions. Therefore, conventional combustion power generation is examined in this thesis as a solution to the foregoing issue. Circulating fluidized bed (CFB) technology is known to have sufficient scale to act as a large grid-balancing unit. Although the load change rate of a CFB unit is known to be moderately high, a supplementary repowering solution is evaluated in this thesis to maximize the load change rate. The repowering heat duty is delivered to the CFB feed-water preheating section by a smaller gas turbine (GT) unit. Consequently, steam-extraction preheating may be decreased and a large amount of the gas turbine exhaust heat may be utilized in the CFB process to reach maximum plant electrical efficiency. Earlier studies of repowering have focused on efficiency improvements and retrofitting to maximize plant electrical output. This study, however, presents the CFB load change improvement possibilities achieved with supplementary GT heat. The repowering study is prefaced with a literature and theory review of both processes to maximize the accuracy of the research. Both dynamic and steady-state simulations, carried out with the APROS simulation tool, are used to evaluate the effects of repowering on CFB unit operation. Finally, a conceptual-level analysis is completed to compare the repowered plant performance to state-of-the-art CFB performance. Based on the performed simulations, considerable improvements to the CFB process parameters are achieved with repowering. Consequently, the results show the possibility of higher ramp rates with repowered CFB technology. This enables better plant suitability for the grid balance markets.
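The ramp rates discussed above describe how fast a unit may move between load setpoints. A ramp-rate-limited load change can be sketched with a simple step limiter; the rates and loads below are illustrative assumptions, not APROS results:

```python
def ramp_load(target, current, ramp_rate, dt):
    """One time step of a ramp-rate-limited load change.

    Loads in % of nominal, ramp_rate in %/min, dt in min.
    Illustrative plant-control sketch, not the thesis model.
    """
    step = ramp_rate * dt
    if abs(target - current) <= step:
        return target                       # setpoint reached this step
    return current + step if target > current else current - step

load = 60.0
for _ in range(10):                         # ten one-minute steps towards full load
    load = ramp_load(100.0, load, ramp_rate=4.0, dt=1.0)
print(load)  # -> 100.0 (a 40 % load change completed at 4 %/min)
```

A higher achievable ramp_rate, as reported for the repowered CFB, shortens the time needed to cover the same load change, which is what makes the unit more attractive for balance markets.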

Relevance:

30.00%

Publisher:

Abstract:

Highly dynamic systems, often considered resilient systems, are characterised by abiotic and biotic processes under continuous and strong changes in space and time. Because of this variability, the detection of overlapping anthropogenic stress is challenging. Coastal areas harbour dynamic ecosystems in the form of open sandy beaches, which cover the vast majority of the world's ice-free coastline. These ecosystems are currently threatened by increasing human-induced pressure, among them the mass development of opportunistic macroalgae (mainly Chlorophyta, so-called green tides) resulting from the eutrophication of coastal waters. The ecological impact of opportunistic macroalgal blooms (green tides, and blooms formed by other opportunistic taxa) has long been evaluated within sheltered and non-tidal ecosystems. Little is known, however, about how more dynamic ecosystems, such as open macrotidal sandy beaches, respond to such stress. This thesis assesses the effects of anthropogenic stress on the structure and functioning of highly dynamic ecosystems, using sandy beaches impacted by green tides as a study case. The thesis is based on four field studies, which analyse natural sandy-sediment benthic community dynamics over several temporal (from month to multi-year) and spatial (from local to regional) scales. In this thesis, I report long-lasting responses of sandy beach benthic invertebrate communities to green tides, across thousands of kilometres and over seven years, and highlight more pronounced responses of zoobenthos living on exposed sandy beaches compared to semi-exposed sands. Within exposed sandy sediments, and across a vertical scale (from inshore to nearshore sandy habitats), I also demonstrate that the effects of the presence of algal mats on intertidal benthic invertebrate communities are more pronounced than those on subtidal benthic invertebrate assemblages and on flatfish communities.
Focussing on small-scale variations in the most affected faunal group (i.e. benthic invertebrates living at the low shore), this thesis reveals a decrease in overall beta-diversity along a eutrophication gradient manifested in the form of green tides, as well as the increasing importance of biological variables in explaining the ecological variability of sandy beach macrobenthic assemblages along the same gradient. To illustrate the processes associated with the structural shifts observed where green tides occurred, I investigated the effects of high biomasses of opportunistic macroalgae (Ulva spp.) on the trophic structure and functioning of sandy beaches. This work reveals a progressive simplification of sandy beach food-web structure and a modification of energy pathways over time, through direct and indirect effects of Ulva mats on several trophic levels. Through this thesis I demonstrate that highly dynamic systems respond differently (e.g. a shift in δ13C but not in δ15N) and more subtly (e.g. no mass mortality of benthos was found) to anthropogenic stress compared to what has previously been shown within more sheltered and non-tidal systems. Obtaining these results would not have been possible without the approach used throughout this work; I thus present a framework coupling field investigations with analytical approaches to describe shifts in highly variable ecosystems under human-induced stress.
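The beta-diversity decrease reported above can be illustrated with Whittaker's multiplicative measure, total (gamma) richness divided by mean per-site (alpha) richness; the species lists below are invented for illustration:

```python
def whittaker_beta(communities):
    """Whittaker's multiplicative beta diversity: gamma richness
    (species pooled across sites) divided by mean alpha richness."""
    gamma = len(set().union(*communities))
    mean_alpha = sum(len(c) for c in communities) / len(communities)
    return gamma / mean_alpha

# two hypothetical low-shore stations sharing two of four species
sites = [{"a", "b", "c"}, {"b", "c", "d"}]
print(whittaker_beta(sites))  # 4 species pooled / mean richness of 3
```

As communities homogenize along a eutrophication gradient, the sites share more species, gamma approaches mean alpha, and this ratio falls towards 1, which is the pattern of decreasing beta-diversity described in the text.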

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
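The two-sided autoregression at the heart of the method can be illustrated on simulated data: for a stationary AR(1) with coefficient phi, the conditional expectation of y_t given its two neighbours is phi/(1+phi^2) times their sum, so an OLS regression of the odd-indexed observations on their neighbours should recover that value. A sketch on simulated data (not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate an AR(1): y_t = 0.5 * y_{t-1} + e_t
n, phi = 501, 0.5
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

# two-sided autoregression: regress y_t (t odd) on (y_{t-1}, y_{t+1});
# after conditioning on the even-indexed subsample, these regressions
# satisfy classical linear-model assumptions.
t_odd = np.arange(1, n - 1, 2)
X = np.column_stack([y[t_odd - 1], y[t_odd + 1]])
b, *_ = np.linalg.lstsq(X, y[t_odd], rcond=None)
print(b)  # both coefficients should be near phi / (1 + phi**2) = 0.4
```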

Relevance:

30.00%

Publisher:

Abstract:

Social interactions arguably provide a rationale for several important phenomena, from smoking and other risky behavior in teens to peer effects in school performance. We study social interactions in dynamic economies. For these economies, we provide existence (Markov perfect equilibrium in pure strategies), ergodicity, and welfare results. We also characterize equilibria in terms of agents' policy functions, spatial equilibrium correlations and social multiplier effects, depending on the nature of interactions. Most importantly, we study formally the issue of the identification of social interactions, with special emphasis on the restrictions imposed by dynamic equilibrium conditions.

Relevance:

30.00%

Publisher:

Abstract:

Avec les avancements de la technologie de l'information, les données temporelles économiques et financières sont de plus en plus disponibles. Par contre, si les techniques standard de l'analyse des séries temporelles sont utilisées, une grande quantité d'information est accompagnée du problème de dimensionnalité. Puisque la majorité des séries d'intérêt sont hautement corrélées, leur dimension peut être réduite en utilisant l'analyse factorielle. Cette technique est de plus en plus populaire en sciences économiques depuis les années 90. Étant donnée la disponibilité des données et des avancements computationnels, plusieurs nouvelles questions se posent. Quels sont les effets et la transmission des chocs structurels dans un environnement riche en données? Est-ce que l'information contenue dans un grand ensemble d'indicateurs économiques peut aider à mieux identifier les chocs de politique monétaire, à l'égard des problèmes rencontrés dans les applications utilisant des modèles standards? Peut-on identifier les chocs financiers et mesurer leurs effets sur l'économie réelle? Peut-on améliorer la méthode factorielle existante et y incorporer une autre technique de réduction de dimension comme l'analyse VARMA? Est-ce que cela produit de meilleures prévisions des grands agrégats macroéconomiques et aide au niveau de l'analyse par fonctions de réponse impulsionnelles? Finalement, est-ce qu'on peut appliquer l'analyse factorielle au niveau des paramètres aléatoires? Par exemple, est-ce qu'il existe seulement un petit nombre de sources de l'instabilité temporelle des coefficients dans les modèles macroéconomiques empiriques? Ma thèse, en utilisant l'analyse factorielle structurelle et la modélisation VARMA, répond à ces questions à travers cinq articles. Les deux premiers chapitres étudient les effets des chocs monétaire et financier dans un environnement riche en données. Le troisième article propose une nouvelle méthode en combinant les modèles à facteurs et VARMA. 
Cette approche est appliquée dans le quatrième article pour mesurer les effets des chocs de crédit au Canada. La contribution du dernier chapitre est d'imposer la structure à facteurs sur les paramètres variant dans le temps et de montrer qu'il existe un petit nombre de sources de cette instabilité. Le premier article analyse la transmission de la politique monétaire au Canada en utilisant le modèle vectoriel autorégressif augmenté par facteurs (FAVAR). Les études antérieures basées sur les modèles VAR ont trouvé plusieurs anomalies empiriques suite à un choc de la politique monétaire. Nous estimons le modèle FAVAR en utilisant un grand nombre de séries macroéconomiques mensuelles et trimestrielles. Nous trouvons que l'information contenue dans les facteurs est importante pour bien identifier la transmission de la politique monétaire et elle aide à corriger les anomalies empiriques standards. Finalement, le cadre d'analyse FAVAR permet d'obtenir les fonctions de réponse impulsionnelles pour tous les indicateurs dans l'ensemble de données, produisant ainsi l'analyse la plus complète à ce jour des effets de la politique monétaire au Canada. Motivée par la dernière crise économique, la recherche sur le rôle du secteur financier a repris de l'importance. Dans le deuxième article nous examinons les effets et la propagation des chocs de crédit sur l'économie réelle en utilisant un grand ensemble d'indicateurs économiques et financiers dans le cadre d'un modèle à facteurs structurel. Nous trouvons qu'un choc de crédit augmente immédiatement les diffusions de crédit (credit spreads), diminue la valeur des bons de Trésor et cause une récession. Ces chocs ont un effet important sur des mesures d'activité réelle, indices de prix, indicateurs avancés et financiers. Contrairement aux autres études, notre procédure d'identification du choc structurel ne requiert pas de restrictions temporelles entre facteurs financiers et macroéconomiques. 
Moreover, it provides an interpretation of the factors without restricting their estimation. In the third article we study the relationship between the VARMA and factor representations of vector stochastic processes, and propose a new class of factor-augmented VARMA models (FAVARMA). Our starting point is the observation that, in general, the multivariate series and the associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two methods of reducing the number of parameters. The model is applied in two forecasting exercises using US and Canadian data from Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA component helps forecast the major macroeconomic aggregates better than standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors yields coherent and precise estimates of the effects and transmission of monetary policy in the United States. Unlike the FAVAR model employed in that study, where 510 VAR coefficients had to be estimated, we produce similar results with only 84 parameters for the dynamic factor process. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment, using a structural FAVARMA model.
Within the theoretical framework of the financial accelerator developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance decomposition analysis reveals that this credit shock has a sizable effect on various real activity sectors, price indices, leading indicators and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the US market. Finally, given the identification procedure for the structural shocks, we obtain economically interpretable factors. The behavior of economic agents and of the economic environment can vary over time (e.g., changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying-parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of time variation in the coefficients is probably very small, and we produce the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach, proposed in Stevanovic (2010), is applied within a standard VAR model with random coefficients (TVP-VAR).
We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data including the recent financial crisis. The procedure now suggests two factors, and the behavior of the coefficients shows a marked change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only 5 dynamic factors govern the temporal instability in almost 700 coefficients.
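The two-step logic behind factor-augmented models — extract a few factors as linear combinations of a large panel, then model only the factor dynamics — can be sketched in a few lines. This is a minimal illustration with simulated data, not the authors' estimator: the dimensions, the VAR(1) fit (standing in for the VARMA stage described above) and all variable names are assumptions.

```python
import numpy as np

# Two-step sketch of a factor-augmented approach (illustrative only):
# 1) extract factors as principal components of a large panel X,
# 2) model the factor dynamics (here a least-squares VAR(1) fit,
#    standing in for the VARMA stage discussed in the text).

rng = np.random.default_rng(0)
T, N, k = 200, 50, 2          # periods, observed series, factors

# Simulate a small factor model: X_t = Lambda f_t + noise
f = np.zeros((T, k))
A = np.array([[0.6, 0.1], [0.0, 0.5]])   # true factor VAR(1) dynamics
for t in range(1, T):
    f[t] = f[t - 1] @ A.T + rng.standard_normal(k)
Lam = rng.standard_normal((N, k))
X = f @ Lam.T + 0.5 * rng.standard_normal((T, N))

# Step 1: principal-component factors (right singular vectors of X)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
f_hat = Xc @ Vt[:k].T                    # estimated factors (up to rotation)

# Step 2: fit the factor dynamics by OLS, f_t = B f_{t-1} + e_t
Y, Z = f_hat[1:], f_hat[:-1]
B = np.linalg.lstsq(Z, Y, rcond=None)[0].T
print(B.shape)   # (2, 2): only k*k dynamic parameters instead of N*N
```

The parameter saving reported above (84 versus 510 coefficients) comes from exactly this device: dynamics are estimated for the k factors rather than for the full N-variable system.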

Relevância:

30.00% 30.00%

Publicador:

Resumo:

In the present thesis we have formulated the Dalgarno-Lewis procedure for two- and three-photon processes, and elegant alternative expressions are derived. Starting from a brief review of various multiphoton processes, we discuss the difficulties arising in the perturbative treatment of multiphoton processes. A short discussion of the available methods for studying multiphoton processes is presented in chapter 2. These theoretical treatments mainly concentrate on the evaluation of the higher-order matrix elements appearing in perturbation theory. In chapter 3 we describe the Dalgarno-Lewis procedure and its implementation for second-order matrix elements. The analytical expressions for the two-photon transition amplitude, the two-photon ionization cross section, the dipole dynamic polarizability and the Kramers-Heisenberg formula are obtained in a unified manner. The fourth chapter is an extension of the implicit summation technique presented in chapter 3. We clearly state the advantage of our method, especially the analytical continuation of the relevant expressions suited to various values of the radiation frequency, which is also used for efficient numerical analysis. A possible extension of the work is to study various multiphoton processes from the Stark-shifted first excited states of the hydrogen atom. We can also extend this procedure to study multiphoton processes in alkali atoms as well as Rydberg atoms. Also, instead of deriving analytical expressions, one can attempt a complete numerical evaluation of the higher-order matrix elements using this procedure.
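The central step of the Dalgarno-Lewis (implicit summation) technique can be stated schematically; the notation below (dipole operator $D$, unperturbed Hamiltonian $H_0$, photon energy $\hbar\omega$) is generic textbook notation, not taken from the thesis itself. The infinite sum over intermediate states in the second-order amplitude,

```latex
M^{(2)} \;=\; \sum_n \frac{\langle f|D|n\rangle\,\langle n|D|i\rangle}{E_n - E_i - \hbar\omega},
```

is eliminated by introducing an auxiliary state $|\psi\rangle$ that solves the inhomogeneous equation

```latex
\left(H_0 - E_i - \hbar\omega\right)|\psi\rangle \;=\; D\,|i\rangle,
\qquad\text{so that}\qquad
M^{(2)} \;=\; \langle f|D|\psi\rangle .
```

Solving one differential equation thus replaces the summation over the complete (discrete plus continuum) spectrum, which is what makes closed-form and efficient numerical evaluation of the higher-order matrix elements possible.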

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Intensification processes in homegardens of the Nuba Mountains, Sudan, raise concerns about strongly positive carbon (C) and nutrient balances, which are expected to lead to substantial element losses from these agroecosystems, in particular via soil gaseous emissions. Therefore, this thesis aimed at quantifying the C, nitrogen (N), phosphorus (P) and potassium (K) input and output fluxes, with a special focus on soil gaseous losses, and at calculating the respective element balances. A further focus of this thesis was rainfall, a valuable resource for rain-fed agriculture in the Nuba Mountains. To minimize the negative consequences of the high variability of rainfall, rain-fed farmers have developed risk-reducing mechanisms that may lose their efficacy in the course of the climate change effects predicted for East Africa. Therefore, the second objective of this study was to examine possible changes in rainfall amounts during the last 60 years and to provide rain-fed farmers in the Nuba Mountains with reliable risk and probability statements on rainfall-induced events of agricultural importance. Soil gaseous emissions of C (as CO2) and N (as NH3 and N2O) from two traditional and two intensified homegardens were determined with a portable dynamic closed-chamber system. Gaseous emission rates of C peaked at the onset of the rainy season (2,325 g CO2-C ha-1 h-1, in an intensified garden type) and those of N during the rainy season (16 g NH3-N ha-1 h-1 and 11.3 g N2O-N ha-1 h-1, in a traditional garden type). The data indicated cumulative annual emissions of 5,893 kg CO2-C ha-1, 37 kg NH3-N ha-1, and 16 kg N2O-N ha-1. For the assessment of the long-term productivity of the two types of homegardens and the identification of pathways of substantial element losses, a C and nutrient budget approach was used. Observation plots were selected in three traditional and three intensified homegardens.
The following variables were quantified on each plot between June and December 2010: soil amendments, irrigation, biomass removal, symbiotic N2 fixation, C fixation by photosynthesis, atmospheric wet and dry deposition, leaching and soil gaseous emissions. Annual balances for C and nutrients amounted to -21 kg C ha-1, -70 kg N ha-1, 9 kg P ha-1 and -117 kg K ha-1 in intensified homegardens, and to -1,722 kg C ha-1, -167 kg N ha-1, -9 kg P ha-1 and -74 kg K ha-1 in traditional homegardens. For the analysis of rainfall data, the INSTAT+ software was used to aggregate the long-term daily rainfall records from the Kadugli and Rashad weather stations into daily, monthly and annual intervals and to calculate rainfall-induced events of agricultural importance. Subsequently, these calculated values and events were checked for possible monotonic trends with Mann-Kendall tests. Over the period from 1970 to 2009, annual rainfall did not change significantly at either station. However, during this period an increase in low rainfall events, coinciding with a decline in the number of medium daily rainfall events, was observed in Rashad. Furthermore, the availability of daily rainfall data enabled frequency and conditional probability calculations, which showed either no statistically significant changes or trends resulting in only minor changes of probabilities.
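The Mann-Kendall test used for the trend checks above is simple enough to sketch in full. This is a minimal textbook implementation (no correction for tied values), not the INSTAT+ routine the thesis actually used:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns S and Z.

    S sums sign(x_j - x_i) over all pairs i < j; Z is the normal
    approximation with Var(S) = n(n-1)(2n+5)/18.
    """
    n = len(x)
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A strictly increasing series yields the maximal S = n(n-1)/2
s, z = mann_kendall(list(range(10)))
print(s, round(z, 2))   # 45 3.94
```

A |Z| above roughly 1.96 would indicate a monotonic trend at the 5% level, which is the kind of statement made above for the Rashad low-rainfall events.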

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Many online services access a large number of autonomous data sources and at the same time need to meet different user requirements. It is essential for these services to achieve semantic interoperability among these information exchange entities. In the presence of an increasing number of proprietary business processes, heterogeneous data standards, and diverse user requirements, it is critical that the services are implemented using adaptable, extensible, and scalable technology. The COntext INterchange (COIN) approach, inspired by the similar goals of the Semantic Web, provides a robust solution. In this paper, we describe how COIN can be used to implement dynamic online services where semantic differences are reconciled on the fly. We show that COIN is flexible and scalable by comparing it with several conventional approaches. With a given ontology, the number of conversions in COIN is quadratic in the number of distinctions of the semantic aspect with the largest number of distinctions. These semantic aspects are modeled as modifiers in a conceptual ontology; in most cases the number of conversions is linear in the number of modifiers, which is significantly smaller than in the traditional hard-wired middleware approach, where the number of conversion programs grows quadratically with the number of sources and data receivers. In the example scenario in the paper, the COIN approach needs only 5 conversions to be defined, while traditional approaches require 20,000 to 100 million. COIN achieves this scalability by automatically composing all the comprehensive conversions from a small number of declaratively defined sub-conversions.
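The scalability argument is essentially counting. A rough back-of-envelope comparison can make it concrete; the specific numbers (200 sources, 100 receivers, two modifiers with 2 and 3 distinctions) are illustrative assumptions chosen to reproduce the orders of magnitude quoted above, not the paper's exact scenario:

```python
# Conversion-count comparison: hard-wired middleware vs. a
# declarative, modifier-based approach in the spirit of COIN.

def hardwired(sources, receivers):
    # One dedicated conversion program per (source, receiver) pair,
    # so the count grows with the product of the two.
    return sources * receivers

def declarative(distinctions_per_modifier):
    # One declaratively defined sub-conversion per distinction of each
    # modifier; full conversions are composed automatically from these.
    return sum(distinctions_per_modifier)

print(hardwired(200, 100))      # 20000 pairwise programs to write
print(declarative([2, 3]))      # 5 sub-conversions to define
```

The hard-wired count explodes as sources and receivers are added, while the declarative count only grows with the (small) number of modifier distinctions, which is the crux of the 5-versus-20,000 comparison above.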