852 results for Multi-Equation Income Model


Relevance:

100.00%

Publisher:

Abstract:

MultiProcessor Systems-on-Chip (MPSoCs) are the core of current and next-generation computing platforms. Their relevance in the global market continuously increases, as they play an important role both in everyday products (e.g. smartphones, tablets, laptops, cars) and in strategic sectors such as aviation, defense, robotics, and medicine. Despite the remarkable performance improvements of recent years, processor manufacturers have had to deal with issues, commonly called "Walls", that have hindered processor development. After the famous "Power Wall", which limited the maximum frequency of a single core and marked the birth of the modern multiprocessor system-on-chip, the "Thermal Wall" and the "Utilization Wall" are the current key limiters to performance improvements. The former concerns the damaging effects of high on-chip temperatures caused by large power densities, whereas the latter refers to the impossibility of fully exploiting the computing power of the processor due to limitations on power and temperature budgets. In this thesis we faced these challenges by developing efficient and reliable solutions able to maximize performance while keeping the maximum temperature below a fixed critical threshold and saving energy. This has been possible by exploiting the Model Predictive Control (MPC) paradigm, which solves an optimization problem subject to constraints in order to find the optimal control decisions for the future interval. A fully distributed MPC-based thermal controller with far lower complexity than a centralized one has been developed. The feasibility of the control and properties useful for simplifying the control design have been proven by studying a partial differential equation thermal model. Finally, the controller has been efficiently included in more complex control schemes able to minimize energy consumption and deal with mixed-criticality tasks.
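
The following is a minimal sketch of the constrained-MPC idea the abstract describes, not the thesis controller: a single-core linear thermal model, a short prediction horizon, and a brute-force search over quantized power levels that maximizes delivered power while keeping the predicted temperature below a critical threshold. The model coefficients, horizon, and power levels are illustrative assumptions.

```python
import itertools

# Illustrative discrete-time thermal model (all coefficients are assumed values):
#   T[k+1] = A*T[k] + B*p[k] + C*T_amb
A, B, C = 0.95, 0.8, 0.05            # thermal dynamics coefficients
T_AMB   = 45.0                       # ambient temperature (deg C)
T_CRIT  = 80.0                       # critical temperature threshold (deg C)
HORIZON = 3                          # prediction horizon (steps)
POWER_LEVELS = (0.5, 1.0, 2.0, 4.0)  # admissible per-step power budgets (W)

def predict(T0, powers):
    """Roll the linear thermal model forward over the horizon."""
    T, traj = T0, []
    for p in powers:
        T = A * T + B * p + C * T_AMB
        traj.append(T)
    return traj

def mpc_step(T0):
    """Pick the first power of the best admissible plan (receding horizon)."""
    best_plan, best_energy = None, -1.0
    for plan in itertools.product(POWER_LEVELS, repeat=HORIZON):
        if all(T <= T_CRIT for T in predict(T0, plan)):   # temperature constraint
            energy = sum(plan)                            # proxy for performance
            if energy > best_energy:
                best_plan, best_energy = plan, energy
    return best_plan[0] if best_plan else min(POWER_LEVELS)

# Closed-loop simulation: apply only the first decision, then re-optimize.
T = 70.0
for step in range(5):
    p = mpc_step(T)
    T = A * T + B * p + C * T_AMB
    print(f"step {step}: power={p:.1f} W, temperature={T:.1f} C")
```

A realistic distributed implementation would replace the exhaustive search with a small optimization problem solved per core, exchanging boundary temperatures with neighbouring cores, in the spirit of the fully distributed controller the abstract describes.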

Relevance:

100.00%

Publisher:

Abstract:

Metals price risk management is a key issue related to financial risk in metal markets because of the uncertainty of commodity price fluctuations, exchange rates and interest rate changes, and the huge price risk faced by both metals producers and consumers. Thus, it has been taken into account by all participants in metal markets, including metals producers, consumers, merchants, banks, investment funds, speculators and traders. Managing price risk provides stable income for both metals producers and consumers, and so increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have been around in some form for centuries, their growth has accelerated rapidly during the last 20 years. Nowadays, they are widely used by financial institutions, corporations, professional investors and individuals. This project is focused on the over-the-counter (OTC) market and its products, such as exotic options, particularly Asian options. The first part of the project is a description of basic derivatives and risk management strategies. In addition, this part discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in the derivative markets. The second part considers the valuation of commodity derivatives. In this part, the DerivaGem option pricing software is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare theoretical values of the options with their observed market values. Predicting future trends of copper prices is important and essential to manage market price risk successfully. Therefore, the third part is a discussion of econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part aims at showing how LME copper prices can be explained by means of a simultaneous-equation structural model (two-stage least squares regression) connecting supply and demand variables. A simultaneous econometric model for the copper industry is built:

\[
\begin{cases}
Q_t^D = e^{-5.0485}\, P_{t-1}^{-0.1868}\, \mathrm{GDP}_t^{1.7151}\, e^{0.0158\,\mathrm{IP}_t} \\
Q_t^S = e^{-3.0785}\, P_{t-1}^{0.5960}\, T_t^{0.1408}\, P_{\mathrm{OIL}(t)}^{-0.1559}\, \mathrm{USDI}_t^{1.2432}\, \mathrm{LIBOR}_{t-6}^{-0.0561} \\
Q_t^D = Q_t^S
\end{cases}
\]

together with the corresponding price equation

\[
P_{t-1}^{CU} = e^{-2.5165}\, \mathrm{GDP}_t^{2.1910}\, e^{0.0202\,\mathrm{IP}_t}\, T_t^{-0.1799}\, P_{\mathrm{OIL}(t)}^{0.1991}\, \mathrm{USDI}_t^{-1.5881}\, \mathrm{LIBOR}_{t-6}^{0.0717},
\]

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, which represents aggregate economic activity. In addition, industrial production should be considered here, so global industrial production growth, denoted IP_t, is included in the model.
T_t is the time variable, which is a useful proxy for technological change. A proxy variable for the cost of energy in producing copper is the price of oil at time t, denoted P_{OIL(t)}. USDI_t is the U.S. dollar index at time t, which is an important variable for explaining copper supply and copper prices. Finally, LIBOR_{t-6} is the six-month-lagged one-year London Interbank Offered Rate. Although the model could be applied to other base-metal industries, omitted exogenous variables, such as the price of a substitute or a combined variable related to the prices of substitutes, have not been considered in this study. Based on this econometric model and using a Monte-Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will be greater than a specific strike price of an option are estimated. The final part evaluates risk management strategies, including option strategies, metal swaps and simple options, in relation to the simulation results. The basic option strategies, such as bull spreads, bear spreads and butterfly spreads, which are created by using both call and put options, are evaluated for 2006 and 2007. Consequently, each risk management strategy in 2006 and 2007 is analyzed on the basis of the available market data and the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
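
As an illustration of the final step the abstract describes (feeding a fitted price equation into a Monte-Carlo analysis to estimate the probability that the monthly average price exceeds an option's strike), the sketch below draws random values for the exogenous variables, evaluates a log-linear price equation of the same form as the price equation above, and counts exceedances. The distributions, coefficient values and strike are placeholder assumptions, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte-Carlo draws

# Placeholder distributions for the exogenous drivers (illustrative only).
gdp   = rng.normal(1.00, 0.02, N)              # world GDP index (normalized)
ip    = rng.normal(3.0, 1.0, N)                # industrial production growth, %
t     = np.full(N, 120.0)                      # time trend (months)
p_oil = rng.lognormal(np.log(60.0), 0.15, N)   # oil price, USD/bbl
usdi  = rng.normal(1.00, 0.03, N)              # US dollar index (normalized)
libor = rng.normal(5.0, 0.5, N)                # 6-month lagged 1-year LIBOR, %

# Log-linear price equation of the same *form* as the reduced form above;
# the coefficients here are placeholders, not the estimated ones.
coef = dict(const=-2.5, gdp=2.2, ip=0.02, t=-0.18, oil=0.20, usdi=-1.6, libor=0.07)
price = (np.exp(coef["const"])
         * gdp ** coef["gdp"]
         * np.exp(coef["ip"] * ip)
         * t ** coef["t"]
         * p_oil ** coef["oil"]
         * usdi ** coef["usdi"]
         * libor ** coef["libor"])

strike = 0.09  # hypothetical strike, in the same normalized units as `price`
prob = np.mean(price > strike)
print(f"P(monthly average price > strike) ~ {prob:.3f}")
```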

Relevance:

100.00%

Publisher:

Abstract:

The induction of late long-term potentiation (L-LTP) involves complex interactions among second-messenger cascades. To gain insights into these interactions, a mathematical model was developed for L-LTP induction in the CA1 region of the hippocampus. The differential equation-based model represents actions of protein kinase A (PKA), MAP kinase (MAPK), and CaM kinase II (CaMKII) in the vicinity of the synapse, and activation of transcription by CaM kinase IV (CaMKIV) and MAPK. L-LTP is represented by increases in a synaptic weight. Simulations suggest that steep, supralinear stimulus-response relationships between stimuli (e.g., elevations in [Ca(2+)]) and kinase activation are essential for translating brief stimuli into long-lasting gene activation and synaptic weight increases. Convergence of multiple kinase activities to induce L-LTP helps to generate a threshold whereby the amount of L-LTP varies steeply with the number of brief (tetanic) electrical stimuli. The model simulates tetanic, theta-burst, pairing-induced, and chemical L-LTP, as well as L-LTP due to synaptic tagging. The model also simulates inhibition of L-LTP by inhibition of MAPK, CaMKII, PKA, or CaMKIV. The model predicts results of experiments to delineate mechanisms underlying L-LTP induction and expression. For example, the cAMP antagonist Rp-cAMPS, which inhibits L-LTP induction, is predicted to inhibit ERK activation. The model also appears useful for clarifying similarities and differences between hippocampal L-LTP and long-term synaptic strengthening in other systems.
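
The sketch below is a toy two-variable model, not the published one, illustrating the abstract's central point: a steep, supralinear stimulus-response relationship (here a Hill function) lets brief Ca2+ pulses drive a slow "synaptic weight" variable to an elevated level that outlasts the stimuli. All rate constants and the stimulus protocol are assumptions.

```python
import numpy as np

# Toy model (illustrative only):
#   kinase activation K follows Ca with a steep Hill nonlinearity,
#   synaptic weight W integrates K slowly and decays very slowly.
def ca_stimulus(t, n_tetani=4, period=20.0, width=1.0, amp=4.0, base=0.1):
    """Brief Ca2+ pulses mimicking repeated tetanic stimuli."""
    for i in range(n_tetani):
        if i * period <= t < i * period + width:
            return base + amp
    return base

def hill(x, K=2.0, n=4):
    """Steep, supralinear stimulus-response relationship."""
    return x**n / (K**n + x**n)

dt, t_end = 0.01, 600.0
K_act, W = 0.0, 1.0
for t in np.arange(0.0, t_end, dt):
    ca = ca_stimulus(t)
    dK = 5.0 * hill(ca) - 0.5 * K_act                    # fast kinase activation/decay
    dW = 0.02 * K_act * (2.0 - W) - 0.0005 * (W - 1.0)   # slow weight change
    K_act += dt * dK
    W += dt * dW

print(f"weight before stimuli: 1.00, weight long after stimuli: {W:.2f}")
```

In this toy, lowering the Hill coefficient to n = 1 makes baseline Ca2+ activate the kinase appreciably, so the weight change no longer depends sharply on the stimuli; that loss of threshold behaviour is the qualitative role the abstract attributes to supralinearity.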

Relevance:

100.00%

Publisher:

Abstract:

When proposing primary control (changing the world to fit the self)/secondary control (changing the self to fit the world) theory, Weisz et al. (1984) argued for the importance of the "serenity to accept the things I cannot change, the courage to change the things I can" (p. 967), and the wisdom to choose the right control strategy for the context. Although the dual-process theory of control has generated hundreds of empirical studies, most of them focused on the dichotomy of PC and SC, and none tapped into the critical concept: individuals' ability to know when to use which. This project addressed this issue by using scenario questions to study the impact of situationally adaptive control strategies on youth well-being. To understand the antecedents of youths' preference for PC or SC, we also connected PC-SC theory with Dweck's implicit theory about the changeability of the world. We hypothesized that youths' beliefs about the world's changeability affect how difficult it is for them to choose a situationally adaptive control orientation, which in turn affects their well-being. This study included adolescents and emerging adults between the ages of 18 and 28 years (mean = 20.87 years) from the US (n = 98), China (n = 100), and Switzerland (n = 103). Participants answered a questionnaire including a measure of implicit theories about the fixedness of the external world, a scenario-based measure of control orientation, and several measures of well-being. Preliminary analyses of the scenario-based control orientation measures showed striking cross-cultural similarity in preferred control responses: for three of the six scenarios primary control was the predominantly chosen response in all cultures, while for the other three scenarios secondary control was the predominantly chosen response. This suggests that youths across cultures are aware that some situations call for primary control, while others demand secondary control. We considered the control strategy winning the majority of the votes to be the strategy that is situationally adaptive. A multi-group structural equation mediation model, with the extent of belief in a fixed world as the independent variable, the difficulties of carrying out the respective adaptive versus non-adaptive control responses as two mediating variables, and a latent well-being variable as the dependent variable, showed a cross-culturally similar pattern of effects: belief in a fixed world was significantly related to greater difficulty in carrying out both the normative and the non-normative control response, but only the difficulty of carrying out the normative control response (be it primary control in situations where primary control is normative, or secondary control in situations where secondary control is normative) was significantly related to lower reported well-being, while the difficulty of carrying out the non-normative response was unrelated to well-being. While previous research focused on cross-cultural differences in the choice of PC or SC, this study sheds light on the universal necessity of applying the right kind of control to fit the situation.

Relevance:

100.00%

Publisher:

Abstract:

Numerical simulations of the turbulent flow in a dense medium cyclone with magnetite medium have been conducted using Fluent. The predicted air-core shape and diameter were found to be close to the experimental results measured by gamma-ray tomography. The Large Eddy Simulation (LES) turbulence model combined with the Mixture multi-phase model can likely be used to predict the air/slurry interface accurately, although the LES may need a finer grid. Multi-phase simulations (air/water/medium) show appropriate medium segregation effects but over-predict the level of segregation compared with that measured by gamma-ray tomography, in particular over-predicting medium concentrations near the wall. Further, the accurate prediction of axial segregation of magnetite was investigated using the LES turbulence model together with the multi-phase Mixture model and viscosity corrections based on the feed particle loading factor. Adding lift forces and the viscosity correction improved the predictions, especially near the wall. Predicted density profiles are very close to the gamma-ray tomography data, showing a clear density drop near the wall. The effect of the size distribution of the magnetite has been studied in detail. It is interesting to note that the ultra-fine magnetite sizes (i.e. 2 and 7 µm) are distributed uniformly throughout the cyclone. As the size of the magnetite increases, more segregation of magnetite occurs close to the wall. The cut size (d50) of the magnetite segregation is 32 µm, which is expected with a superfine magnetite feed size distribution. At higher feed densities the agreement between the correlations of [Dungilson, 1999; Wood, J.C., 1990. A performance model for coal-washing dense medium cyclones, Ph.D. Thesis, JKMRC, University of Queensland] and the CFD is reasonably good, but the overflow density is lower than the model predictions. It is believed that the excessive underflow volumetric flow rates are responsible for the under-prediction of the overflow density. (c) 2006 Elsevier Ltd. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

Count data with excess zeros relative to a Poisson distribution are common in many biomedical applications. A popular approach to the analysis of such data is to use a zero-inflated Poisson (ZIP) regression model. Often, because of the hierarchical study design or the data collection procedure, zero-inflation and lack of independence may occur simultaneously, which renders the standard ZIP model inadequate. To account for the preponderance of zero counts and the inherent correlation of observations, a class of multi-level ZIP regression models with random effects is presented. Model fitting is facilitated using an expectation-maximization algorithm, whereas variance components are estimated via residual maximum likelihood estimating equations. A score test for zero-inflation is also presented. The multi-level ZIP model is then generalized to cope with a more complex correlation structure. Application to the analysis of correlated count data from a longitudinal infant feeding study illustrates the usefulness of the approach.
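
As a minimal, single-level illustration of the ZIP likelihood underlying this approach (not the paper's multi-level model with random effects, EM fitting, or REML variance components), the sketch below simulates zero-inflated counts and recovers the mixing probability and Poisson mean by direct maximum likelihood. All settings are assumed values.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(1)

# Simulate zero-inflated Poisson counts (no covariates, no random effects).
n, pi_true, lam_true = 500, 0.3, 2.5
is_structural_zero = rng.random(n) < pi_true
y = np.where(is_structural_zero, 0, rng.poisson(lam_true, n))

def zip_negloglik(params, y):
    """Negative log-likelihood of the ZIP(pi, lambda) model."""
    pi, lam = expit(params[0]), np.exp(params[1])   # keep parameters in range
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))  # P(Y = 0)
    ll_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

res = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
pi_hat, lam_hat = expit(res.x[0]), np.exp(res.x[1])
print(f"estimated pi={pi_hat:.2f} (true {pi_true}), lambda={lam_hat:.2f} (true {lam_true})")
```

The multi-level extension described in the abstract would add cluster-specific random effects to both the zero-inflation and Poisson components, which is what motivates the EM and REML machinery.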

Relevance:

100.00%

Publisher:

Abstract:

The concept of random lasers exploiting multiple scattering of photons in an amplifying disordered medium in order to generate coherent light without a traditional laser resonator has attracted a great deal of attention in recent years. This research area lies at the interface of the fundamental theory of disordered systems and laser science. The idea was originally proposed in the context of astrophysics in the 1960s by V.S. Letokhov, who studied scattering with "negative absorption" in interstellar molecular clouds. Research on random lasers has since developed into a mature experimental and theoretical field. A simple design of such lasers would be promising for potential applications. However, in traditional random lasers the properties of the output radiation are typically characterized by complex features in the spatial, spectral and time domains, making them less attractive than standard laser systems in terms of practical applications. Recently, an interesting and novel type of one-dimensional random laser that operates in a conventional telecommunication fibre without any pre-designed resonator mirrors, the random distributed feedback fibre laser, was demonstrated. The positive feedback required for laser generation in random fibre lasers is provided by Rayleigh scattering from the inhomogeneities of the refractive index that are naturally present in silica glass. In the proposed laser concept, the randomly backscattered light is amplified through the Raman effect, providing distributed gain over distances up to 100 km. Although the effective reflection due to Rayleigh scattering is extremely small (~0.1%), the lasing threshold may be exceeded when a sufficiently large distributed Raman gain is provided. Such a random distributed feedback fibre laser has a number of interesting and attractive features. The fibre waveguide geometry provides transverse confinement, and effectively one-dimensional random distributed feedback leads to the generation of a stationary near-Gaussian beam with a narrow spectrum. A random distributed feedback fibre laser has efficiency and performance that are comparable to, and even exceed, those of similar conventional fibre lasers. The key features of the generated radiation of random distributed feedback fibre lasers include a stationary narrow-band continuous modeless spectrum that is free of mode competition, nonlinear power broadening, and an output beam with a Gaussian profile in the fundamental transverse mode (generated both in single-mode and multi-mode fibres). This review presents the current status of research in the field of random fibre lasers and shows their potential and prospects. We start with an introductory overview of conventional distributed feedback lasers and traditional random lasers to set the stage for discussion of random fibre lasers. We then present a theoretical analysis and experimental studies of various random fibre laser configurations, including widely tunable, multi-wavelength, narrow-band generation, and random fibre lasers operating in different spectral bands in the 1–1.6 μm range. Then we discuss existing and future applications of random fibre lasers, including telecommunication and distributed long-reach sensor systems. A theoretical description of random lasers is very challenging and is strongly linked with the theory of disordered systems and kinetic theory.
We outline two key models governing the generation of random fibre lasers: the average power balance model and the nonlinear Schrödinger equation based model. Recently invented random distributed feedback fibre lasers represent a new and exciting field of research that brings together such diverse areas of science as laser physics, the theory of disordered systems, fibre optics and nonlinear science. Stable random generation in optical fibre opens up new possibilities for research on wave transport and localization in disordered media. We hope that this review will provide background information for research in various fields and will stimulate cross-disciplinary collaborations on random fibre lasers. © 2014 Elsevier B.V.
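
For reference, a commonly used form of the second model mentioned above, the nonlinear Schrödinger equation for the slowly varying field envelope in a fibre with loss, dispersion, Kerr nonlinearity and distributed Raman gain, can be sketched as

\[
\frac{\partial A}{\partial z} + \frac{\alpha}{2}A + \frac{i\beta_2}{2}\frac{\partial^2 A}{\partial t^2} = i\gamma\lvert A\rvert^2 A + \frac{g_R P_p(z)}{2}A,
\]

where A(z, t) is the field envelope, α the fibre loss, β₂ the group-velocity dispersion, γ the Kerr coefficient, and g_R P_p(z) the distributed Raman gain provided by the pump power P_p(z). The exact equations used in the review's analysis may differ in detail (e.g. coupled equations for pump, forward and backward waves).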

Relevance:

100.00%

Publisher:

Abstract:

A comprehensive model of the processes involved in femtosecond laser inscription and the subsequent structural material modification is developed. The different time scales of the pulse-plasma dynamics and of the thermo-mechanical relaxation allow these processes to be treated numerically in separate steps, linked by an energy transfer equation. The model is illustrated and analysed using examples of inscription in fused silica, and the results are used to explain previous experimental observations. © 2007 Springer Science+Business Media, LLC.
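
As a generic illustration of how the fast plasma stage can feed the slow thermo-mechanical stage (not the specific equation of this model), the energy absorbed from the pulse can enter a heat-diffusion equation as a source term:

\[
\rho c_p \frac{\partial T}{\partial t} = \nabla\cdot(\kappa \nabla T) + S(\mathbf{r}, t),
\]

where S(r, t) is the energy per unit volume and time transferred from the laser-generated plasma to the lattice, and ρ, c_p and κ are the density, specific heat and thermal conductivity of the glass.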

Relevance:

100.00%

Publisher:

Abstract:

When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies, and a valuable aid in the strategic and tactical decision-making process. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but is run, and the changes of system states can be observed at any point in time. This provides an insight into system dynamics rather than just predicting the output of a system based on specific inputs. Simulation is not a decision-making tool but a decision-support tool, allowing better informed decisions to be made. Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification. Only those characteristics that are important for the study and analysis of the target system should be included in the simulation model. The purpose of simulation is either to better understand the operation of a target system, or to make predictions about a target system's performance. It can be viewed as an artificial white-room which allows one to gain insight but also to test new theories and practices without disrupting the daily routine of the focal organisation. What you can expect to gain from a simulation study is well summarised by FIRMA (2000): if the theory that has been framed about the target system holds, and if this theory has been adequately translated into a computer model, the model allows you to answer some of the following questions:
· Which kind of behaviour can be expected under arbitrarily given parameter combinations and initial conditions?
· Which kind of behaviour will a given target system display in the future?
· Which state will the target system reach in the future?
The required accuracy of the simulation model very much depends on the type of question one is trying to answer. In order to be able to respond to the first question, the simulation model needs to be an explanatory model; this requires less data accuracy. In comparison, the simulation model required to answer the latter two questions has to be predictive in nature and therefore needs highly accurate input data to achieve credible outputs. These predictions involve showing trends, rather than giving precise and absolute predictions of the target system's performance. The numerical results of a simulation experiment on their own are most often not very useful and need to be rigorously analysed with statistical methods. These results then need to be considered in the context of the real system and interpreted in a qualitative way to make meaningful recommendations or compile best-practice guidelines. One needs a good working knowledge of the behaviour of the real system to be able to fully exploit the understanding gained from simulation experiments. The goal of this chapter is to introduce the newcomer to what we think is a valuable asset in the toolset of analysts and decision makers. We give a summary of information we have gathered from the literature and of the first-hand experience we have gained during the last five years while obtaining a better understanding of this exciting technology. We hope that this will help you to avoid some pitfalls that we have unwittingly encountered.
Section 2 is an introduction to the different types of simulation used in Operational Research and Management Science with a clear focus on agent-based simulation. In Section 3 we outline the theoretical background of multi-agent systems and their elements to prepare you for Section 4 where we discuss how to develop a multi-agent simulation model. Section 5 outlines a simple example of a multi-agent system. Section 6 provides a collection of resources for further studies and finally in Section 7 we will conclude the chapter with a short summary.
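
To give a flavour of the agent-based simulation the chapter focuses on (a minimal sketch, not the chapter's Section 5 example), the snippet below runs a ring of agents that repeatedly adopt the majority opinion of their neighbours. It illustrates the point made above: the model is not solved but run, and the system state can be observed at any step.

```python
import random

random.seed(42)

class Agent:
    """An agent holding a binary opinion that follows its local majority."""
    def __init__(self, opinion):
        self.opinion = opinion

def step(agents, neighbourhood=2):
    """Synchronously update every agent to the majority opinion nearby."""
    n = len(agents)
    new_opinions = []
    for i in range(n):
        votes = [agents[(i + d) % n].opinion
                 for d in range(-neighbourhood, neighbourhood + 1)]
        new_opinions.append(1 if sum(votes) > len(votes) / 2 else 0)
    for agent, opinion in zip(agents, new_opinions):
        agent.opinion = opinion

agents = [Agent(random.randint(0, 1)) for _ in range(50)]
for t in range(10):
    share = sum(a.opinion for a in agents) / len(agents)
    print(f"t={t}: share holding opinion 1 = {share:.2f}")  # observe state over time
    step(agents)
```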

Relevance:

100.00%

Publisher:

Abstract:

Colloid self-assembly under external control is a new route to the fabrication of advanced materials with novel microstructures and appealing functionalities. The kinetic processes of colloidal self-assembly have also attracted great interest because they are similar to many atomic-level kinetic processes in materials. In the past decades, rapid technological progress has been made in producing shape-anisotropic, patchy, core-shell structured particles and particles with electric/magnetic charges/dipoles, which has greatly enriched the variety of self-assembled structures. Multi-phase carrier liquids offer a new route to controlling colloidal self-assembly. Heterogeneity is therefore an essential characteristic of colloidal systems, yet there is still no model able to efficiently incorporate these possible heterogeneities. This thesis is mainly devoted to the development of such a model and to a computational study of complex colloidal systems through the diffuse-interface field approach (DIFA) recently developed by Wang et al. This meso-scale model is able to describe arbitrary particle shapes and arbitrary charge/dipole distributions on the surface or in the body of particles. Within the framework of DIFA, a Gibbs-Duhem-type formula is introduced to treat the Laplace pressure in multi-liquid-phase colloidal systems, and it obeys the Young-Laplace equation. The model is thus capable of quantitatively studying important capillarity-related phenomena. Extensive computer simulations are performed to study the fundamental behavior of heterogeneous colloidal systems. The role of the Laplace pressure in determining the mechanical equilibrium of shape-anisotropic particles at fluid interfaces is revealed. In particular, it is found that the Laplace pressure plays a critical role in maintaining the stability of capillary bridges between close particles, which sheds light on a novel route to firming, in situ, compact but fragile colloidal microstructures via capillary bridges. Simulation results also show that the competition between like-charge repulsion, dipole-dipole interaction and Brownian motion dictates the degree of aggregation of heterogeneously charged particles. The assembly and alignment of particles with magnetic dipoles under an external field is studied. Finally, extended studies on the role of dipole-dipole interaction are performed for ferromagnetic and ferroelectric domain phenomena. The results reveal that the internal field generated by the dipoles competes with the external field to determine the dipole-domain evolution in ferroic materials.
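
For reference, the Young-Laplace relation that the Gibbs-Duhem-type treatment of the Laplace pressure is required to obey takes the familiar form

\[
\Delta P = \gamma\left(\frac{1}{R_1} + \frac{1}{R_2}\right),
\]

where γ is the interfacial tension and R₁, R₂ are the principal radii of curvature of the liquid-liquid interface; for a capillary bridge between two close particles, the sign of ΔP set by the bridge curvature is what pulls the particles together.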

Relevance:

100.00%

Publisher:

Abstract:

Sugarcane bagasse was pretreated with dilute sulfuric acid to obtain sugarcane bagasse hemicellulosic hydrolysate (SBHH). Experiments were conducted in laboratory and semi-pilot reactors to optimize the xylose recovery and to reduce the generation of sugar degradation products, such as furfural and 5-hydroxymethylfurfural (HMF). The hydrolysis scale-up procedure was based on the H-Factor, which combines temperature and residence time through the Arrhenius equation, for a fixed sulfuric acid concentration (100 mg(acid)/g(dm)) and an activation energy of 109 kJ/mol. This procedure allowed the results to be estimated mathematically through simulation of the conditions prevailing in reactors with different designs. The SBHH obtained from different reactors but under the same H-Factor of 5.45 +/- 0.15 reached similar xylose yields (approximately 74%) and low concentrations of sugar degradation products, such as furfural (0.082 g/L) and HMF (0.0071 g/L). The most abundant lignin degradation products (phenolic compounds) were p-coumaric acid (0.15 g/L), followed by ferulic acid (0.12 g/L) and gallic acid (0.035 g/L). The highest ion concentrations were those of S (3433.6 mg/L), Fe (554.4 mg/L) and K (103.9 mg/L). The H-Factor could be used without dramatically altering the xylose and HMF/furfural levels. Therefore, we can assume that the H-Factor is directly useful in the scale-up of hemicellulosic hydrolysate production. (C) 2009 Published by Elsevier Ltd.
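
A minimal sketch of the H-Factor idea described above: integrate an Arrhenius relative reaction rate over the temperature-time profile of the cook, so that different reactors' profiles can be compared through a single number. Only the activation energy (109 kJ/mol) is taken from the abstract; the reference temperature, normalization and heating profile are assumptions.

```python
import numpy as np

R = 8.314          # gas constant, J/(mol K)
EA = 109_000.0     # activation energy from the abstract, J/mol
T_REF = 373.15     # assumed reference temperature (100 C) for the relative rate

def relative_rate(T_kelvin):
    """Arrhenius reaction rate relative to the rate at T_REF."""
    return np.exp(EA / R * (1.0 / T_REF - 1.0 / T_kelvin))

def h_factor(times_h, temps_c):
    """Integrate the relative rate over a temperature-time profile (hours, deg C)."""
    t = np.asarray(times_h, dtype=float)
    k = relative_rate(np.asarray(temps_c, dtype=float) + 273.15)
    return float(np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(t)))  # trapezoid rule

# Hypothetical profile: heat to 121 C over 0.25 h, hold 0.5 h, cool over 0.25 h.
times = [0.0, 0.25, 0.75, 1.0]
temps = [25.0, 121.0, 121.0, 60.0]
print(f"H-Factor for this profile: {h_factor(times, temps):.2f}")
```

Two reactors with different geometries but temperature-time profiles giving the same H-Factor would, under this assumption, be expected to yield comparable hydrolysis severity, which is how the abstract uses the quantity for scale-up.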

Relevance:

100.00%

Publisher:

Abstract:

A generalised ladder operator is used to construct the conserved operators for any one-dimensional lattice model derived from the Yang-Baxter equation. As an example, the low order conserved operators for the XYh model are calculated explicitly.
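
For reference, the Yang-Baxter equation from which such lattice models derive is, in its standard form on a threefold tensor product,

\[
R_{12}(u-v)\, R_{13}(u)\, R_{23}(v) = R_{23}(v)\, R_{13}(u)\, R_{12}(u-v),
\]

where R_{ij}(u) acts nontrivially on the i-th and j-th factors and u, v are spectral parameters.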

Relevance:

100.00%

Publisher:

Abstract:

There is concern that Pacific island economies dependent on migrants' remittances will endure foreign exchange shortages and declining living standards as remittance levels drop, due to lower migration rates and the belief that migrants' willingness to remit decreases over time. The empirical validity of the remittance-decay hypothesis has never been tested. Using survey data on Tongan and Western Samoan migrants in Sydney, this paper estimates remittance functions using Tobit regression analysis. It is found that the remittance-decay hypothesis has no empirical validity and that migrants are motivated by factors other than altruistic family support, including asset accumulation and investment back home. (C) 1997 Elsevier Science Ltd.

Relevance:

100.00%

Publisher:

Abstract:

High-frequency beach water table fluctuations due to wave run-up and run-down have been observed in the field [Waddell, 1976]. Such fluctuations affect the infiltration/exfiltration process across the beach face and the interstitial oxygenation process in the beach ecosystem. Accurate representation of high-frequency water table fluctuations is important in the modeling of (1) the interaction between seawater and groundwater and, more importantly, its effects on swash sediment transport, and (2) the biological activities in the beach ecosystem. Capillarity effects provide a mechanism for high-frequency water table fluctuations. Previous modeling approaches adopted the assumption of saturated flow only and failed to predict the propagation of high-frequency fluctuations in the aquifer. In this paper we develop a modified kinematic boundary condition (kbc) for the water table which incorporates capillarity effects. The application of this kbc in a boundary element model enables the simulation of high-frequency water table fluctuations due to wave run-up. Numerical tests were carried out for a rectangular domain with small-amplitude oscillations; the behavior of the water table response was found to be similar to that predicted by an analytical solution based on the one-dimensional Boussinesq equation. The model was also applied to simulate the water table response to wave run-up on a sloping beach. The results showed features of the water table fluctuations similar to those observed in the field. In particular, these fluctuations are standing-wave-like, with the amplitude becoming increasingly damped inland. We conclude that the modified kbc presented here is a reasonable approximation of capillarity effects on beach water table fluctuations. However, further model validation is necessary before the model can confidently be used to simulate high-frequency water table fluctuations due to wave run-up.
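
For reference, the one-dimensional Boussinesq equation against which the small-amplitude tests were compared is commonly written as

\[
n_e \frac{\partial h}{\partial t} = K \frac{\partial}{\partial x}\!\left( h \frac{\partial h}{\partial x} \right),
\]

where h(x, t) is the water table elevation above the impermeable base, K the hydraulic conductivity and n_e the effective (drainable) porosity of the beach sand.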

Relevance:

100.00%

Publisher:

Abstract:

A semi-empirical linear equation has been developed to optimise the amount of maltodextrin additive (DE 6) required to successfully spray dry a sugar-rich product on the basis of its composition. Based on spray drying experiments, drying index values for the individual sugars (sucrose, glucose, fructose) and citric acid were determined, and using these index values an equation for model mixtures of these components was established. This equation has been tested with two sugar-rich natural products, pineapple juice and honey. The relationship was found to be valid for these products.
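
The sketch below illustrates the *kind* of linear composition-based calculation the abstract describes; it assumes, for illustration only, that the mixture index is the mass-weighted average of hypothetical component indices and that maltodextrin is added until the mixture index reaches a target value. The index values, target and rule are placeholders, not the paper's equation.

```python
# Hypothetical drying-index calculation (illustrative assumptions throughout).
COMPONENT_INDEX = {           # placeholder drying indices per component
    "sucrose": 0.95,
    "glucose": 0.51,
    "fructose": 0.27,
    "citric acid": 0.63,
    "maltodextrin DE6": 1.60,
}
TARGET_INDEX = 1.0            # assumed threshold for successful spray drying

def maltodextrin_needed(composition_g):
    """Grams of maltodextrin so the mass-weighted index reaches the target."""
    solids = sum(composition_g.values())
    base_index = sum(COMPONENT_INDEX[c] * m for c, m in composition_g.items()) / solids
    i_md = COMPONENT_INDEX["maltodextrin DE6"]
    if base_index >= TARGET_INDEX:
        return 0.0
    # Solve (base_index*solids + i_md*x) / (solids + x) = TARGET_INDEX for x.
    return solids * (TARGET_INDEX - base_index) / (i_md - TARGET_INDEX)

juice = {"sucrose": 40.0, "glucose": 25.0, "fructose": 30.0, "citric acid": 5.0}
print(f"maltodextrin to add: {maltodextrin_needed(juice):.1f} g per 100 g solids")
```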