919 results for one-boson-exchange models
Abstract:
A business model is the structural frame of an organization and, when structured properly, can bring significant benefits and competitive advantage. The aim of this paper was to observe and describe the development of business models and to identify the factors and elements of a business model that play a key role from the perspective of organizational sustainability. This thesis strives to show what a truly sustainable business model should look like and what its main characteristics are. Additionally, some recommendations that could help a company build a sustainable and balanced business model are presented. The intention was to become acquainted, theoretically and to some extent practically, with such new business models as the open business model and the sustainable business model. Achieving long-term sustainability in a company played a central role and was used as the main criterion when constructing the sustainable business model structure. The main research question of this study is: What should a firm consider in order to develop a profitable and sustainable business model? The study is qualitative in nature and was conducted using content analysis as its main method. The target data are examined from the outlook of their producers: how sustainability is reached in an organization through the business model, and which practices are important and have to be taken into account. The material was gathered mainly from secondary sources, and the theoretical framework was built entirely on secondary data. The secondary data, mostly dissertations, academic writings, cases, academic journals and academic books, were analyzed from the sustainability perspective. As a result, it became evident that the structure of a business model and its implementation alongside a strategy are often what lead companies to success.
However, for the most part, the overall business environment decides and delimits how the optimal business model should be constructed in order to be effective and sustainable. The key factors and elements of a business model that lead an organization to sustainability should be examined through the triple bottom line perspective, whose key dimensions are environmental, social and economic. It was concluded that the dimensions should be weighted equally in order to attain lasting overall sustainability, contradicting the traditional business perspective in which profit is seen as the only main goal of a business.
Abstract:
The advancement of science and technology makes it clear that no single perspective is any longer sufficient to describe the true nature of any phenomenon. That is why interdisciplinary research is gaining more attention over time. An excellent example of this type of research is natural computing, which stands on the borderline between biology and computer science. The contribution of research done in natural computing is twofold: on one hand, it sheds light on how nature works and how it processes information and, on the other hand, it provides some guidelines on how to design bio-inspired technologies. The first direction in this thesis focuses on a nature-inspired process called gene assembly in ciliates. The second one studies reaction systems, a modeling framework with its rationale built upon the biochemical interactions happening within a cell. The process of gene assembly in ciliates has attracted a lot of attention as a research topic in the past 15 years. Two main modelling frameworks were initially proposed at the end of the 1990s to capture ciliates' gene assembly process, namely the intermolecular model and the intramolecular model. They were followed by other model proposals such as template-based assembly and DNA rearrangement pathway recombination models. In this thesis we are interested in a variation of the intramolecular model called the simple gene assembly model, which focuses on the simplest possible folds in the assembly process. We propose a new framework called directed overlap-inclusion (DOI) graphs to overcome the limitations that previously introduced models faced in capturing all the combinatorial details of the simple gene assembly process. We investigate a number of combinatorial properties of these graphs, including a necessary property in terms of forbidden induced subgraphs.
We also introduce DOI graph-based rewriting rules that capture all the operations of the simple gene assembly model and prove that they are equivalent to the string-based formalization of the model. Reaction systems (RS) are another nature-inspired modeling framework studied in this thesis. Their rationale is based upon two main regulation mechanisms, facilitation and inhibition, which control the interactions between biochemical reactions. Reaction systems are a complementary modeling framework to traditional quantitative frameworks, focusing on explicit cause-effect relationships between reactions. The explicit formulation of the facilitation and inhibition mechanisms behind reactions, as well as the focus on interactions between reactions (rather than the dynamics of concentrations), makes their applicability potentially wide and useful beyond biological case studies. In this thesis, we construct a reaction system model corresponding to the heat shock response mechanism based on a novel concept of dominance graph that captures the competition on resources in the ODE model. We also introduce for RS various concepts inspired by biology, e.g., mass conservation, steady state, periodicity, etc., to enable model checking of reaction-system-based models. We prove that the complexity of the decision problems related to these properties varies from P to NP- and coNP-complete to PSPACE-complete. We further focus on the mass conservation relation in an RS, introduce the conservation dependency graph to capture the relation between the species, and propose an algorithm to list the conserved sets of a given reaction system.
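The basic reaction-systems semantics summarized above can be sketched in a few lines: a reaction is enabled on a state when all of its reactants are present and none of its inhibitors are, and the successor state is the union of the product sets of the enabled reactions. The entities and reactions below are invented for illustration (the formal model also requires nonempty inhibitor sets, which this sketch relaxes).

```python
def enabled(reaction, state):
    """A reaction (R, I, P) is enabled on a state W iff all its
    reactants R are present in W and none of its inhibitors I are."""
    reactants, inhibitors, _ = reaction
    return reactants <= state and not (inhibitors & state)

def result(reactions, state):
    """The successor state is the union of the product sets of all
    enabled reactions; entities not produced vanish (no permanency)."""
    out = set()
    for r in reactions:
        if enabled(r, state):
            out |= r[2]
    return out

# Two toy reactions: one inhibited by "c", one unconditional on "b".
rs = [
    ({"a"}, {"c"}, {"b"}),   # a -> b, unless c is present
    ({"b"}, set(), {"a"}),   # b -> a (empty inhibitor set: a simplification)
]
print(result(rs, {"a"}))
print(result(rs, {"a", "c"}))
```

Iterating `result` from an initial state produces the interactive processes whose periodicity and steady-state properties the thesis analyzes.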
Abstract:
This thesis introduces heat demand forecasting models generated using data mining algorithms. The forecast spans one full day, and this forecast can be used in regulating the heat consumption of buildings. For training the data mining models, two years of heat consumption data from a case building and weather measurement data from the Finnish Meteorological Institute are used. The thesis utilizes Microsoft SQL Server Analysis Services data mining tools to generate the data mining models and the CRISP-DM process framework to implement the research. Results show that the built models can predict heat demand at best with mean absolute percentage errors (MAPE) of 3.8% for the 24-h profile and 5.9% for the full day. A deployment model for integrating the generated data mining models into an existing building energy management system is also discussed.
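The error measure quoted above, the mean absolute percentage error, is straightforward to compute; the demand figures below are invented, not the case building's data.

```python
def mape(actual, forecast):
    """MAPE = mean(|actual - forecast| / |actual|) * 100, in percent."""
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actual, forecast)) / len(actual)

demand   = [50.0, 60.0, 55.0, 40.0]   # hourly heat demand, kW (hypothetical)
forecast = [52.0, 57.0, 55.0, 42.0]   # model output for the same hours
print(round(mape(demand, forecast), 2))  # → 3.5
```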
Abstract:
Return and volatility dynamics in financial markets across the world have become important for asset pricing, portfolio allocation and risk management. Understanding volatility, which comes about as a result of the actions of market participants, can help investors adapt to different situations and perform when it really matters. With the recent development and liberalization of financial markets in emerging and frontier economies, how the equity and foreign exchange markets interact, and the extent to which return and volatility spillovers spread across countries, is of importance to investors and policy makers at large. Financial markets in Africa have received attention, leading investors to diversify into them in times of crisis and contagion effects in developed countries. Regardless of the benefits these markets may offer, investors must be wary of issues such as thin trading and the volatility that exists in the equity and currency markets and its related fluctuations. The study employs a VAR-GARCH BEKK model to study the return and volatility dynamics between the stock and foreign exchange sectors and among the equity markets of Egypt, Kenya, Nigeria, South Africa and Tunisia. The main findings suggest a higher dependence on own returns in the stock markets and a one-way return spillover from the currencies to the equity markets, except for South Africa, which has a weaker interrelation between the two markets. There is relatively limited integration among the equity markets. Return and volatility spillover is mostly uni-directional, except for a bi-directional relationship between the equity markets of Egypt and Tunisia. The study's implications still indicate a benefit for portfolio managers diversifying in these African equity markets, since they are largely independent of each other and may not be highly affected by the influx of negative news from elsewhere.
However, there is a need to be wary of return and volatility spillovers between the equity and currency markets, and hence to devise better hedging strategies to curb them.
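The spillover idea behind the VAR part of such a model can be illustrated with a toy numpy sketch: fit a VAR(1) by least squares on simulated equity and currency returns and read off the cross-market coefficient. This is a hedged stand-in for the full VAR-GARCH BEKK estimation used in the study; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
fx = rng.normal(size=T)            # simulated currency returns
eq = np.empty(T)                   # simulated equity returns
eq[0] = 0.0
for t in range(1, T):
    # build in a one-way spillover: lagged FX returns drive equity returns
    eq[t] = 0.4 * fx[t - 1] + 0.1 * eq[t - 1] + rng.normal(scale=0.5)

# Stack the VAR(1) system r_t = c + A r_{t-1} + e_t and solve by OLS.
Y = np.column_stack([eq[1:], fx[1:]])
X = np.column_stack([np.ones(T - 1), eq[:-1], fx[:-1]])
B, *_ = np.linalg.lstsq(X, Y, rcond=None)

# B[2, 0] estimates the spillover from lagged FX returns to equity returns;
# it should recover a value near the simulated 0.4.
print(round(B[2, 0], 2))
```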
Abstract:
This Master’s Thesis analyses the effectiveness of different hedging models in the BRICS (Brazil, Russia, India, China, and South Africa) countries. Hedging performance is examined by comparing two dynamic hedging models to a conventional OLS regression-based model. The dynamic hedging models employed are Constant Conditional Correlation (CCC) GARCH(1,1) and Dynamic Conditional Correlation (DCC) GARCH(1,1) with Student’s t-distribution. In order to capture both the Great Moderation and the latest financial crisis, the sample period extends from 2003 to 2014. To determine whether the dynamic models outperform the conventional one, the reduction of portfolio variance for in-sample data with contemporaneous hedge ratios is first determined, and then the holding period of the portfolios is extended to one and two days. In addition, the accuracy of hedge ratio forecasts is examined on the basis of out-of-sample variance reduction. The results are mixed and suggest that dynamic hedging models may not provide enough benefits to justify the more demanding estimation and daily portfolio adjustment. In this sense, the results are consistent with the existing literature.
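The conventional benchmark referenced above can be sketched as the static OLS hedge ratio h* = Cov(spot, futures) / Var(futures), with performance measured as the in-sample variance reduction of the hedged portfolio spot − h*·futures. The return series below are invented.

```python
spot = [0.5, -0.2, 0.3, -0.4, 0.1, 0.6, -0.3, 0.2]   # spot returns (toy data)
fut  = [0.4, -0.1, 0.2, -0.5, 0.2, 0.5, -0.2, 0.1]   # futures returns

n = len(spot)
ms, mf = sum(spot) / n, sum(fut) / n
cov = sum((s - ms) * (f - mf) for s, f in zip(spot, fut)) / (n - 1)
var_f = sum((f - mf) ** 2 for f in fut) / (n - 1)

h = cov / var_f                                  # OLS hedge ratio
hedged = [s - h * f for s, f in zip(spot, fut)]  # hedged portfolio returns

mh = sum(hedged) / n
var_s = sum((s - ms) ** 2 for s in spot) / (n - 1)
var_h = sum((x - mh) ** 2 for x in hedged) / (n - 1)
reduction = 1 - var_h / var_s                    # fraction of variance removed
print(round(h, 3), round(reduction, 3))
```

The dynamic CCC/DCC models replace the constant h with a time-varying ratio built from conditional variances and covariances; the evaluation metric stays the same.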
Abstract:
Fluctuating commodity prices, foreign exchange rates and interest rates cause changes in cash flows, market value and company profits. Most commodities are quoted in US dollars. Companies with non-dollar accounting face a double risk in the form of commodity price risk and foreign exchange risk. The objective of this Master’s thesis is to find out how companies exposed to commodity prices should manage foreign exchange exposure. The theoretical literature covers foreign exchange risk, commodity risk and foreign exchange exposure management. The empirical research is done by constructive modelling of a case company in the oil industry. The exposure is modelled with foreign exchange net cash flow and net working capital. First, the factors affecting foreign exchange exposure in the case company are analyzed; then a model of foreign exchange exposure is created. Finally, the models are compared and the most suitable method is defined. According to the literature, foreign exchange exposure is the foreign exchange net cash flow. However, the results of the study show that foreign exchange risk can also be managed with net working capital. When purchases, sales and storage are under foreign exchange risk, the best way to manage foreign exchange exposure is a combined net cash flow and net working capital method. The foreign exchange risk policy of the company defines the appropriate way to manage foreign exchange risk.
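A minimal sketch of the combined measure the study arrives at, assuming exposure per currency is simply net cash flow plus net working capital (receivables + inventory − payables); all figures and positions are hypothetical, not the case company's data.

```python
# Hypothetical per-currency positions (e.g. in thousands).
cash_flows = {"USD": [120.0, -80.0], "EUR": [40.0, -55.0]}   # in/out flows
working_capital = {
    "USD": {"receivables": 30.0, "inventory": 25.0, "payables": 20.0},
    "EUR": {"receivables": 10.0, "inventory": 5.0,  "payables": 12.0},
}

exposure = {}
for ccy in cash_flows:
    ncf = sum(cash_flows[ccy])                       # net cash flow
    wc = working_capital[ccy]
    nwc = wc["receivables"] + wc["inventory"] - wc["payables"]
    exposure[ccy] = ncf + nwc                        # combined exposure
print(exposure)   # a positive value is a long position in that currency
```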
Abstract:
Various studies in the field of econophysics have shown that fluid flows have analogous phenomena in financial market behavior, the typical parallel being drawn between energy in fluids and information in markets. However, the geometry of the manifold on which markets act out their dynamics (corporate space) is not yet known. In this thesis, utilizing a seven-year time series of prices of the stocks used to compute the S&P 500 index on the New York Stock Exchange, we have created local charts to the corporate space with the goal of finding standing waves and other soliton-like patterns in the behavior of stock price deviations from the S&P 500 index. By first calculating the correlation matrix of normalized stock price deviations from the S&P 500 index, we have performed a local singular value decomposition over a set of four different time windows as guides to the nature of patterns that may emerge. It turns out that in almost all cases, each singular vector is essentially determined by a relatively small set of companies with big positive or negative weights on that singular vector. Over particular time windows, these weights are sometimes strongly correlated with at least one industrial sector, and certain sectors are more prone to fast dynamics whereas others have longer standing waves.
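The windowed decomposition described above can be sketched as follows: build the correlation matrix of (simulated) price deviations, take its SVD, and inspect which series carry the largest weights on the leading singular vector. The data are synthetic, not S&P 500 prices; one shared factor stands in for a sector.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stocks, T = 6, 250
common = rng.normal(size=T)                       # one shared "sector" factor
dev = 0.8 * common + 0.6 * rng.normal(size=(n_stocks, T))
dev[4:] = rng.normal(size=(2, T))                 # two idiosyncratic stocks

C = np.corrcoef(dev)                              # correlation matrix (6x6)
U, s, Vt = np.linalg.svd(C)                       # singular values descending

# The leading singular vector should be dominated by the correlated block.
top = np.argsort(-np.abs(U[:, 0]))[:3]            # 3 largest |weights|
print(sorted(top.tolist()))
```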
Abstract:
This paper presents a methodology for calculating the industrial equilibrium exchange rate, which is defined as the one enabling exporters of state-of-the-art manufactured goods to be competitive abroad. The first section highlights the causes and problems of overvalued exchange rates, particularly the Dutch disease issue, which is neutralized when the exchange rate strikes the industrial equilibrium level. This level is defined by the ratio between the unit labor cost in the country under consideration and in competing countries. Finally, the evolution of this exchange rate in the Brazilian economy is estimated.
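The industrial equilibrium level defined above, the ratio of domestic to foreign unit labor cost (ULC = wage bill per unit of output), reduces to a one-line computation; the wage and productivity figures below are hypothetical, not the paper's estimates for Brazil.

```python
def unit_labor_cost(wage, productivity):
    """ULC = wage per worker / output per worker."""
    return wage / productivity

# Hypothetical figures: domestic values in local currency, foreign in
# the competitor's currency, per worker per period.
ulc_home = unit_labor_cost(wage=2400.0, productivity=800.0)
ulc_abroad = unit_labor_cost(wage=3000.0, productivity=2500.0)

# Local currency units per foreign unit at which state-of-the-art
# manufacturers break even against foreign competitors.
industrial_equilibrium = ulc_home / ulc_abroad
print(round(industrial_equilibrium, 3))
```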
Abstract:
The debate on the link between trade rules and rules on exchange rates is attracting the attention of experts on international trade law and economics. The main purpose of this paper is to analyze the impacts of exchange rate misalignments on tariffs as applied by the WTO (World Trade Organization). It is divided into five sections: the first explains the methodology used to determine exchange rate misalignments and presents its results for Brazil, the U.S. and China; the second summarizes the methodology applied to calculate the impacts of exchange rate misalignments on the level of tariff protection through an exercise of "misalignment tariffication"; the third examines the effects of exchange rate variations on tariffs and their consequences for the multilateral trading system; the fourth creates a methodology to estimate exchange rates against a currency of the world and proposes a way to deal with persistent and significant misalignments related to trade rules. The conclusions are presented in the last section.
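The "misalignment tariffication" exercise can be illustrated with a stylized multiplicative rule in which a currency misalignment scales the applied tariff; this functional form is an assumption for illustration only, not the paper's exact methodology.

```python
def effective_tariff(bound_tariff, misalignment):
    """Stylized tariffication: both arguments are decimal fractions
    (0.10 = 10%). A positive misalignment (overvaluation of the
    importer's currency) acts like a tariff surcharge; a negative one
    (undervaluation) erodes the tariff's protection."""
    return (1 + bound_tariff) * (1 + misalignment) - 1

# A 10% tariff combined with a 20% overvaluation:
print(round(effective_tariff(0.10, 0.20), 3))

# The same tariff combined with a 15% undervaluation:
print(round(effective_tariff(0.10, -0.15), 3))
```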
Abstract:
Increased rotational speed brings many advantages to an electric motor. One of the benefits is that when the desired power is generated at increased rotational speed, the torque demanded from the rotor decreases linearly, and as a consequence, a motor of smaller size can be used. Using a rotor with high rotational speed in a system with mechanical bearings can, however, create undesirable vibrations, and therefore active magnetic bearings (AMBs) are often considered a good option for the main bearings, as the rotor then has no mechanical contact with other parts of the system but levitates on the magnetic forces. On the other hand, such systems can experience overloading or a sudden shutdown of the electrical system, whereupon the magnetic field becomes extinct, and as a result of rotor delevitation, mechanical contact occurs. To manage such nonstandard operations, AMB-systems require mechanical touchdown bearings with an oversized bore diameter. The need for touchdown bearings seems to be one of the barriers preventing greater adoption of AMB technology, because in the event of an uncontrolled touchdown, failure may occur, for example, in the bearing’s cage or balls, or in the rotor. This dissertation consists of two parts: First, touchdown bearing misalignment in the contact event is studied. It is found that misalignment increases the likelihood of a potentially damaging whirling motion of the rotor. A model for analysis of the stresses occurring in the rotor is proposed. In the studies of misalignment and stresses, a flexible rotor using a finite element approach is applied. Simplified models of cageless and caged bearings are used for the description of touchdown bearings. The results indicate that an increase in misalignment can have a direct influence on the bending and shear stresses occurring in the rotor during the contact event. 
Thus, it was concluded that analysis of the stresses arising in the contact event is essential to guarantee appropriate system dimensioning for possible contact events with misaligned touchdown bearings. One of the conclusions drawn from the first part of the study is that knowledge of the forces affecting the balls and cage of the touchdown bearings can enable a more reliable estimation of the service life of the bearing. Therefore, the second part of the dissertation investigates the forces occurring in the cage and balls of touchdown bearings and introduces two detailed models of touchdown bearings in which all bearing parts are modelled as independent bodies. Two multibody-based two-dimensional models of touchdown bearings are introduced for dynamic analysis of the contact event. All parts of the bearings are modelled with geometrical surfaces, and the bodies interact with each other through elastic contact forces. To assist in identification of the forces affecting the balls and cage in the contact event, the first model describes a touchdown bearing without a cage, and the second model describes a touchdown bearing with a cage. The introduced models are compared with the simplified models used in the first part of the dissertation through a parametric study. Damage to the rotor, cage and balls is among the main reasons for failures of AMB-systems. The stresses in the rotor in the contact event are defined in this work. Furthermore, the forces affecting the key bodies of the bearings, the cage and balls, can be studied using the models of touchdown bearings introduced in this dissertation. Knowledge obtained from the introduced models is valuable since it can enable an optimum structure for a rotor and touchdown bearings to be designed.
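Elastic contact forces in such multibody bearing models are commonly given a Hertzian form F = k·δⁿ, applied only while the bodies overlap (penetration depth δ > 0); the stiffness k and exponent n below are illustrative textbook-style values, not parameters from the dissertation.

```python
def contact_force(penetration, k=1.0e9, n=1.5):
    """Hertzian-type normal contact force for a point contact.

    penetration: overlap depth in metres (negative means no contact)
    k: contact stiffness in N/m^n (illustrative value)
    n: 3/2 for the classical Hertz point contact
    """
    return k * penetration ** n if penetration > 0 else 0.0

print(contact_force(-1e-6))          # bodies separated: no force
print(round(contact_force(1e-5)))    # 10 um penetration
```

In the full models, forces of this kind act between rotor and inner race, between the balls and both races, and between the balls and cage pockets at every simulation step.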
Abstract:
Although alcohol problems and alcohol consumption are related, consumption does not fully account for differences in vulnerability to alcohol problems. Therefore, other factors should account for these differences. Based on previous research, it was hypothesized that risky drinking behaviours, illicit and prescription drug use, affect and sex differences would account for differences in vulnerability to alcohol problems while statistically controlling for overall alcohol consumption. Four models were developed to test the predictive ability of these factors, three of which tested the predictor sets separately and a fourth which tested them in a combined model. In addition, two distinct criterion variables were regressed on the predictors. One was a measure of the frequency with which participants experienced negative consequences that they attributed to their drinking, and the other was a measure of the extent to which participants perceived themselves to be problem drinkers. Each of the models was tested on four samples from different populations: first-year university students, university students in their graduating year, a clinical sample of people in treatment for addiction, and a community sample of young adults randomly selected from the general population. Overall, support was found for each of the models and each of the predictors in accounting for differences in vulnerability to alcohol problems. In particular, the frequency with which people become intoxicated, frequency of illicit drug use and high levels of negative affect were strong and consistent predictors of vulnerability to alcohol problems across samples and criterion variables. With the exception of the clinical sample, the combined models predicted vulnerability to negative consequences better than vulnerability to problem drinker status. Among the clinical and community samples the combined model predicted problem drinker status better than in the student samples.
Abstract:
The purpose of this thesis is to examine various policy implementation models, and to determine what use they are to a government. In order to ensure that governmental proposals are created and exercised in an effective manner, there must be some guidelines in place which will assist in resolving difficult situations. All governments face the challenge of responding to public demand by delivering the type of policy responses that will attempt to answer those demands. The problem for those people in positions of policy-making responsibility is to balance the competitive forces that would influence policy. This thesis examines provincial government policy in two unique cases. The first is the revolutionary recommendations brought forth in the Hall-Dennis Report. The second is the question of extending full funding to the end of high school in the separate school system. These two cases illustrate how divergent and problematic the policy-making duties of any government may be. In order to respond to these political challenges, decision-makers must have a clear understanding of what they are attempting to do. They must also have an assortment of policy-making models that will ensure a policy response effectively deals with the issue under examination. A government must make every effort to ensure that all policy-making methods are considered, and that the data gathered is inserted into the most appropriate model. Currently, there is considerable debate over the benefits of the progressive individualistic education approach as proposed by the Hall-Dennis Committee. This debate is usually intensified during periods of economic uncertainty. Periodically, the province will also experience brief yet equally intense debate on the question of separate school funding. At one level, this debate centres around the efficiency of maintaining two parallel education systems, but the debate frequently has undertones of the religious animosity common in Ontario's history.
As a result of the two policy cases under study we may ask ourselves these questions: a) did the policies in question improve the general quality of life in the province? and b) did the policies unite the province? In the cases of educational instruction and finance the debate is ongoing and unsettled. Currently, there is a widespread belief that provincial students at the elementary and secondary levels of education are not being educated adequately to meet the challenges of the twenty-first century. The perceived culprit is individual education, which sees students progressing through the system at their own pace and not meeting adequate education standards. The question of the finance of Catholic education occasionally rears its head in a painful fashion within the province. Some public school supporters tend to take extension as a personal religious defeat, rather than an opportunity to demonstrate that educational diversity can be accommodated within Canada's most populated province. This thesis is an attempt to analyze how successful provincial policy-implementation models were in answering public demand. A majority of the public did not demand additional separate school funding, yet it was put into place. The same majority did insist on an examination of educational methods, and the government did put changes in place. It will also demonstrate how policy, if wisely created, may spread additional benefits to the public at large. Catholic students currently enjoy a much improved financial contribution from the province, yet these additional funds were taken from somewhere. The public system had its funds reduced with what would appear to be minimal impact. This impact indicates that government policy is still sensitive to the strongly held convictions of those people in opposition to a given policy.
Hydraulic and fluvial geomorphological models for a bedrock channel reach of the Twenty Mile Creek
Abstract:
Bedrock channels have been considered challenging geomorphic settings for the application of numerical models. Bedrock fluvial systems exhibit boundaries that are typically less mobile than those of alluvial systems, yet they are still dynamic systems with a high degree of spatial and temporal variability. To understand the variability of fluvial systems, numerical models have been developed to quantify flow magnitudes and patterns as the driving force for geomorphic change. Two types of numerical model were assessed for their efficacy in examining the bedrock channel system consisting of a high-gradient portion of the Twenty Mile Creek in the Niagara Region of Ontario, Canada. A one-dimensional (1-D) flow model that utilizes energy equations, HEC-RAS, was used to determine velocity distributions through the study reach for the mean annual flood (MAF), the 100-year return flood and the 1,000-year return flood. A two-dimensional (2-D) flow model that makes use of the Navier-Stokes equations, RMA2, was created with the same objectives. The 2-D modeling effort was not successful due to the spatial complexity of the system (high slope and high variance). The successful 1-D model runs were further extended using very high resolution geospatial interpolations inherent to the HEC-RAS extension, HEC-GeoRAS. The modeled velocity data then formed the basis for a geomorphological analysis that focused upon large particles (boulders) and the forces needed to mobilize them. Several existing boulders were examined by collecting detailed measurements to derive three-dimensional physical models for the application of fluid and solid mechanics to predict movement in the study reach. An imaginary unit cuboid (1 metre by 1 metre by 1 metre) boulder was also envisioned to determine the general propensity for the movement of such a boulder through the bedrock system.
The efforts and findings of this study provide a standardized means for the assessment of large particle movement in a bedrock fluvial system. Further efforts may expand upon this standardization by modeling differing boulder configurations (platy boulders, etc.) at a high level of resolution.
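The propensity-for-movement question posed for the unit cuboid boulder can be sketched as a simple force balance: the block moves when the drag exerted by the flow exceeds the frictional resistance from its submerged weight. The drag coefficient and friction coefficient below are textbook-style assumptions, not the study's calibrated values.

```python
RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81   # water and rock density (kg/m^3), gravity

def drag_force(velocity, area, cd=1.5):
    """Drag on a blunt body: F = 0.5 * Cd * rho * A * v^2 (Cd assumed)."""
    return 0.5 * cd * RHO_W * area * velocity ** 2

def resisting_force(volume, mu=0.7):
    """Sliding resistance: friction coefficient times submerged weight."""
    submerged_weight = (RHO_S - RHO_W) * G * volume
    return mu * submerged_weight

def mobilized(velocity, side=1.0):
    """Unit cube: frontal area = side^2, volume = side^3."""
    return drag_force(velocity, side ** 2) > resisting_force(side ** 3)

for v in (2.0, 4.0, 6.0):   # candidate near-bed velocities in m/s
    print(v, mobilized(v))
```

Feeding the HEC-RAS velocity distributions for the MAF and the 100- and 1,000-year floods into such a balance is, in spirit, how the analysis links flow magnitude to boulder mobility.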
Abstract:
Exchange reactions between molecular complexes and excess acid or base are well known and have been extensively surveyed in the literature (1). Since the exchange mechanism will, in some way, involve the breaking of the labile donor-acceptor bond, it follows that a discussion of the factors relating to bonding in molecular complexes will be relevant.
In general, a strong Lewis base and a strong Lewis acid form a stable adduct provided that certain stereochemical requirements are met.
A strong Lewis base has the following characteristics (1), (2):
(i) high electron density at the donor site;
(ii) a non-bonded electron pair which has a low ionization potential;
(iii) electron-donating substituents at the donor atom site;
(iv) facile approach of the donor site of the Lewis base to the acceptor site, as dictated by the steric hindrance of the substituents.
Examples of typical Lewis bases are ethers, nitriles, ketones, alcohols, amines and phosphines.
For a strong Lewis acid, the following properties are important:
(i) low electron density at the acceptor site;
(ii) electron-withdrawing substituents;
(iii) substituents which do not interfere with the close approach of the Lewis base;
(iv) availability of a vacant orbital capable of accepting the lone electron pair of the donor atom.
Examples of Lewis acids are the group III and IV halides such as MX3 (M = B, Al, Ga, In) and MX4 (M = Si, Ge, Sn, Pb).
The relative bond strengths of molecular complexes have been investigated by:
(i) dipole moment measurements (3);
(ii) shifts of the carbonyl peaks in the I.R. (4), (5), (6);
(iii) NMR chemical shift data (4), (7), (8), (9);
(iv) U.V. and visible spectrophotometric shifts (10), (11);
(v) equilibrium constant data (12), (13);
(vi) heats of dissociation and heats of reaction (14), (16), (17), (18), (19).
Many experiments have been carried out on boron trihalides in order to determine their relative acid strengths. Using pyridine, nitrobenzene, acetonitrile and trimethylamine as reference Lewis bases, it was found that the acid strength varied in the order BBr3 > BCl3 > BF3. For the acetonitrile-boron trihalide and trimethylamine-boron trihalide complexes in nitrobenzene, an NMR study (7) showed that the shift to lower field was greatest for the BBr3 adduct and smallest for the BF3, which is in agreement with the acid strengths. If the electronegativities of the substituents were the only important effect, and since F > Cl > Br, one would expect the electron density at the boron nucleus to vary as BF3
Abstract:
Two groups of rainbow trout were acclimated to 2°, 10°, and 18°C. Plasma sodium, potassium, and chloride levels were determined for both. One group was employed in the estimation of branchial and renal (Na+-K+)-stimulated, (HCO3−)-stimulated, and (Mg++)-dependent ATPase activities, while the other was used in the measurement of carbonic anhydrase activity in the blood, gill and kidney. Assays were conducted using two incubation temperature schemes. One provided for incubation of all preparations at a common temperature of 25°C, a value equivalent to the upper incipient lethal level for this species. In the other procedure the preparations were incubated at the appropriate acclimation temperature of the sampled fish. Trout were able to maintain plasma sodium and chloride levels essentially constant over the temperature range employed. The different incubation temperature protocols produced different levels of activity and, in some cases, contrary trends with respect to acclimation temperature. This information was discussed in relation to previous work on gill and kidney. The standing-gradient flow hypothesis was discussed with reference to the structure of the chloride cell, known thermally induced changes in ion uptake, and the enzyme activities obtained in this study. Modifications of the model of gill ion uptake suggested by Maetz (1971) were proposed, resulting in high and low temperature models. In short, ion transport at the gill at low temperatures appears to involve sodium and chloride uptake by heteroionic exchange mechanisms working in association with carbonic anhydrase. Gill (Na+-K+)-ATPase and erythrocyte carbonic anhydrase seem to provide the supplemental uptake required at higher temperatures. It appears that the kidney is prominent in ion transport at low temperatures while the gill is more important at high temperatures.
Linear regression analyses involving weight, plasma ion levels, and enzyme activities indicated several trends, the most significant being the interrelationship observed between plasma sodium and chloride. This, and other data obtained in the study, was considered in light of the theory that a link exists between plasma sodium and chloride regulatory mechanisms.