980 results for real option theory
Abstract:
This article designs what it calls a Credit-Risk Balance Sheet (the risk being that of default by customers), a tool which, in principle, can contribute to revealing, controlling and managing the bad-debt risk arising from a company's commercial credit, whose amount can represent a significant proportion of both its current and total assets. To construct it, we start from the duality observed in any credit transaction of this nature, whose basic identity can be summed up as Credit = Risk. "Credit" is granted by a company to its customer and can be ranked by quality (we suggest the credit scoring system), while "risk" can either be assumed (interiorised) by the company itself or transferred to third parties (exteriorised). What allows us to speak with confidence of a real Credit-Risk Balance Sheet, and gives the approach its methodological robustness, is that the dual vision of the credit transaction is not, as we demonstrate, merely a classificatory duality (a double risk-credit classification of reality) but rather a true causal relationship, that is, a risk-credit causal duality. Once this Credit-Risk Balance Sheet (which bears a certain structural similarity to the classic net asset balance sheet) has been built and its methodological coherence demonstrated, its static and dynamic properties are studied. Analysis of the temporal evolution of the Credit-Risk Balance Sheet and of its applications will be the object of subsequent works.
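A minimal sketch of the Credit = Risk identity described above, under assumed data structures (the quality scores and the interiorised/exteriorised split are illustrative, not the article's notation):

```python
# Illustrative sketch (not the article's implementation): each credit granted
# is ranked by a hypothetical quality score, and its risk is split into an
# interiorised part (assumed by the company) and an exteriorised part
# (transferred to third parties).
from dataclasses import dataclass

@dataclass
class CreditEntry:
    customer: str
    amount: float          # commercial credit granted
    quality_score: int     # e.g. 1 (best) .. 5 (worst), from credit scoring
    exteriorised: float    # portion of the risk transferred to third parties

    @property
    def interiorised(self) -> float:
        # Risk the company itself assumes on this credit
        return self.amount - self.exteriorised

entries = [
    CreditEntry("A", 100_000.0, 1, 60_000.0),
    CreditEntry("B", 40_000.0, 3, 0.0),
]

credit_side = sum(e.amount for e in entries)
risk_side = sum(e.interiorised + e.exteriorised for e in entries)
assert credit_side == risk_side  # the balance-sheet identity Credit = Risk
```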
Abstract:
It is very well known that the first successful valuation of a stock option was done by solving a deterministic partial differential equation (PDE) of the parabolic type with some complementary conditions specific to the option. In this approach, the randomness in the option value process is eliminated through a no-arbitrage argument. An alternative approach is to construct a replicating portfolio for the option. From this viewpoint the payoff function for the option is a random process which, under a new probability measure, turns out to be of a special type, a martingale. Accordingly, the value of the replicating portfolio (equivalently, of the option) is calculated as an expectation, with respect to this new measure, of the discounted value of the payoff function. Since the expectation is, by definition, an integral, its calculation can be made simpler by resorting to powerful methods already available in the theory of analytic functions. In this paper we use precisely two of those techniques to find the well-known value of a European call option.
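The paper evaluates this expectation with analytic-function techniques; as an illustration of the expectation view itself, the sketch below compares the closed-form Black-Scholes value of a European call with a brute-force Monte Carlo estimate of the discounted risk-neutral payoff expectation (parameter values are illustrative):

```python
# The call value as a discounted expectation under the risk-neutral measure:
# V = exp(-rT) * E[max(S_T - K, 0)], with S_T lognormal under that measure.
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

# Closed-form Black-Scholes value for comparison
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
N = NormalDist().cdf
bs_value = S0 * N(d1) - K * exp(-r * T) * N(d2)

# Monte Carlo estimate of the same expectation
rng = np.random.default_rng(0)
Z = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * Z)
mc_value = exp(-r * T) * np.maximum(ST - K, 0.0).mean()

print(bs_value, mc_value)  # the two estimates agree closely
```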
Abstract:
The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their application to spatial data are given, and real case studies based on environmental and pollution data are carried out. The book provides a CD-ROM with the Machine Learning Office software, including sample data sets, that allows both students and researchers to put the concepts into practice rapidly.
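For illustration only (the book ships its own Machine Learning Office software; this sketch is not from it): a support vector machine classifying a synthetic spatially distributed variable from its coordinates, in the spirit of the applications the book describes.

```python
# Classify a synthetic spatial variable (e.g. above/below a pollution
# threshold) from (x, y) coordinates with an RBF support vector machine.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, size=(500, 2))   # spatial coordinates (x, y)
y = (np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500) > 0).astype(int)

model = SVC(kernel="rbf", gamma=0.5).fit(X[:400], y[:400])
print("hold-out accuracy:", model.score(X[400:], y[400:]))
```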
Abstract:
In his version of the theory of multicomponent systems, Friedman used the analogy which exists between the virial expansion for the osmotic pressure obtained from the McMillan-Mayer (MM) theory of solutions in the grand canonical ensemble and the virial expansion for the pressure of a real gas. For the calculation of the thermodynamic properties of the solution, Friedman proposed a definition for the "excess free energy" that is reminiscent of the old idea of "osmotic work". However, the precise meaning to be attached to his free energy is, among other reasons, not well defined because in osmotic equilibrium the solution is not a closed system and, for a given process, the total amount of solvent in the solution varies. In this paper, an analysis based on thermodynamics is presented in order to obtain an exact and precise definition of Friedman's excess free energy and its use in comparison with experimental data.
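The analogy in question can be written out explicitly. In standard McMillan-Mayer form (textbook notation, not necessarily Friedman's), the expansion of the osmotic pressure in solute density mirrors the virial expansion for a real gas:

```latex
% Osmotic pressure of the solution (McMillan-Mayer) vs. pressure of a real gas
\begin{align}
  \frac{\Pi}{k_B T} &= \rho + B_2^{*}(T,\mu_0)\,\rho^{2} + B_3^{*}(T,\mu_0)\,\rho^{3} + \cdots
      && \text{(solute density } \rho\text{, solvent chemical potential } \mu_0\text{)}\\
  \frac{P}{k_B T}   &= \rho + B_2(T)\,\rho^{2} + B_3(T)\,\rho^{3} + \cdots
      && \text{(gas number density } \rho\text{)}
\end{align}
```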
Abstract:
This thesis was done as a complementary part of the active magnetic bearing (AMB) control software development project at Lappeenranta University of Technology. The main focus of the thesis is to examine the idea of a real-time operating system (RTOS) framework that operates in a dedicated digital signal processor (DSP) environment. General-purpose real-time operating systems do not necessarily provide a sufficient platform for periodic control algorithms. In addition, application program interfaces found in real-time operating systems are commonly non-existent or provided only as chip-support libraries, thus hindering platform-independent software development. Hence, two divergent real-time operating systems and additional periodic extension software, together with the framework design, are examined to find solutions to the research problems. The research proceeds by tracing the selected real-time operating system, formulating requirements for the system, and designing the real-time operating system framework (OSFW). The OSFW is formed by programming the framework and combining the outcome with the RTOS and the periodic extension. The system is tested and the functionality of the software is evaluated in the theoretical context of Rate Monotonic Scheduling (RMS) theory. The performance of the OSFW and the substance of the approach are discussed in relation to the research theme. The findings of the thesis demonstrate that the resulting real-time operating system framework is a viable groundwork solution for periodic control applications.
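A sketch of the RMS schedulability reasoning invoked above (the task set is hypothetical, not from the thesis):

```python
# Liu & Layland sufficient (not necessary) admission test for Rate Monotonic
# Scheduling: a periodic task set is schedulable if its total utilisation
# satisfies U <= n * (2**(1/n) - 1).
def rms_schedulable(tasks: list[tuple[float, float]]) -> bool:
    """tasks: (worst-case execution time C, period T) pairs, same time unit."""
    n = len(tasks)
    utilisation = sum(c / t for c, t in tasks)
    return utilisation <= n * (2 ** (1 / n) - 1)

# e.g. a fast control loop plus two slower housekeeping tasks (times in ms)
print(rms_schedulable([(0.02, 0.1), (1.0, 5.0), (2.0, 20.0)]))  # True: U = 0.5
```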
Abstract:
The JÄKÄLA algorithm (Jatkuvan Äänitehojakautuman algoritmi Käytävien Äänikenttien LAskentaan, an algorithm based on a continuous sound power distribution for calculating the sound fields of corridors) and its NUMO and APPRO calculation equations are based on the symmetry of the image sources of a real sound source located in a corridor. NUMO is the calculation equation of the algorithm's numerical solution, and APPRO that of its approximate solution. In deriving the algorithm, it was assumed that the absorption material was distributed evenly over the sound-reflecting surfaces of the corridor. Transforming the image-source plane of a rectangular corridor into a continuous sound power distribution involves three stages. First, the rectangular image-source plane is transformed into a square one. Next, the equivalent image sources of the square image-source plane are moved onto a coordinate axis to form a discrete sequence of image sources. Finally, the image-source sequence is converted into a continuous sound power distribution, so that the sound pressure level at a receiving point in the corridor can be calculated by integrating over the continuous sound power distribution. To establish the validity of the JÄKÄLA algorithm, the tested commercial AKURI program was used. The AKURI program also gave a good idea of how the values calculated with the NUMO and APPRO equations may differ from values measured in real corridors. The NUMO and APPRO equations of the JÄKÄLA algorithm were also tested by comparing their results with sound pressure level measurements in three corridors of different types. This study has shown that, on the basis of acoustic image theory, it is possible to derive a calculation algorithm that can be applied to quick on-site assessment of the sound fields of long corridors. Both theoretical calculations and practical sound pressure level measurements in real corridors showed that the prediction accuracy of the JÄKÄLA equations was excellent in ideal corridors and good in those real corridors that had no sound-reflecting structures. The NUMO and APPRO equations appear to work well in corridors whose cross-section is nearly square and where the largest absorption coefficient of the surfaces is at most ten times the smallest. Their greatest shortcoming is that they account neither for the differing absorption coefficients of the surfaces nor for sounds reflected from objects. The NUMO and APPRO equations deviated most from the measured values in corridors where the absorption coefficient of two opposite surfaces was very large and that of the other pair very small, and in corridors containing massive sound-reflecting pillars and beams. Nevertheless, in the corridors studied, the NUMO and APPRO equations of the JÄKÄLA algorithm gave clearly more accurate values than Kuttruff's approximation equation and the basic equation of statistical room acoustics. The calculation accuracy of the JÄKÄLA algorithm has been tested in only four real corridors. To develop the algorithm further, the opposite surfaces of a corridor and their absorption coefficients should be treated in pairs in the calculation. To confirm the validity of the algorithm, more measurements are needed in corridors with differing distributions of absorption materials.
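A toy illustration of the underlying image-source idea, not the JÄKÄLA algorithm itself: mirror images of the source across two parallel corridor walls, each reflection attenuated by a single energy absorption coefficient (all values hypothetical):

```python
# 1-D mirror-image expansion across two parallel walls at x = 0 and x = width.
# Each image at |n| reflections contributes W * (1 - alpha)**|n| / (4 pi r^2)
# to the intensity at the receiver; SPL is taken re 1e-12 W/m^2.
import math

def spl_image_sources(W=1e-3, alpha=0.1, width=2.0, src_x=0.5,
                      rec_x=1.5, rec_dist=10.0, n_images=200):
    total = 0.0
    for n in range(-n_images, n_images + 1):
        # image position: even n -> n*width + src_x, odd n -> n*width + (width - src_x)
        x_img = n * width + (src_x if n % 2 == 0 else width - src_x)
        r2 = (x_img - rec_x) ** 2 + rec_dist ** 2
        total += W * (1.0 - alpha) ** abs(n) / (4.0 * math.pi * r2)
    return 10.0 * math.log10(total / 1e-12)

print(round(spl_image_sources(), 1))  # SPL in dB at 10 m down the corridor
```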
Abstract:
Organismic-centered Darwinism, in order to use direct phenotypes to measure natural selection's effect, requires genomic harmony and uniform coherence plus large population sizes. However, modern gene-centered Darwinism has found new interpretations of data that speak of genomic incoherence and disharmony. As a result of these two conflicting positions, a conceptual crisis in Biology has arisen. My position is that the presence of small, even pocket-size, demes is instrumental in generating divergence and phenotypic crisis. Moreover, the presence of parasitic genomes, as in acanthocephalan worms, which even manipulate suicidal behavior in their hosts; segregation distorters that change meiosis and Mendelian ratios; selfish genes and selfish whole chromosomes, such as the B-chromosomes in grasshoppers; P-elements in Drosophila; driving Y-chromosomes that manipulate sex ratios, making males more frequent, as in Hamilton's X-linked drive; male strategists and outlaw genes, are eloquent examples of the presence of genuinely conflicting genomes and of non-uniform phenotypic coherence and genome harmony. Thus, we propose that overall incoherence and disharmony generate disorder but also more biodiversity and creativeness. Finally, if genes can manipulate natural selection, they can multiply mutations or undesirable characteristics, even lethal or detrimental ones, hence the accumulation of genetic loads. Outlaw genes can change what is adaptively convenient, even pushing a trait away from its optimum. The optimum can be "negotiated" among the variants, not only because pleiotropic effects demand it, but also, in some cases, because selfish or outlaw genes, P-elements, or extended phenotypic manipulation require it. Under organismic Darwinism the genome in the population and in the individual was thought to act harmoniously, without conflicts, and genotypes were thought to march towards greater adaptability. Modern Darwinism has a gene-centered vision in which genes, as natural selection's objects, can move in dissonance, in whatever direction benefits their own multiplication. Thus, we have greater opportunities for genomes in permanent conflict.
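As a toy illustration of how a segregation distorter overrides Mendelian ratios (a sketch under assumed parameters, not the author's model): a driver allele D transmitted to a fraction k > 1/2 of a heterozygote's gametes spreads even with no fitness benefit.

```python
# Driver allele frequency recursion with drive alone (no selection):
# heterozygotes D/+ transmit D to a fraction k of gametes instead of 1/2.
def next_freq(p: float, k: float) -> float:
    """Frequency of driver allele D after one generation."""
    return p * p + 2.0 * p * (1.0 - p) * k

p, k = 0.01, 0.9   # rare driver, strong drive (Mendelian would be k = 0.5)
for generation in range(40):
    p = next_freq(p, k)
print(round(p, 3))  # the driver approaches fixation despite no fitness benefit
```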
Abstract:
Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied to improve the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods for selecting the best process alternative as well as optimal operating conditions are needed. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR–SR). The method is based on the equilibrium theory of chromatography, with an assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse. It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows predicting the feasible range of operating parameters that lead to the desired product purities. It can be applied to the calculation of first estimates of optimal operating conditions, the analysis of process robustness, and the early-stage evaluation of different process alternatives. The design method is utilized to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design to real, non-ideal separation problems is evaluated by means of numerical simulations. Owing to the assumption of infinite column efficiency, the developed design method is most applicable to high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects. The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach to the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach works better the higher the column efficiency and the lower the purity constraints.
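A sketch of the competitive Langmuir isotherm assumed by the design equations (standard model form; the parameter values are hypothetical):

```python
# Competitive Langmuir adsorption isotherm for a multicomponent mixture:
# q_i = a_i * c_i / (1 + sum_j b_j * c_j), where a_i are Henry constants
# and b_j equilibrium constants; stronger adsorbers displace weaker ones.
import numpy as np

def competitive_langmuir(c, a, b):
    """Solid-phase loadings q for fluid-phase concentrations c."""
    c, a, b = map(np.asarray, (c, a, b))
    return a * c / (1.0 + np.dot(b, c))

# binary mixture: component 2 adsorbs more strongly and suppresses component 1
print(competitive_langmuir(c=[1.0, 1.0], a=[2.0, 3.0], b=[0.1, 0.3]))
```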
Abstract:
In this paper we analyse the recent evolution and determinants of real wages in Mexico's manufacturing sector, using theories based on the assumption of imperfect competition in both the product and the labour markets, especially wage-bargaining theory, insider-outsider and mark-up models. We present evidence that the Mexican labour market does not behave as a traditional competitive market. The proposed explanation for this fact is that some workers enjoy advantages over others, so that they can obtain a greater share of the proceeds of the productive process. We also find that changes in the degree of competition in the output market influence the behaviour of real wages.
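One textbook way to formalise the mark-up channel mentioned above (a generic formulation, not the paper's estimated equation): firms price output as a mark-up over unit labour cost, which pins down the real wage.

```latex
% Mark-up pricing over unit labour cost and the implied real wage
\begin{align}
  P &= (1+\mu)\,\frac{W}{A}
      && \text{(price } P\text{, mark-up } \mu\text{, nominal wage } W\text{, labour productivity } A\text{)}\\
  \frac{W}{P} &= \frac{A}{1+\mu}
      && \text{(the real wage falls as product-market power } \mu \text{ rises)}
\end{align}
```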
Abstract:
This dissertation describes an approach for developing a real-time simulation of working mobile vehicles based on multibody modeling. The use of multibody modeling allows a comprehensive description of the constrained motion of the mechanical systems involved and permits real-time solution of the equations of motion. By carefully selecting the multibody formulation method, it is possible to increase the accuracy of the multibody model while still solving the equations of motion in real time. In this study, a multibody procedure based on semi-recursive and augmented Lagrangian methods for real-time dynamic simulation is studied in detail. In the semi-recursive approach, a velocity transformation matrix is introduced to map the dependent coordinates into relative (joint) coordinates, which reduces the number of generalized coordinates. The augmented Lagrangian method is based on the use of global coordinates, and constraints are accounted for through an iterative process. A multibody system can be modelled with either rigid or flexible bodies. When using flexible bodies, the system can be described using a floating frame of reference formulation, in which the needed deformation modes can be obtained from a finite element model. As the finite element model typically involves a large number of degrees of freedom, a reduced set of deformation modes can be obtained by employing model order reduction methods such as Guyan reduction, the Craig-Bampton method, and Krylov subspaces, as shown in this study. The constrained motion of working mobile vehicles is actuated by forces from hydraulic actuators. In this study, the hydraulic system is modelled using lumped fluid theory, in which the hydraulic circuit is divided into volumes; the pressure wave propagation in the hoses and pipes is neglected. Contact modelling is divided into two stages: contact detection and contact response. Contact detection determines when and where contact occurs, and contact response provides the force acting at the collision point. The friction between tire and ground is modelled using the LuGre friction model, which describes the frictional force between two surfaces. Typically, the equations of motion are solved in full-matrix form, where the sparsity of the matrices is not exploited. Increasing the number of bodies and constraint equations makes the system matrices large and sparse in structure. To increase computational efficiency, a technique for the solution of sparse matrices is proposed in this dissertation and its implementation demonstrated. To assess computing efficiency, the augmented Lagrangian and semi-recursive methods are implemented employing the sparse matrix technique. The results of a numerical example show that the proposed approach is applicable and produces appropriate results within the real-time period.
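A minimal sketch of the sparse-solution idea discussed above, assuming a banded stand-in matrix rather than an actual vehicle model:

```python
# Assemble a sparse system matrix in CSR format and solve it with a sparse
# solver instead of dense factorization; the tridiagonal matrix below is a
# stand-in for the sparse mass/constraint matrices of a chain-like multibody
# system, not a vehicle model.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 10_000
M = diags([2.0, -1.0, -1.0], offsets=[0, 1, -1], shape=(n, n), format="csr")
Q = np.ones(n)                 # stand-in generalized force vector

qdd = spsolve(M, Q)            # accelerations; exploits sparsity throughout
print(qdd[:3])
```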
Abstract:
This paper analyses the reasons for the instability of the world monetary system. The author considers the problem from both historical and contemporary perspectives. According to the view presented, the banknotes and electronic money which replaced gold and silver coins in common circulation are the most important reason for the instability. Positive and negative consequences of monetary instability are also demonstrated. Reform of the world monetary system requires agreement within the global collective hegemony of state powers and transnational corporations.
Abstract:
Textbook theory ignores capital flows: trade alone determines exchange rates and specialisation. Approaches that adequately take the effects of capital movements into account are needed, as is a new theory of economic policy that includes measures to protect the real economy from external volatility. Equilibrating textbook mechanisms cannot work unless trade-caused surpluses and deficits set exchange rates. To allow orthodox trade theory to work, one must prevent capital flows from destroying its very basis, which the IMF and wrong regulatory decisions have done, penalising production and trade. A new theory based on the real economy is proposed: a Neoclassical agenda of controlling capital flows and speculation.
Abstract:
This thesis examines the impact of the Soviet Union's collapse on the Russian Symbolic as represented through popular cinema of the post-Soviet period. The disintegration of the USSR in 1991 became one of the most traumatic experiences for many Russian people. The trauma of the collapse of the Soviet Union penetrated the everyday reality of the Russian Symbolic, leaving traces, or symptoms, in different cultural forms such as literature, the arts, television and cinema. Because popular culture usually reacts very quickly to social, political and economic shifts, it is an excellent barometer of deeper changes in society. Focusing on post-Soviet popular cinema, this thesis analyzes the symptoms of cultural and individual trauma occasioned by the momentous changes of the 1990s. The study is grounded in the psychoanalytic theory of Jacques Lacan and its interpretation by Slavoj Zizek, which emphasizes the traumatic encounter with the Real as a "hard core" of our reality. According to this paradigm, a new chain of signifiers is structured around the traumatic breach in the Symbolic, initiating a process of fantasy construction to deal with the consequences of trauma and, thus, to support our Symbolic order. This thesis examines three major fantasy constructions, namely drinking, traveling to a "happy land", and family reunion and money, in popular films by Alexander Rogozhkin, Yurij Mamin, Georgij Shengelia, Dmitrij Astrakhan, Valerij Todorovskij, Alexej Balabanov, Sergej Bodrov Jr. and Petr Buslov. According to Zizek, enjoyment underlies any fantasy construction, which is why, after the intrusion of the Real, every individual and culture must go through a process of fantasizing about substitutes that can help minimize the traumatic effect and lead to a partial enjoyment. By analyzing the fantasies about drinking, the "happy land", the reconstruction of family bonds, and money in Russian popular cinema since 1991, this thesis demonstrates how the traumatic engagement with the Real affected the everyday lives of Russian people, and how individuals tried to fill the gap, the lack, in the post-Soviet Symbolic and "return" the lost feeling of unity and plenitude.