927 results for "distributions to shareholders"


Relevance: 80.00%

Abstract:

This thesis is a collection of three articles in macroeconomics and public finance. It develops dynamic stochastic general equilibrium models to analyze the macroeconomic implications of corporate tax policies in the presence of imperfect financial markets. The first chapter analyzes the mechanisms through which a rescheduling of the corporate profit tax is transmitted to the economy. In an economy consisting of a government, a representative firm, and a representative household, I develop a Ricardian equivalence theorem for the corporate profit tax. More specifically, I establish that if financial markets are perfect, a rescheduling of the corporate profit tax that leaves unchanged the present value of the total tax the firm owes over its lifetime has no real effect on the economy when the government uses a lump-sum tax. Next, in the presence of imperfect financial markets, I show that a temporary cut in the lump-sum corporate profit tax stimulates investment because it temporarily reduces the marginal cost of investment. Finally, my results indicate that if the tax is proportional to corporate profit, the anticipation of higher future taxes reduces the expected return on investment and dampens the investment stimulus generated by the tax cut. The second chapter is co-authored with Rui Castro. In this article, we quantify the effects of a temporary corporate profit tax cut, in the presence of imperfect financial markets, on firms' individual investment and production decisions as well as on macroeconomic aggregates.
In a model where firms are subject to idiosyncratic productivity shocks, we first establish that credit rationing affects small (young) firms more than large ones. Among firms of the same size, the most productive firms suffer the most from the liquidity shortage caused by financial market imperfections. We then show that for each dollar cut in tax revenue, investment and output rise by 26 and 3.5 cents respectively. The cumulative effect amounts to an increase in aggregate investment and output of 4.6 and 7.2 cents respectively. At the individual level, our results indicate that the policy stimulates the investment of small, initially liquidity-constrained firms while reducing the investment of large, initially unconstrained firms. The third chapter analyzes the effects of the corporate income tax reform proposed by the U.S. Treasury in 1992. The reform proposal recommends eliminating taxes on dividends and capital gains and levying a single tax on corporate income. To this end, I use a dynamic stochastic general equilibrium model with imperfect financial markets in which firms are subject to idiosyncratic productivity shocks. The results indicate that abolishing taxes on dividends and capital gains reduces distortions in firms' investment choices, stimulates investment, and leads to a better allocation of capital. To remain fiscally sustainable, however, the reform requires raising the corporate profit tax rate from 34% to 42%. This higher tax rate discourages capital accumulation.
In sum, the reform lowers capital accumulation and output by 8% and 1% respectively. Nevertheless, it improves the allocation of capital by 20%, generating productivity gains of 1.41% and a modest increase in consumer welfare.
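
The neutrality result of the first chapter can be illustrated with a minimal numerical sketch. All figures below are invented for illustration and are not the thesis's calibration: two lump-sum tax schedules with equal present value leave the firm's value unchanged when financial markets are perfect.

```python
# Hypothetical two-period illustration of the tax-rescheduling neutrality
# result: under perfect financial markets, two lump-sum corporate tax
# schedules with the same present value leave firm value unchanged.

def firm_value(profits, taxes, r):
    """Present value of after-tax profits discounted at rate r."""
    return sum((p - t) / (1 + r) ** i for i, (p, t) in enumerate(zip(profits, taxes)))

r = 0.05
profits = [100.0, 100.0]

# Schedule A: pay 30 now and 30 next period.
taxes_a = [30.0, 30.0]
# Schedule B: defer all taxes to period 1, preserving the present value.
taxes_b = [0.0, 30.0 + 30.0 * (1 + r)]

pv_a = sum(t / (1 + r) ** i for i, t in enumerate(taxes_a))
pv_b = sum(t / (1 + r) ** i for i, t in enumerate(taxes_b))
assert abs(pv_a - pv_b) < 1e-9  # equal present value of taxes

# Firm value is identical under both schedules.
print(firm_value(profits, taxes_a, r))
print(firm_value(profits, taxes_b, r))
```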

Relevance: 80.00%

Abstract:

We present a novel method, called the transform likelihood ratio (TLR) method, for estimating rare-event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique, the TLR method reduces the original rare-event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established, we estimate the rare-event probability via importance sampling, using either the classical exponential change of measure or the standard likelihood ratio change of measure. In the latter case, the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool in the sense that not only does it provide a unified view of heavy-tailed simulation, but it can also be used efficiently in simulation with light-tailed distributions. We present extensive simulation results which support the efficiency of the TLR method.
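
A simplified sketch in the spirit of the approach (not the paper's exact algorithm; all parameter values are arbitrary): to estimate P(X1 + X2 > gamma) for i.i.d. Pareto risks, the change of variables X = exp(Y/alpha) maps each heavy-tailed Pareto variable to a light-tailed Exp(1) variable Y, and importance sampling then applies an exponential change of measure Exp(lam) on Y.

```python
# Transform-then-importance-sample sketch for a heavy-tailed rare event.
import math
import random

def tlr_estimate(alpha, gamma, lam, n, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Sample Y1, Y2 ~ Exp(lam) under the importance sampling measure.
        y1 = rng.expovariate(lam)
        y2 = rng.expovariate(lam)
        # Map back to the heavy-tailed (Pareto) scale: X = exp(Y / alpha).
        x1, x2 = math.exp(y1 / alpha), math.exp(y2 / alpha)
        if x1 + x2 > gamma:
            # Likelihood ratio of the nominal Exp(1) law against Exp(lam).
            lr = math.exp(-(1 - lam) * (y1 + y2)) / (lam * lam)
            total += lr
    return total / n

# P(X1 + X2 > 100) for Pareto(1) risks is roughly 2/100 = 0.02 asymptotically.
est = tlr_estimate(alpha=1.0, gamma=100.0, lam=0.2, n=50000)
print(est)
```

In the full method the tilting parameter would be chosen by the cross-entropy method rather than fixed by hand as here.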

Relevance: 80.00%

Abstract:

A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
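
The naive mean field method the book covers can be sketched in a few lines for a toy Ising model (couplings and fields below are made up): each spin's magnetization m_i solves the self-consistency equations m_i = tanh(beta * (sum_j J[i][j] * m_j + h[i])), which fit the factorized approximation q(s) = prod_i q_i(s_i) by fixed-point iteration.

```python
# Naive mean field fixed-point iteration for a small Ising model.
import math

def mean_field(J, h, beta, iters=200):
    n = len(h)
    m = [0.5] * n  # initial magnetizations
    for _ in range(iters):
        m = [math.tanh(beta * (sum(J[i][j] * m[j] for j in range(n)) + h[i]))
             for i in range(n)]
    return m

# Three ferromagnetically coupled spins with a small external field.
J = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
h = [0.1, 0.1, 0.1]
m = mean_field(J, h, beta=1.0)
print(m)
```

The TAP approach mentioned above would add a reaction (Onsager) correction term to these equations; the variational and graphical-model methods generalize the factorization itself.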

Relevance: 80.00%

Abstract:

There has been a revival of interest in economic techniques for measuring the value of a firm to its shareholders, notably through the use of economic value added (EVA). This technique, based upon the concept of economic value equating to total value, is founded upon the assumptions of classical liberal economic theory. Such techniques have been criticised both for the extent of adjustment to published accounts needed to make the technique work and for their validity in actually measuring value in a meaningful context. This paper critiques economic value added techniques as a means of calculating changes in shareholder value, contrasting them with more traditional techniques of measuring value added. It uses the company Severn Trent plc as an actual example in order to evaluate and contrast the techniques in action. The paper demonstrates discrepancies between the calculated results from economic value added analysis and those reported using conventional accounting measures. It considers the merits of the respective techniques in explaining shareholder and managerial behaviour and the problems with using such techniques in considering the wider stakeholder concept of value. It concludes that the economic value added technique has merits when compared with traditional accounting measures of performance but that it does not provide the universal panacea claimed by its proponents.
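
The core calculation being critiqued is simple: EVA equals net operating profit after tax (NOPAT) minus a charge for the capital employed. The figures below are invented for illustration and are not Severn Trent plc's actual accounts.

```python
# Minimal illustration of the economic value added (EVA) calculation:
# EVA = NOPAT - WACC * invested capital.

def eva(nopat, wacc, invested_capital):
    """Residual profit after a charge for the cost of capital."""
    return nopat - wacc * invested_capital

# A firm earning 120 on 1000 of capital costing 9% creates value ...
print(eva(nopat=120.0, wacc=0.09, invested_capital=1000.0))
# ... while the same accounting profit on 1500 of capital destroys it.
print(eva(nopat=120.0, wacc=0.09, invested_capital=1500.0))
```

The contested part in practice is not this arithmetic but the many adjustments to published accounts needed to arrive at defensible NOPAT and capital figures.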

Relevance: 80.00%

Abstract:

Distributed source analyses of half-field pattern onset visual evoked magnetic responses (VEMR) were carried out by the authors with a view to locating the source of the largest of the components, the CIIm. The analyses were performed using a series of realistic source spaces taking into account the anatomy of the visual cortex. Accuracy was enhanced by constraining the source distributions to lie within the visual cortex only. Further constraints on the source space yielded reliable, but possibly less meaningful, solutions.

Relevance: 80.00%

Abstract:

This work presents pressure distributions and fluid flow patterns on the shellside of a cylindrical shell-and-tube heat exchanger. The apparatus used was constructed from glass, enabling direct observation of the flow using a dye release technique, and had ten traversable pressure-instrumented tubes permitting detailed pressure distributions to be obtained. The 'exchanger' had a large tube bundle (278 tubes) and main flow areas typical of practical designs. Six geometries were studied: three baffle spacings, both with and without baffle leakage. Results are also presented of three-dimensional modelling of shellside flows using the Harwell Laboratory's FLOW3D code. Flow visualisation provided flow patterns in the central plane of the bundle and adjacent to the shell wall. Comparison of these highlighted significant radial flow variations. In particular, separated regions, originating from the baffle tips, were observed. The size of these regions was small in the bundle central plane but large adjacent to the shell wall, where they extended into the bypass lane. This appeared to reduce the bypass flow area and hence the bypass flow fraction. The three-dimensional flow modelling results were presented as velocity vector and isobar maps. The vector maps illustrated regions of high and low velocity which could be prone to tube vibration and fouling. Separated regions were also in evidence. A non-uniform crossflow was discovered with, in general, higher velocities in the central plane of the bundle than near the shell wall. The form of the isobar maps calculated by FLOW3D was in good agreement with experimental results. In particular, larger pressure drops occurred across the inlet than the outlet of a crossflow region and were higher near the upstream than the downstream baffle face. The effect of baffle spacing and baffle leakage on crossflow and window pressure drop measurements was identified.
Agreement between the current measurements, previously obtained data, and commonly used design correlations/models was, in general, poor. This was explained in terms of the increased understanding of shellside flow. The bulk of previous data, which derives from small-scale rigs with few tubes, has been shown to be unrepresentative of typical commercial units. The Heat Transfer and Fluid Flow Service design program TASC provided the best predictions of the current pressure drop results. However, a number of simple one-dimensional models in TASC are, individually, questionable. Some revised models have been proposed.

Relevance: 80.00%

Abstract:

Predicting future need for water resources has traditionally been, at best, a crude mixture of art and science. This has prevented the evaluation of water need from being carried out in either a consistent or comprehensive manner. This inconsistent and somewhat arbitrary approach to water resources planning led to well-publicised premature developments in the 1970s and 1980s, but privatisation of the Water Industry, including creation of the Office of Water Services and the National Rivers Authority in 1989, turned the tide of resource planning to the point where funding of schemes and their justification by the Regulators could no longer be assumed. Furthermore, considerable areas of uncertainty were beginning to enter the debate and complicate the assessment. It was also no longer appropriate to consider that contingencies would continue to lie solely on the demand side of the equation. An inability to calculate the balance between supply and demand may mean an inability to meet standards of service or, arguably worse, an excessive provision of water resources and excessive costs to customers. The United Kingdom Water Industry Research Limited (UKWIR) Headroom project in 1998 provided a simple methodology for the calculation of planning margins. This methodology, although well received, was not, however, accepted by the Regulators as a tool sufficient to promote resource development. This thesis begins by considering the history of water resource planning in the UK, moving on to discuss events following privatisation of the water industry in 1989. The mid section of the research forms the bulk of original work and provides a scoping exercise which reveals a catalogue of uncertainties prevalent within the supply-demand balance. Each of these uncertainties is considered in terms of materiality, scope, and whether it can be quantified within a risk analysis package. Many of the areas of uncertainty identified would merit further research.
A workable, yet robust, methodology for evaluating the balance between water resources and water demands by using a spreadsheet-based risk analysis package is presented. The technique involves statistical sampling and simulation: samples are taken from input distributions on both the supply and demand sides of the equation, and the imbalance between supply and demand is calculated in the form of an output distribution. The percentiles of the output distribution represent different standards of service to the customer. The model allows dependencies between distributions to be considered, improved uncertainties to be assessed, and the impact of uncertain solutions to any imbalance to be calculated directly. The method is considered a significant leap forward in the field of water resource planning.
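
A toy version of the sampling scheme described above can be sketched as follows. The distributional forms and parameters are purely illustrative, not the thesis's calibrated inputs: sample supply and demand from input distributions, form the output distribution of the surplus, and read off percentiles as levels of service.

```python
# Monte Carlo supply-demand balance: the output distribution of the
# surplus (supply minus demand) summarises the planning headroom.
import random

def simulate_surplus(n=100000, seed=7):
    rng = random.Random(seed)
    surpluses = []
    for _ in range(n):
        supply = rng.gauss(110.0, 8.0)   # available water (illustrative units)
        demand = rng.gauss(100.0, 10.0)  # forecast demand (illustrative units)
        surpluses.append(supply - demand)
    surpluses.sort()
    return surpluses

def percentile(sorted_vals, p):
    return sorted_vals[int(p / 100.0 * (len(sorted_vals) - 1))]

s = simulate_surplus()
# The 5th percentile of the surplus: the headroom exceeded 95% of the time.
print(percentile(s, 5))
```

A spreadsheet risk analysis package does essentially this, with the addition of correlated input distributions where dependencies matter.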

Relevance: 80.00%

Abstract:

This study models the potential future distribution of four Mediterranean pines, supported by the EUFORGEN digital area database (distribution maps), the Spatial Analyst module of ESRI ArcGIS 10 (modeling environment), PAST (statistical calibration of the model), and the REMO regional climate model (climatic data). The studied species were Pinus brutia, Pinus halepensis, Pinus pinaster, and Pinus pinea. The climate data were available on a 25 km resolution grid for the reference period (1961-90) and two future periods (2011-40, 2041-70). The climate model was based on the IPCC SRES A1B scenario. The model results show an explicit northward shift of the distributions for three of the four studied species. The future (2041-70) climate of Western Hungary appears to be suitable for Pinus pinaster.

Relevance: 80.00%

Abstract:

The FHA program to insure reverse mortgages has brought additional attention to the use of home equity conversion to increase income to the elderly. Using simulation, this study compares the economic consequences of the FHA reverse mortgage with two alternative conversion vehicles: sale of a remainder interest and sale-leaseback. An FHA-insured plan is devised for each vehicle, structured to represent a fair substitute for the FHA mortgage. In addition, the FHA mortgage is adjusted to allow for a 4 percent annual increase in distributions to the homeowner. The viability of each plan for the homeowner, the financial institution, and the FHA is investigated using different assumptions for house appreciation, tax rates, and homeowners' initial ages. For the homeowner, the return of each vehicle is compared with the choice of not employing home equity conversion. The study examines the impact of tax and accounting rules on the selection of alternatives. The study investigates the sensitivity of the FHA model to some of its assumptions. Although none of the vehicles is Pareto optimal, the study shows that neither the sale of a remainder interest nor the sale-leaseback is a viable alternative vehicle to the homeowner. While each of these vehicles is profitable to the financial institution, the profits are not high enough to transfer benefits to the homeowner and still be workable. The effects of tax rate, house appreciation rate, and homeowner's initial age are surprisingly small. As a general rule, none of these factors materially impacts the decision of either the homeowner or the financial institution. Tax and accounting rules were found to have minimal impact on the selection of vehicles. The sensitivity analysis indicates that none of the variables studied alone is likely to materially affect the FHA's profitability.
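
The adjusted payout schedule mentioned above (distributions growing 4 percent per year) can be sketched in a few lines. The starting payment and horizon below are hypothetical, chosen only to show the shape of the stream.

```python
# Annual distributions to the homeowner growing geometrically at 4%/year.

def distributions(first_payment, growth, years):
    """Payment stream growing geometrically each year."""
    return [first_payment * (1 + growth) ** t for t in range(years)]

pay = distributions(first_payment=6000.0, growth=0.04, years=5)
print([round(p, 2) for p in pay])
```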

Relevance: 80.00%

Abstract:


Continuous variables are among the major data types collected by survey organizations. Such data may be incomplete, requiring the data collectors to fill in the missing values, or they may contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to sum values within cells defined by different features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.

The first method is for limiting the disclosure risk of the continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing disclosure risks in releasing such synthetic magnitude microdata. The illustration on a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
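
One standard device for guaranteeing fixed marginal totals, which fixed-sum approaches like the one described can exploit, is sketched below with invented cell means: independent Poisson counts conditioned on their sum follow a multinomial distribution, so drawing a multinomial with probabilities proportional to the Poisson means yields synthetic cells that sum exactly to the original total.

```python
# Synthesize non-negative integer cells with a guaranteed fixed total:
# Poisson cells conditioned on their sum are multinomial.
import random

def synthesize_row(total, means, seed=3):
    rng = random.Random(seed)
    s = sum(means)
    probs = [m / s for m in means]
    counts = [0] * len(means)
    for _ in range(total):  # sequential multinomial draw
        u, acc = rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if u <= acc:
                counts[i] += 1
                break
        else:
            counts[-1] += 1  # guard against floating-point round-off
    return counts

row = synthesize_row(total=500, means=[10.0, 25.0, 65.0])
print(row, sum(row))
```

In the thesis's setting the means would come from the fitted Poisson mixture components rather than being fixed by hand.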

The second method is for releasing synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model on the confidential values and then generates multiple synthetic datasets from this model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals. Its basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.

The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence. The new method separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., mostly missing) ones. The sub-model structure of the focused variables is more complex than that of the non-focused ones. At the same time, their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model properties suggest that moving strongly associated non-focused variables to the side of the focused ones can help improve estimation accuracy, which is examined in several simulation studies. The method is applied to data from the American Community Survey.

Relevance: 80.00%

Abstract:

This study looks at the impact of the recent financial crisis on the short-term performance of European acquisitions. We use institutional theory and transaction cost economic theory to study whether bidders derive lower or higher returns from acquisitions announced after 2008. We investigate shareholders’ stock price reaction to 2245 deals which occurred during 2004–12 across 22 European Union countries. Our results from both univariate and multivariate analysis show that the deals announced in the post-crisis period, corresponding to the period of economic recession, generate higher returns to shareholders as compared to acquisitions announced in the pre-crisis period. We also test the relevance of the Economic and Monetary Union (EMU), that is, the Eurozone, to this value accrual during the recessionary period. We observe that non-EMU transactions obtain significantly higher gains vis-à-vis EMU transactions in the post-crisis years. Overall, announcement returns of European acquisitions have been affected by the financial crisis and the global recession; and companies that target countries with different currency regimes are likely to generate better returns from their acquisitions.
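
The announcement-return methodology used in studies like this one reduces to computing abnormal returns against a market model and cumulating them over an event window. The return series and model estimates below are fabricated for illustration only.

```python
# Event-study sketch: abnormal return AR_t = R_t - (alpha + beta * Rm_t),
# cumulated over the event window to give the CAR.

def car(returns, market, alpha, beta):
    """Cumulative abnormal return over the event window."""
    return sum(r - (alpha + beta * rm) for r, rm in zip(returns, market))

# Three-day event window around a deal announcement (hypothetical numbers).
stock  = [0.012, 0.031, -0.004]
market = [0.005, 0.010,  0.002]
print(car(stock, market, alpha=0.0002, beta=1.1))
```

Cross-sectional regressions of such CARs on deal characteristics (pre- versus post-crisis, EMU versus non-EMU) are then what the multivariate analysis refers to.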

Relevance: 80.00%

Abstract:

We report the discovery, tracking, and detection circumstances for 85 trans-Neptunian objects (TNOs) from the first 42 deg2 of the Outer Solar System Origins Survey. This ongoing r-band solar system survey uses the 0.9 deg2 field of view MegaPrime camera on the 3.6 m Canada–France–Hawaii Telescope. Our orbital elements for these TNOs are precise to a fractional semimajor axis uncertainty <0.1%. We achieve this precision in just two oppositions, as compared to the normal three to five oppositions, via a dense observing cadence and innovative astrometric technique. These discoveries are free of ephemeris bias, a first for large trans-Neptunian surveys. We also provide the necessary information to enable models of TNO orbital distributions to be tested against our TNO sample. We confirm the existence of a cold "kernel" of objects within the main cold classical Kuiper Belt and infer the existence of an extension of the "stirred" cold classical Kuiper Belt to at least several au beyond the 2:1 mean motion resonance with Neptune. We find that the population model of Petit et al. remains a plausible representation of the Kuiper Belt. The full survey, to be completed in 2017, will provide an exquisitely characterized sample of important resonant TNO populations, ideal for testing models of giant planet migration during the early history of the solar system.

Relevance: 80.00%

Abstract:

This thesis studies the modeling of dependence between risks in non-life insurance, particularly in the context of loss reserving methods and of ratemaking. We describe the current context and the challenges of dependence modeling, and the importance of such an approach given the new standards and requirements of regulatory bodies regarding the solvency of general insurance companies. Recently, Shi and Frees (2011) suggested incorporating dependence between two lines of business through a bivariate copula that captures the dependence between two equivalent cells of two loss development triangles. We propose two different approaches to generalize this model. The first is based on hierarchical Archimedean copulas, and the second on random effects and the Sarmanov family of bivariate distributions. In Chapter 2, we first consider a model using the class of hierarchical Archimedean copulas, more precisely the family of partially nested copulas, in order to capture dependence within and between two lines of business through calendar-year effects. We then consider an alternative model, drawn from another class of the hierarchical Archimedean copula family, that of fully nested copulas, to model dependence between more than two lines of business. An approach to risk aggregation based on a tree of bivariate copulas is also explored. An important feature of the approach described in Chapter 3 is that inference on the dependence structure is carried out through the ranks of the residuals, in order to guard against possible misspecification of the marginal distributions and of the copula governing the dependence. As a second approach, we also model dependence through random effects.
To do so, we consider the Sarmanov family of bivariate distributions, which allows flexible modeling within and between lines of business through calendar-year, accident-year, and development-period effects. Closed-form expressions for the joint distribution, together with an empirical illustration on loss development triangles, are presented in Chapter 4. We also propose a model with dynamic random effects, in which more weight is given to the most recent years and information from the correlated line is used to better predict risk. This last approach is studied in Chapter 5 through a numerical application to claim counts, illustrating the usefulness of such a model for ratemaking. The thesis concludes with a review of its scientific contributions, along with possible directions for extending this work.
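
The basic building block used throughout this line of work, sampling from a bivariate Archimedean copula to induce dependence between two lines of business, can be sketched for the Clayton family (the parameter value below is arbitrary; theta > 0 controls the strength of lower-tail dependence).

```python
# Conditional-distribution method for the bivariate Clayton copula:
# C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta).
import random

def sample_clayton(theta, n, seed=11):
    """Return n pairs (U1, U2) with uniform margins and Clayton dependence."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        u = rng.random()
        v = rng.random()  # auxiliary uniform for the conditional inverse
        u2 = (u ** (-theta) * (v ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
        pairs.append((u, u2))
    return pairs

pairs = sample_clayton(theta=2.0, n=20000)
# Both margins are uniform on (0, 1); the dependence is positive.
print(sum(u for u, _ in pairs) / len(pairs))
```

Hierarchical (partially or fully nested) Archimedean constructions compose such bivariate generators to extend the dependence to more than two lines.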

Relevance: 80.00%

Abstract:

Finance is one of the fastest growing areas in modern applied mathematics with real world applications. The interest of this branch of applied mathematics is best described by an example involving shares. Shareholders of a company receive dividends which come from the profit made by the company. The proceeds of the company, once it is taken over or wound up, will also be distributed to shareholders. Therefore shares have a value that reflects the views of investors about the likely dividend payments and capital growth of the company. Naturally, such value is quantified by the share price on stock exchanges. Financial modelling therefore serves to understand the correlations between asset movements and buy/sell decisions in order to reduce risk. Such activities depend on financial analysis tools being available to the trader, with which rapid and systematic evaluation of buy/sell contracts can be made. There are other financial activities, and it is not the intention of this paper to discuss all of them. The main concern of this paper is to propose a parallel algorithm for the numerical solution of a European option. This paper is organised as follows. First, a brief introduction is given of a simple mathematical model for European options and possible numerical schemes for solving it. Second, the Laplace transform is applied to the mathematical model, which leads to a set of parametric equations whose solutions may be found concurrently. Numerical inversion of the Laplace transform is done by means of an algorithm developed by Stehfest. The scalability of the algorithm in a distributed environment is demonstrated. Third, the performance of the present algorithm is compared with that of a spatial domain decomposition developed particularly for the time-dependent heat equation. Finally, a number of issues are discussed and future work is suggested.
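
The Stehfest inversion step the paper relies on can be sketched as follows. Given a Laplace-space function F(s), f(t) is approximated from N real evaluations of F, and those evaluations are mutually independent, which is the source of the parallelism exploited by the algorithm. The sanity-check transform pair below is a textbook example, not the paper's option-pricing model.

```python
# Stehfest numerical inverse Laplace transform (even N).
import math

def stehfest_weights(n):
    """Stehfest coefficients V_k for even n."""
    half = n // 2
    v = []
    for k in range(1, n + 1):
        total = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            total += (j ** half * math.factorial(2 * j)
                      / (math.factorial(half - j) * math.factorial(j)
                         * math.factorial(j - 1) * math.factorial(k - j)
                         * math.factorial(2 * j - k)))
        v.append((-1) ** (k + half) * total)
    return v

def invert(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    ln2t = math.log(2.0) / t
    v = stehfest_weights(n)
    # Each F(k * ln2 / t) evaluation is independent -> trivially parallel.
    return ln2t * sum(v[k - 1] * F(k * ln2t) for k in range(1, n + 1))

# Sanity check with a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
print(invert(lambda s: 1.0 / (s + 1.0), t=1.0))
```

In the paper's setting, F would be the Laplace-transformed option value, and the N evaluations would be distributed across processors before the weighted sum is formed.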