925 results for Multi factor affine processes
Abstract:
This paper provides the first investigation of bond mutual fund performance during recession and expansion periods separately. Based on multi-factor performance evaluation models, results show that bond funds significantly underperform the market during both phases of the business cycle. Nevertheless, unlike equity funds, bond funds exhibit considerably higher alphas during good economic states than during market downturns. These results, however, seem entirely driven by the global financial crisis subperiod. In contrast, during the recession associated with the Euro sovereign debt crisis, bond funds are able to achieve neutral performance. This improved performance throughout the debt crisis seems to be related to more conservative investment strategies, which reflect an increase in managers' risk aversion.
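As background on the kind of specification usually meant by "multi-factor performance evaluation models", a generic version with state-dependent alphas can be written as below; the symbols and the recession dummy are illustrative assumptions rather than the paper's exact model.

\[
  r_{p,t} - r_{f,t} = \alpha_{p,\mathrm{exp}} + \alpha_{p,\mathrm{rec}} D_t + \sum_{k=1}^{K} \beta_{p,k} F_{k,t} + \varepsilon_{p,t},
\]

where \(r_{p,t} - r_{f,t}\) is the fund's excess return, \(D_t\) is a recession indicator, \(F_{k,t}\) are bond-market factors (for instance term and default spread factors), and \(\alpha_{p,\mathrm{exp}}\) and \(\alpha_{p,\mathrm{exp}} + \alpha_{p,\mathrm{rec}}\) measure risk-adjusted performance in expansions and recessions respectively.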
Abstract:
This Ph.D. thesis contains four essays in mathematical finance, focusing on pricing Asian options (Chapter 4), pricing futures and futures options (Chapters 5 and 6) and time-dependent volatility in futures options (Chapter 7). Chapter 4 investigates the applicability of the comonotonicity approach of Albrecher et al. (2005) in the context of various benchmark models for equities and commodities. Instead of the classical Levy models of Albrecher et al. (2005), the focus is on the Heston stochastic volatility model, the constant elasticity of variance (CEV) model and the Schwartz (1997) two-factor model. It is shown that the method delivers rather tight upper bounds for Asian option prices in these models and, as a by-product, delivers super-hedging strategies which can easily be implemented. Chapter 5 studies two types of three-factor models for valuing commodity futures contracts, both allowing volatility to be stochastic. Both models have closed-form solutions for futures prices. However, it is shown that Model 2 is better than Model 1 theoretically and also performs very well empirically. Moreover, Model 2 can easily be implemented in practice. In comparison with the Schwartz (1997) two-factor model, Model 2 is shown to have its own advantages; hence, it is also a good choice for pricing commodity futures contracts. Furthermore, if the two models are used at the same time, a more accurate price for commodity futures contracts can be obtained in most situations. Chapter 6 investigates the applicability of the asymptotic approach developed in Fouque et al. (2000b) for pricing commodity futures options in a Schwartz (1997) multi-factor model, featuring both stochastic convenience yield and stochastic volatility. It is shown that the zero-order term in the expansion coincides with the Schwartz (1997) two-factor term with averaged volatility, and an explicit expression for the first-order correction term is provided. Using empirical data from the natural gas futures market, it is also demonstrated that a significantly better calibration can be achieved with the correction term than with the standard Schwartz (1997) two-factor expression, at virtually no extra effort. In Chapter 7, a new pricing formula is derived for futures options in the Schwartz (1997) two-factor model with time-dependent spot volatility. The pricing formula can also be used to recover the time-dependent spot volatility from futures option prices observed in the market. Furthermore, the limitations of the method used to find the time-dependent spot volatility are explained, and it is shown how to check its accuracy.
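For orientation, the Schwartz (1997) two-factor model referred to throughout Chapters 5 to 7 is usually written with the dynamics below; this is the standard formulation from the literature, given only as a reminder and not as a transcription of the thesis's own equations.

\[
  dS_t = (\mu - \delta_t)\, S_t\, dt + \sigma_1 S_t\, dW^1_t, \qquad
  d\delta_t = \kappa(\alpha - \delta_t)\, dt + \sigma_2\, dW^2_t, \qquad
  dW^1_t\, dW^2_t = \rho\, dt,
\]

where \(S_t\) is the spot commodity price and \(\delta_t\) the instantaneous convenience yield, mean-reverting to \(\alpha\) at speed \(\kappa\). Under the risk-neutral measure the model admits a closed-form expression for futures prices, which is the property exploited in the chapters above.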
Abstract:
Nowadays, cities deal with unprecedented pollution and overpopulation problems, and Internet of Things (IoT) technologies are supporting them in facing these issues and becoming increasingly smart. IoT sensors embedded in public infrastructure can provide granular data on the urban environment, and help public authorities make their cities more sustainable and efficient. Nonetheless, this pervasive data collection also raises serious surveillance risks, jeopardizing privacy and data protection rights. Against this backdrop, this thesis addresses how IoT surveillance technologies can be implemented in a legally compliant and ethically acceptable fashion in smart cities. An interdisciplinary approach is embraced to investigate this question, combining doctrinal legal research (on privacy, data protection and criminal procedure) with insights from philosophy, governance and urban studies. The fundamental normative argument of this work is that surveillance constitutes a necessary feature of modern information societies. Nonetheless, as the complexity of surveillance phenomena increases, there emerges a need for more finely attuned proportionality assessments to ensure the legitimate implementation of monitoring technologies. This research tackles this gap from different perspectives, analyzing EU data protection legislation and United States and European case law on privacy expectations and surveillance. Specifically, a coherent multi-factor test assessing privacy expectations in public IoT environments and a surveillance taxonomy are proposed to inform proportionality assessments of surveillance initiatives in smart cities. These insights are also applied to four use cases: facial recognition technologies, drones, environmental policing, and smart nudging. Lastly, the investigation examines competing data governance models in the digital domain and the smart city, reviewing the EU's upcoming data governance framework. It is argued that, despite the stated policy goals, the balance of interests may often favor corporate data-sharing strategies, to the detriment of common-good uses of data in the urban context.
Abstract:
Although emotion is an important factor in our everyday lives, it is often overlooked in the development of systems intended to be used by people. In this work we present an architecture for a ubiquitous group decision support system able to support people in group decision processes. The system considers the emotional factors of the participants involved, as well as the argumentation between them. Particular attention is given to one of the components of this system: the multi-agent simulator, which models the human participants, considers their emotional characteristics, and allows the exchange of hypothetical arguments among the participants.
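Purely as an illustration of the kind of data structure such a simulator might build on, and not the architecture actually proposed in this work, a participant agent could be sketched along these lines, with hypothetical names and an arbitrary valence/arousal emotion representation:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Emotion:
    # Hypothetical emotional state of a simulated participant
    # (valence/arousal representation chosen only for illustration).
    valence: float = 0.0   # negative .. positive
    arousal: float = 0.0   # calm .. excited

@dataclass
class ParticipantAgent:
    name: str
    emotion: Emotion = field(default_factory=Emotion)
    arguments: List[str] = field(default_factory=list)

    def receive_argument(self, argument: str, persuasiveness: float) -> None:
        """Store an incoming argument and nudge the agent's emotional state."""
        self.arguments.append(argument)
        # A persuasive argument slightly raises valence; a weak one slightly lowers it.
        self.emotion.valence += 0.1 * (persuasiveness - 0.5)

# Example exchange between simulated participants.
alice = ParticipantAgent("Alice")
alice.receive_argument("Option B reduces cost by 10%", persuasiveness=0.8)
print(alice.emotion.valence)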
Abstract:
This study investigated group processes as potential mediators or moderators of intervention response, in terms of both positive developmental outcomes and the reduction of negative outcomes, by evaluating the utility of a group measure adapted from a widely known measure of group impact in the group therapy research literature. Four group processes were of primary interest: (1) Group Impact; (2) Facilitator Impact; (3) Skills Impact; and (4) Exploration Impact, as assessed by the Session Evaluation Form (SEF). Outcome measures included the Personally Expressive Activities Questionnaire (PEAQ), the Erikson Psycho-Social Index (EPSI) and the Zill Behavior Items, Behavior Problem Index (ZBI (BPI)). The sample consisted of 121 multi-ethnic participants drawn from four alternative high schools in the Miami-Dade County Public School system. Using a latent growth curve modeling approach with structural equation modeling (SEM) statistics, preliminary analyses were conducted to evaluate the psychometric properties of the SEF and its role in the mediation or moderation of intervention outcome. Preliminary results revealed evidence of a single higher-order factor representing a "General" global reaction to the program, hypothesized to be a "Positive Group Climate" construct, as opposed to the four distinct group processes initially hypothesized to affect outcomes. This single "General" global latent factor (the "Positive Group Climate" construct) did not significantly predict treatment response on any of the outcome variables, whether as a mediator or a moderator. Nevertheless, the evidence of an underlying "General" global latent factor ("Positive Group Climate" construct) has important implications for future research on positive youth development programs as well as for group therapy research.
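For readers less familiar with higher-order factor structures, the finding can be summarized in generic SEM notation as follows; the symbols are illustrative and not taken from the study itself.

\[
  y_i = \lambda_i\, \eta_{j(i)} + \varepsilon_i, \qquad
  \eta_j = \gamma_j\, \xi + \zeta_j, \quad j = 1,\dots,4,
\]

where the observed SEF items \(y_i\) load on the four first-order process factors \(\eta_j\) (Group, Facilitator, Skills and Exploration Impact), which in turn load on a single second-order factor \(\xi\), the hypothesized "Positive Group Climate" construct.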
Abstract:
Field lab: Business project
Abstract:
The delivery kinetics of growth factors has been suggested to play an important role in the regeneration of peripheral nerves following axotomy. In this context, we designed a nerve conduit (NC) with adjustable release kinetics of nerve growth factor (NGF). A multi-ply system was designed in which an NC consisting of a polyelectrolyte alginate/chitosan complex was coated with layers of poly(lactide-co-glycolide) (PLGA) to control the release of embedded NGF. Prior to assessing the in vitro NGF release from the NC, various release test media, with and without stabilizers for NGF, were evaluated to ensure adequate quantification of NGF by ELISA. Citrate (pH 5.0) and acetate (pH 5.5) buffered saline solutions containing 0.05% Tween 20 yielded the most reliable results for ELISA-active NGF. The in vitro release experiments revealed that the best results in terms of reproducibility and release control were achieved when the NGF was embedded between two PLGA layers and the ends of the NC were tightly sealed by the PLGA coatings. The release kinetics could be efficiently adjusted by accommodating NGF at different radial locations within the NC. A sustained release of bioactive NGF in the low nanogram-per-day range was obtained for at least 15 days. In conclusion, the developed multi-ply NGF-loaded NC is considered a suitable candidate for future implantation studies to gain insight into the relationship between local growth factor availability and nerve regeneration.
Abstract:
The abandonment of agricultural land in mountainous areas has been an outstanding problem throughout the last century and has captured the attention of scientists, technicians and administrations because of the dramatic consequences that have sometimes occurred due to soil instability, steep slopes, rainfall regimes and wildfires. Hydromorphological and pedological alterations causing exceptional floods and accelerated erosion processes have therefore been studied, and the cause has been identified in the loss of landscape heterogeneity. With the disappearance of agricultural works and drainage maintenance, slope stability has been severely affected. The mechanization of agriculture has caused the displacement of vine, olive and cork tree cultivation from terraced areas along the Mediterranean basin towards more economically suitable areas. On the one hand, changes in land use and management have entailed sociological changes as well, transforming areas inhabited by agricultural communities into deserted areas where the colonization of disorganized spontaneous vegetation has buried a valuable rural patrimony. On the other hand, the lack of planning and management of the abandoned areas has produced badlands and infertile soils, as wildfires and high erosion rates strongly degrade whole ecosystems. In other cases, a process of soil regeneration has been recorded after land abandonment. Investigations were conducted in a part of NE Spain where extensive areas of previously cultivated terraced soils have been abandoned over the last century. The selected environments were semi-abandoned vineyards, semi-abandoned olive groves, abandoned stands of cork trees, abandoned stands of pine trees, scrubland of Cistaceae, scrubland of Ericaceae, and pasture. The research work focused on the most relevant physical, chemical and biological soil properties, as well as runoff and erosion under soils with different plant cover, to establish the effect of abandonment on soil quality, given the peculiarity and vulnerability of these soils with their much reduced depth. Observations were carried out from autumn 2009 to autumn 2010. The sediment concentration from soil erosion under vines was 34.52 g/l, while under pasture it was 4.66 g/l. In addition, the soil under vines showed the least organic matter, 12 times lower than in all other soil environments. The ratios of carbon dioxide (CO2) and total glomalin (TG) to soil organic carbon (SOC) in this soil were 0.11 and 0.31 respectively. In contrast, the soil under pasture contained a higher amount of organic matter, and its CO2 and TG ratios to SOC were 0.02 and 0.11 respectively, indicating that the soil under pasture better preserves the soil carbon pool. A similar trend was found in the intermediate soils in the sequence of land use change and abandonment. Soil structural stability increased in the two soil fractions investigated (0.25-2.00 mm, 2.0-5.6 mm), especially in soils that did not undergo periodic perturbations such as wildfires. Soil quality indexes were obtained using relevant physical and chemical soil parameters. Factor analysis carried out to study the relationships among all soil parameters made it possible to relate variables and environments and to identify the areas that contribute most to soil quality, as opposed to others that may need more attention to avoid further degradation processes.
Abstract:
Crystal properties, product quality and particle size are determined by the operating conditions in the crystallization process. Thus, in order to obtain desired end-products, the crystallization process should be effectively controlled based on reliable kinetic information, which can be provided by powerful analytical tools such as Raman spectrometry and thermal analysis. The present research work studied various crystallization processes, including reactive crystallization, anti-solvent precipitation and evaporation crystallization. The goal of the work was to understand more comprehensively the fundamentals, phenomena and applications of crystallization, and to establish proper methods to control particle size distribution, especially for three-phase gas-liquid-solid crystallization systems. As part of the solid-liquid equilibrium studies in this work, the prediction of KCl solubility in a MgCl2-KCl-H2O system was studied theoretically. Additionally, a solubility prediction model based on the Pitzer thermodynamic model was investigated using solubility measurements of potassium dihydrogen phosphate in aqueous solutions in the presence of non-electrolyte organic substances. The prediction model helps to extend literature data and offers an easy and economical way to choose a solvent for anti-solvent precipitation. Using experimental and modern analytical methods, precipitation kinetics and mass transfer in the reactive crystallization of magnesium carbonate hydrates from magnesium hydroxide slurry and CO2 gas were systematically investigated. The obtained results gave deeper insight into gas-liquid-solid interactions and the mechanisms of this heterogeneous crystallization process. The research approach developed can provide theoretical guidance and act as a useful reference to promote the development of gas-liquid reactive crystallization. Gas-liquid mass transfer during absorption in the presence of solid particles in a stirred tank was investigated in order to understand how particles of different sizes interact with gas bubbles. Based on the obtained volumetric mass transfer coefficient values, it was found that the influence of small particles on gas-liquid mass transfer cannot be ignored, since there are interactions between bubbles and particles. Raman spectrometry was successfully applied to liquid and solids analysis in semi-batch anti-solvent precipitation and evaporation crystallization. Real-time information such as supersaturation, the formation of precipitates and the identification of crystal polymorphs could be obtained by Raman spectrometry. The solubility prediction models, the monitoring methods for precipitation and the empirical model for absorption developed in this study, together with the methodologies used, give valuable information for industrial crystallization. Furthermore, Raman analysis was seen to be a potential control method for various crystallization processes.
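As background on the volumetric mass transfer coefficient mentioned above, the absorption rate in such gas-liquid(-solid) systems is commonly expressed with the standard two-film relation below; this is textbook notation given only for orientation, not the empirical model developed in the work.

\[
  N_A = k_L a \,\bigl(C^{*} - C_L\bigr),
\]

where \(k_L a\) is the volumetric mass transfer coefficient, \(C^{*}\) the liquid-phase concentration in equilibrium with the gas, and \(C_L\) the bulk liquid concentration; suspended fine particles can alter \(k_L a\) through bubble-particle interactions.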
Abstract:
With advances in information technology, economic and financial time-series data have become increasingly available. However, if standard time-series techniques are used, this wealth of information comes with a dimensionality problem. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has become increasingly popular in economics since the 1990s. Given the availability of data and computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor methodology be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse response analysis? Finally, can factor analysis be applied to random parameters? For example, are there only a small number of sources of the temporal instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA. This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose a factor structure on time-varying parameters and to show that there are only a small number of sources of this instability. The first article analyzes the transmission of monetary policy in Canada using a factor-augmented vector autoregressive (FAVAR) model. Previous VAR-based studies have found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying the transmission of monetary policy and helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse response functions for every indicator in the data set, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model.
We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bills and causes a recession. These shocks have a significant effect on measures of real activity, price indices, leading indicators and financial variables. Unlike other studies, our identification procedure for the structural shock does not require timing restrictions between financial and macroeconomic factors. Moreover, it yields an interpretation of the factors without constraining their estimation. In the third article we study the relationship between the VARMA and factor representations of vector stochastic processes, and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two parameter-reduction methods. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009) respectively. The results show that the VARMA component helps to better forecast the main macroeconomic aggregates relative to standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors yields consistent and precise results on the effects and transmission of monetary policy in the United States. In contrast to the FAVAR model used in that earlier study, where 510 VAR coefficients had to be estimated, we obtain similar results with only 84 parameters of the factors' dynamic process. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment, using a structural FAVARMA model. Within the financial accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of cyclical fluctuations in the Canadian economy. Variance decomposition analysis reveals that this credit shock has a significant effect on different sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the US market.
Finally, given the identification procedure for the structural shocks, we find economically interpretable factors. The behaviour of economic agents and of the economic environment may vary over time (e.g. changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying-parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of time variation in the coefficients is probably very small, and we provide the first known empirical evidence for this in empirical macroeconomic models. The Factor-TVP approach proposed in Stevanovic (2010) is applied within a standard VAR model with random coefficients (TVP-VAR). We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data including the recent financial crisis. The procedure then suggests two factors, and the behaviour of the coefficients shows a marked change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only 5 dynamic factors govern the temporal instability in nearly 700 coefficients.
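For orientation, a FAVAR in the spirit of Bernanke, Boivin and Eliasz (2005), on which the first article builds, can be written as follows; the notation is the standard one from that literature, not a transcription of the thesis's own equations.

\[
  X_t = \Lambda^f F_t + \Lambda^y Y_t + e_t, \qquad
  \begin{pmatrix} F_t \\ Y_t \end{pmatrix}
  = \Phi(L) \begin{pmatrix} F_{t-1} \\ Y_{t-1} \end{pmatrix} + v_t,
\]

where \(X_t\) is the large panel of observed indicators, \(F_t\) a small vector of latent factors, \(Y_t\) the observed policy variable and \(\Phi(L)\) a lag polynomial. The FAVARMA class proposed in the third article replaces the VAR dynamics of \((F_t', Y_t')'\) with a VARMA process, which is where the parameter savings come from.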
Abstract:
In 1967 a novel scheme was proposed for controlling processes with large pure time delay (Fellgett et al., 1967) and some of the constituent parts of the scheme were investigated (Swann, 1970; Atkinson et al., 1973). At that time the available computational facilities were inadequate for the scheme to be implemented practically, but with the advent of modern microcomputers the scheme has become feasible. This paper describes recent work (Mitchell, 1987) on implementing the scheme in a new multi-microprocessor configuration and shows the improved performance it provides compared with conventional three-term controllers.
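As a point of reference for the "conventional three-term controllers" used for comparison, a minimal discrete-time PID loop is sketched below; the gains, sampling interval and toy delayed process are arbitrary illustrations, not the multi-microprocessor scheme described in the paper.

class ThreeTermController:
    """Minimal discrete PID (three-term) controller, for illustration only."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Control signal = proportional + integral + derivative terms.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

if __name__ == "__main__":
    # Toy first-order process with a pure transport delay, showing the setting
    # in which long dead time makes a plain three-term controller struggle.
    from collections import deque
    controller = ThreeTermController(kp=1.2, ki=0.4, kd=0.05, dt=0.1)
    delay_steps = 20                       # large pure time delay
    pipeline = deque([0.0] * delay_steps)  # transport-delay buffer
    y = 0.0
    for _ in range(300):
        u = controller.update(setpoint=1.0, measurement=y)
        pipeline.append(u)
        delayed_u = pipeline.popleft()
        y += 0.1 * (delayed_u - y)         # first-order lag response
    print(f"final output: {y:.3f}")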
Abstract:
When studying hydrological processes with a numerical model, global sensitivity analysis (GSA) is essential if one is to understand the impact of model parameters and model formulation on results. However, different definitions of sensitivity can lead to a difference in the ranking of importance of the different model factors. Here we combine a fuzzy performance function with different methods of calculating global sensitivity to perform a multi-method global sensitivity analysis (MMGSA). We use an application of a finite element subsurface flow model (ESTEL-2D) on a flood inundation event on a floodplain of the River Severn to illustrate this new methodology. We demonstrate the utility of the method for model understanding and show how the prediction of state variables, such as Darcian velocity vectors, can be affected by such a MMGSA. This paper is a first attempt to use GSA with a numerically intensive hydrological model.
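To make the idea of a variance-based global sensitivity measure concrete, the sketch below computes first-order and total Sobol indices for a toy two-parameter function using the SALib package; this is a generic illustration only, not the fuzzy multi-method procedure or the ESTEL-2D application described in the abstract, and the parameter names and bounds are hypothetical.

import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy problem: two hypothetical model parameters with uniform ranges.
problem = {
    "num_vars": 2,
    "names": ["hydraulic_conductivity", "porosity"],
    "bounds": [[1e-5, 1e-3], [0.2, 0.5]],
}

# Saltelli sampling, then evaluation of a stand-in "model"
# (a simple analytic function here instead of a subsurface flow solver).
param_values = saltelli.sample(problem, 1024)
outputs = param_values[:, 0] * 1e4 + np.sin(10 * param_values[:, 1])

# First-order (S1) and total (ST) sensitivity indices for each parameter.
indices = sobol.analyze(problem, outputs)
print(indices["S1"], indices["ST"])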
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)