939 results for Optimal Portfolio Selection
Abstract:
The real Earth is far from an ideal elastic body. The motion of structures or fluids and scattering from thin layers inevitably affect seismic wave propagation, manifesting mainly as non-geometrical energy attenuation. Today, most theoretical research and applications assume that the media studied are fully elastic. Ignoring viscoelasticity can, in some circumstances, cause amplitude and phase distortion, which in turn affects the extraction of the traveltimes and waveforms used in imaging and inversion. To investigate the response of seismic wave propagation and to improve imaging and inversion quality in complex media, we must not only account for the attenuation of real media but also implement it with efficient numerical methods and imaging techniques. In numerical modeling, the most widely used methods, such as finite-difference, finite-element, and pseudospectral algorithms, have difficulty improving accuracy and computational efficiency simultaneously. To partially overcome this difficulty, this paper devises a matrix differentiator method and an optimal convolutional differentiator method based on staggered-grid Fourier pseudospectral differentiation, as well as a staggered-grid optimal Shannon singular-kernel convolutional differentiator derived from distribution theory, which are then used to study seismic wave propagation in viscoelastic media. Comparisons and accuracy analysis demonstrate that the optimal convolutional differentiator methods resolve the trade-off between accuracy and efficiency well and are almost twice as accurate as finite differences of the same operator length. They efficiently reduce dispersion and provide high-precision waveform data.
On the basis of frequency-domain wavefield modeling, we discuss how to solve the resulting linear systems directly and point out that, compared with time-domain methods, frequency-domain methods handle multi-source problems more conveniently and incorporate medium attenuation much more easily. We also demonstrate the equivalence of the time- and frequency-domain methods through numerical tests under assumptions on the non-relaxation modulus and quality factor, and analyze the causes of the observed waveform differences. In frequency-domain waveform inversion, experiments have been conducted with transmission, crosshole, and reflection data. Using the relation between medium scales and characteristic frequencies, we analyze the capacity of sequential frequency-domain inversion to resist noise and to mitigate the non-uniqueness of the nonlinear optimization. In the crosshole experiments, we identify the main sources of inversion error and determine how an incorrect quality factor affects the inverted results. For surface reflection data, several frequencies were chosen with an optimal frequency selection strategy and used to carry out sequential and simultaneous inversions, verifying how important low-frequency data are to the inverted results and the noise resistance of simultaneous inversion. Finally, I draw conclusions about the work in this dissertation and discuss in detail its existing and potential problems. I also point out directions and theories worth pursuing and deepening, which should provide a helpful reference for researchers interested in seismic wave propagation and imaging in complex media.
Abstract:
Overlay networks have been used for adding and enhancing functionality to the end-users without requiring modifications in the Internet core mechanisms. Overlay networks have been used for a variety of popular applications including routing, file sharing, content distribution, and server deployment. Previous work has focused on devising practical neighbor selection heuristics under the assumption that users conform to a specific wiring protocol. This is not a valid assumption in highly decentralized systems like overlay networks. Overlay users may act selfishly and deviate from the default wiring protocols by utilizing knowledge they have about the network when selecting neighbors to improve the performance they receive from the overlay. This thesis goes against the conventional thinking that overlay users conform to a specific protocol. The contributions of this thesis are threefold. It provides a systematic evaluation of the design space of selfish neighbor selection strategies in real overlays, evaluates the performance of overlay networks that consist of users that select their neighbors selfishly, and examines the implications of selfish neighbor and server selection to overlay protocol design and service provisioning respectively. This thesis develops a game-theoretic framework that provides a unified approach to modeling Selfish Neighbor Selection (SNS) wiring procedures on behalf of selfish users. The model is general, and takes into consideration costs reflecting network latency and user preference profiles, the inherent directionality in overlay maintenance protocols, and connectivity constraints imposed on the system designer. Within this framework the notion of user’s "best response" wiring strategy is formalized as a k-median problem on asymmetric distance and is used to obtain overlay structures in which no node can re-wire to improve the performance it receives from the overlay. 
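The "best response" wiring described above is formalized as a k-median problem on asymmetric distances. The following is a minimal, illustrative brute-force sketch, not the thesis's algorithm: it simplifies routing to a single hop through a chosen neighbor, and the function and variable names are assumptions for illustration.

```python
from itertools import combinations

def best_response(dist, node, k):
    """Brute-force k-median best response for `node`: choose k wiring
    targets minimizing the total cost of reaching every other node,
    here simplified to routing through the closest chosen neighbor.
    dist[i][j] is the (possibly asymmetric) distance from i to j."""
    n = len(dist)
    others = [j for j in range(n) if j != node]
    best_cost, best_set = float("inf"), None
    for subset in combinations(others, k):
        # cost to reach each destination via the nearest chosen neighbor
        cost = sum(min(dist[node][s] + dist[s][t] for s in subset)
                   for t in others)
        if cost < best_cost:
            best_cost, best_set = cost, subset
    return best_set, best_cost
```

A stable wiring in the thesis's sense is one where no node can lower its cost by recomputing such a best response; the exhaustive search here is exponential in k and only meant to make the objective concrete.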
Evaluation results presented in this thesis indicate that selfish users can reap substantial performance benefits when connecting to overlay networks composed of non-selfish users. In addition, in overlays that are dominated by selfish users, the resulting stable wirings are optimized to such great extent that even non-selfish newcomers can extract near-optimal performance through naïve wiring strategies. To capitalize on the performance advantages of optimal neighbor selection strategies and the emergent global wirings that result, this thesis presents EGOIST: an SNS-inspired overlay network creation and maintenance routing system. Through an extensive measurement study on the deployed prototype, results presented in this thesis show that EGOIST’s neighbor selection primitives outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, these results demonstrate that EGOIST is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overheads. This thesis also studies selfish neighbor selection strategies for swarming applications. The main focus is on n-way broadcast applications where each of n overlay users wants to push its own distinct file to all other destinations as well as download their respective data files. Results presented in this thesis demonstrate that the performance of our swarming protocol for n-way broadcast on top of overlays of selfish users is far superior to the performance on top of existing overlays. In the context of service provisioning, this thesis examines the use of distributed approaches that enable a provider to determine the number and location of servers for optimal delivery of content or services to its selfish end-users.
To leverage recent advances in virtualization technologies, this thesis develops and evaluates a distributed protocol that migrates servers based on end-user demand and only on local topological knowledge. Results under a range of network topologies and workloads suggest that the performance of the distributed deployment is comparable to that of the optimal but unscalable centralized deployment.
Abstract:
We assess different policies for reducing carbon dioxide emissions and promoting innovation and diffusion of renewable energy. We evaluate the relative performance of policies according to incentives provided for emissions reduction, efficiency, and other outcomes. We also assess how the nature of technological progress through learning and research and development (R&D), and the degree of knowledge spillovers, affects the desirability of different policies. Due to knowledge spillovers, optimal policy involves a portfolio of different instruments targeted at emissions, learning, and R&D. Although the relative cost of individual policies in achieving reductions depends on parameter values and the emissions target, in a numerical application to the U.S. electricity sector, the ranking is roughly as follows: (1) emissions price, (2) emissions performance standard, (3) fossil power tax, (4) renewables share requirement, (5) renewables subsidy, and (6) R&D subsidy. Nonetheless, an optimal portfolio of policies achieves emissions reductions at a significantly lower cost than any single policy. © 2007 Elsevier Inc. All rights reserved.
Abstract:
In this paper we present a novel method for performing speaker recognition with very limited training data and in the presence of background noise. Similarity-based speaker recognition is considered so that speaker models can be created with limited training speech data. The proposed similarity is a form of cosine similarity used as a distance measure between speech feature vectors. Each speech frame is modelled using subband features, and into this framework, multicondition training and optimal feature selection are introduced, making the system capable of performing speaker recognition in the presence of realistic, time-varying noise, which is unknown during training. Speaker identification experiments were carried out using the SPIDRE database. The performance of the proposed new system for noise compensation is compared to that of an oracle model; the speaker identification accuracy for clean speech by the new system trained with limited training data is compared to that of a GMM trained with several minutes of speech. Both comparisons have demonstrated the effectiveness of the new model. Finally, experiments were carried out to test the new model for speaker identification given limited training data and with differing levels and types of realistic background noise. The results have demonstrated the robustness of the new system.
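The abstract describes the similarity measure only as "a form of cosine similarity used as a distance measure". A minimal sketch of the standard cosine similarity and its common conversion to a distance, assuming nothing about the paper's specific variant:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cosine_distance(u, v):
    # A common way to turn the similarity into a distance; the paper's
    # exact form may differ.
    return 1.0 - cosine_similarity(u, v)
```

With subband feature vectors per frame, a speaker score can then be accumulated by averaging such frame-level similarities against each enrolled model.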
Abstract:
A silicon implementation of the Approximate Rotations algorithm capable of carrying the computational load of algorithms such as QRD and SVD, within the real-time realisation of applications such as Adaptive Beamforming, is described. A modification to the original Approximate Rotations algorithm to simplify the method of optimal angle selection is proposed. Analysis shows that fewer iterations of the Approximate Rotations algorithm are required compared with the conventional CORDIC algorithm to achieve similar degrees of accuracy. The silicon design studies undertaken provide direct practical evidence of superior performance with the Approximate Rotations algorithm, requiring approximately 40% of the total computation time of the conventional CORDIC algorithm, for a similar silicon area cost. © 2004 IEEE.
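For orientation, a CORDIC-style micro-rotation uses only shift-and-add operations to rotate by angles arctan(2^-k); the sketch below illustrates this family of rotations with a greedy sign rule, not the paper's modified optimal angle selection, and all names are illustrative.

```python
import math

def micro_rotation(x, y, k, sigma):
    """One micro-rotation by sigma * arctan(2**-k), implemented with a
    shift-and-add structure (the 2**-k factor is a right shift in
    hardware). The result carries a gain of sqrt(1 + 2**(-2*k))."""
    t = 2.0 ** -k
    return x - sigma * y * t, y + sigma * x * t

def rotate_approx(x, y, angle, iters=16):
    """Rotate (x, y) by `angle` using iters micro-rotations, choosing
    each rotation's sign from the residual angle, then compensating
    the accumulated gain."""
    gain = 1.0
    for k in range(iters):
        sigma = 1 if angle >= 0 else -1
        x, y = micro_rotation(x, y, k, sigma)
        angle -= sigma * math.atan(2.0 ** -k)
        gain *= math.sqrt(1 + 2.0 ** (-2 * k))
    return x / gain, y / gain
```

The Approximate Rotations idea in the paper is to get away with fewer such iterations than conventional CORDIC for a comparable accuracy target.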
Abstract:
This paper presents a novel method of audio-visual feature-level fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new multimodal feature representation and a modified cosine similarity are introduced to combine and compare bimodal features with limited training data, as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal dataset created from the SPIDRE speaker recognition database and AR face recognition database with variable noise corruption of speech and occlusion in the face images. The system's speaker identification performance on the SPIDRE database, and facial identification performance on the AR database, is comparable with the literature. Combining both modalities using the new method of multimodal fusion leads to significantly improved accuracy over the unimodal systems, even when both modalities have been corrupted. The new method also shows improved identification accuracy compared with the bimodal systems based on multicondition model training or missing-feature decoding alone.
On the complexity of solving polytree-shaped limited memory influence diagrams with binary variables
Abstract:
Influence diagrams are intuitive and concise representations of structured decision problems. When the problem is non-Markovian, an optimal strategy can be exponentially large in the size of the diagram. We can avoid the inherent intractability by constraining the size of admissible strategies, giving rise to limited memory influence diagrams. A valuable question is then how small strategies need to be to enable efficient optimal planning. Arguably, the smallest strategies one can conceive simply prescribe an action for each time step, without considering past decisions or observations. Previous work has shown that finding such optimal strategies even for polytree-shaped diagrams with ternary variables and a single value node is NP-hard, but the case of binary variables was left open. In this paper we address this case, first noting that optimal strategies can be obtained in polynomial time for polytree-shaped diagrams with binary variables and a single value node. We then show that the same problem is NP-hard if the diagram has multiple value nodes. These two results close the fixed-parameter complexity analysis of optimal strategy selection in influence diagrams parametrized by the shape of the diagram, the number of value nodes, and the maximum variable cardinality.
Abstract:
This paper presents a novel method of audio-visual fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new representation and a modified cosine similarity are introduced for combining and comparing bimodal features with limited training data as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal data set created from the SPIDRE and AR databases with variable noise corruption of speech and occlusion in the face images. The new method has demonstrated improved recognition accuracy.
Abstract:
We extend Cass and Stiglitz’s analysis of preference-based mutual fund separation. We show that high degrees of fund separation can be constructed by adding inverse marginal utility functions exhibiting lower degrees of separation. However, this method does not allow us to find all utility functions satisfying fund separation. In general, we do not know how to write the primal utility functions in these models in closed form, but we can do so in the special case of SAHARA utility defined by Chen et al. and for a new class of GOBI preferences introduced here. We show that there is money separation (in which the riskless asset can be one of the funds) if and only if there is a fund (which may not be the riskless asset) with a constant allocation as wealth changes.
Abstract:
Harry Markowitz's portfolio theory, originally published in 1952 in the Journal of Finance as "Portfolio Selection", developed a general method for solving the portfolio-structure problem that incorporates a quantified treatment of risk. It proposes determining a set of efficient portfolios using only the concepts of the mean for the expected return and the variance (or standard deviation) for the uncertainty associated with that return, hence the name mean-variance analysis for Markowitz's approach. He also drew attention to portfolio diversification, showing how an investor can reduce the standard deviation of portfolio returns by choosing stocks whose movements are not exactly parallel.
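The mean-variance computation behind this can be sketched in a few lines. The two-asset numbers below are invented for illustration; they show the diversification effect the abstract describes: with imperfectly correlated assets, the portfolio's standard deviation falls below the weighted average of the individual deviations.

```python
def portfolio_stats(w, means, cov):
    """Expected return and variance of a portfolio with weights w,
    asset mean returns `means`, and covariance matrix `cov`."""
    n = len(w)
    ret = sum(w[i] * means[i] for i in range(n))
    var = sum(w[i] * w[j] * cov[i][j] for i in range(n) for j in range(n))
    return ret, var

# Hypothetical assets: sigma1 = 0.20, sigma2 = 0.30, correlation 0.1,
# so the off-diagonal covariance is 0.1 * 0.20 * 0.30 = 0.006.
means = [0.08, 0.12]
cov = [[0.04, 0.006],
       [0.006, 0.09]]
ret, var = portfolio_stats([0.5, 0.5], means, cov)
# Portfolio std dev (~0.188) is below the 0.25 weighted average of 0.20 and 0.30.
```

Tracing out such (return, risk) pairs over all admissible weights yields Markowitz's efficient frontier.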
Abstract:
This thesis addresses option pricing and hedging in an exponential Lévy model with regime switching. Such a model is built on a Markov additive process, much as the Black-Scholes model is based on Brownian motion. Because there are several sources of randomness, the market is incomplete, which renders inoperative the theoretical developments initiated by Black, Scholes, and Merton for complete markets. We show in this thesis that results from the theory of Markov additive processes provide solutions to these pricing and hedging problems. In particular, we characterize the martingale measure that minimizes the entropy relative to the historical probability measure; we also derive explicitly, under certain conditions, the optimal portfolio that allows an agent to locally minimize the associated quadratic risk. Furthermore, from a more practical perspective, we characterize the price of a European option as the unique viscosity solution of a system of nonlinear integro-differential equations. This is a first step toward constructing numerical schemes to approximate that price.
Abstract:
Acute lymphoblastic leukemia (ALL) is the most common pediatric cancer. It is the leading cause of cancer-related death in children, owing to a group of patients who do not respond to treatment. Patients may also suffer several toxicities associated with intensive chemotherapy. Pharmacogenetic studies by our group have shown both individual and combined correlations between particular genetic variants of folate-dependent enzymes, especially dihydrofolate reductase (DHFR) and thymidylate synthase (TS), the main targets of methotrexate (MTX), and an elevated risk of relapse in ALL patients. In addition, variations in the ATF5 gene, involved in regulating asparagine synthetase (ASNS), are associated with a higher risk of relapse or with ASNase-dependent toxicity in patients who received E. coli asparaginase (ASNase). The main goal of my thesis project is to better understand, from a functional standpoint, the role of genetic variation in the therapeutic response of ALL patients, focusing on two major components of ALL treatment: MTX and ASNase. My specific objective was to analyze an association found in clinical settings through cell-proliferation assays on lymphoblastoid cell lines (LCLs, n=93) and an ALL xenograft mouse model. A genetic variation in the TS polymorphism (homozygosity for the triple-repeat allele, 3R) and the DHFR *1b haplotype (defined by a particular combination of derived alleles at six polymorphic sites in the major and minor DHFR promoters), and their effects on MTX sensitivity, were evaluated by cell-proliferation assays. Similar in vitro assays of the response to E. coli ASNase assessed the effect of the T1562C variation in the 5'UTR of ATF5 and of particular haplotypes of the ASNS gene (defined by two genetic variations and arbitrarily called haplotype *1). The xenograft mouse model was used to evaluate the effect of the TS 3R3R genotype. Analysis of additional polymorphisms in the ASNS gene revealed a diversification of haplotype *1 into five subtypes defined by two polymorphisms (rs10486009 and rs6971012) correlated with in vitro ASNase sensitivity; one of them (rs10486009) appears particularly important in reducing in vitro ASNase sensitivity and may explain the reduced sensitivity of haplotype *1 in clinical settings. No association between ATF5 T1562C and cell-proliferation assays in response to E. coli ASNase was detected. We did not detect a genotype-related association in the in vitro MTX sensitivity analyses. In contrast, in vivo results from the xenograft mouse model showed a dose-dependent relationship between the TS 3R/3R genotype and resistance to MTX treatment. These results provide an explanation for the significantly higher risk of relapse seen in patients with the TS 3R/3R genotype and suggest that such patients could receive an increased MTX dose. Through these experiments, we also demonstrated that xenograft mouse models can serve as a preclinical tool to explore individualized treatment options. In conclusion, the knowledge gained through my thesis project has confirmed and/or identified several variants in the MTX and ASNase pathways that could facilitate dose-individualization strategies, allowing selection of an optimal treatment or modulation of therapy based on individual genetics.
Abstract:
In this thesis, we study some fundamental problems in financial and actuarial mathematics, along with their applications. The thesis consists of three contributions, dealing mainly with risk-measure theory, the capital-allocation problem, and fluctuation theory. In Chapter 2, we construct new coherent risk measures and study capital allocation within collective risk theory. To do so, we introduce the family of Cumulative Entropic Risk Measures. Chapter 3 studies the optimal portfolio problem for the Entropic Value at Risk when returns are modeled by a jump-diffusion process. In Chapter 4, we generalize the notion of natural risk statistics to the multivariate setting. This non-trivial extension produces multivariate risk measures built from financial and insurance data. Chapter 5 introduces the concepts of drawdown and speed of depletion into ruin theory. We study these concepts for risk models described by a family of spectrally negative Lévy processes.
Abstract:
This document explains the role of Colombian insurance companies within the pension system and seeks, through an understanding of the evolution of the macroeconomic environment and the regulatory framework, to identify the challenges they face. Three challenges are discussed: the profitability challenge, the challenge posed by relatively frequent regulatory changes, and the asset-liability matching ("calce") challenge. The document focuses mainly on the profitability challenge and develops an efficient-frontier exercise using expected returns calculated with Damodaran's (2012) methodology. The results support the idea that expected returns will indeed be lower for any level of risk, and suggest that, in such a scenario, relaxing the restrictions imposed by the investment regime could ease insurers' concerns in this regard. Alternatives are also suggested for the other two challenges: algorithmic trading for the challenge imposed by regulatory changes, and public-private partnerships to address the matching challenge.
Abstract:
The WACC, or weighted average cost of capital, is the rate at which cash flows should be discounted to evaluate a project or company. Calculating this rate requires determining the company's cost of debt and cost of equity; the cost of debt is the current market rate the company is paying on its debt, whereas the cost of equity can be harder and more complex to estimate because there is no explicit cost. This work surveys the theories proposed throughout history for calculating the cost of equity. As a particular case, the unlevered cost of equity is estimated for six unlisted French companies in the personal-services (SAP) sector. To do so, the Analytic Hierarchy Process (AHP) and the Capital Asset Pricing Model (CAPM) are used, based on the approach presented by Martha Pachón (2013) in "Modelo alternativo para calcular el costo de los recursos propios".
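The standard WACC and CAPM formulas referred to above can be written out directly; the numbers in the usage comment are invented for illustration.

```python
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    """Weighted average cost of capital: each financing source is
    weighted by its share of total value, and the cost of debt is
    taken net of the tax shield."""
    v = equity + debt
    return (equity / v) * cost_equity + (debt / v) * cost_debt * (1 - tax_rate)

def capm_cost_of_equity(rf, beta, market_return):
    """CAPM: required equity return = risk-free rate plus beta times
    the market risk premium."""
    return rf + beta * (market_return - rf)

# Illustrative figures: rf = 3%, beta = 1.2, market return = 8%
# gives a 9% cost of equity; with 60/40 equity/debt, 5% debt cost,
# and a 30% tax rate, the WACC is 6.8%.
re = capm_cost_of_equity(0.03, 1.2, 0.08)
rate = wacc(600, 400, re, 0.05, 0.30)
```

For the unlevered case studied in the document, the CAPM beta would be an unlevered beta, so no debt-financing term enters the discount rate.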