726 results for Subjective expected utility
Abstract:
In this paper we study the dynamic hedging problem under three different utility specifications: stochastic differential utility, terminal wealth utility, and a particular utility transformation that we propose to connect the two previous approaches. In all cases we assume Markovian prices. Stochastic differential utility (SDU) affects the pure hedging demand ambiguously but decreases the pure speculative demand, because risk aversion increases. We also show that the consumption decision is, in a certain sense, independent of the hedging decision. With terminal wealth utility we derive a general and compact hedging formula, which nests as special cases all those studied in Duffie and Jackson (1990), and we show how to recover their formulas. With the third approach we find a compact hedging formula that makes the terminal-wealth framework a particular case, and we show that the pure hedging demand is not affected by this specification. In addition, with CRRA- and CARA-type utilities, risk aversion increases and, consequently, the pure speculative demand decreases. If futures prices are martingales, the transformation plays no role in determining the hedging allocation. We also derive the relevant Bellman equation for each case, using semigroup techniques.
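As a hedged sketch of the decomposition these results refer to (notation assumed here, not the paper's exact formula): with Markovian futures prices and a locally mean-variance objective with effective risk aversion $\gamma$, the optimal futures position typically splits into a pure speculative term driven by the futures drift and a pure hedging term driven by the covariance with the spot exposure,

$$
\theta_t \;=\; \underbrace{\frac{\mu_{F,t}}{\gamma\,\sigma_{F,t}^2}}_{\text{pure speculative demand}} \;-\; \underbrace{\frac{\sigma_{SF,t}}{\sigma_{F,t}^2}}_{\text{pure hedging demand}},
$$

where $\mu_{F,t}$ and $\sigma_{F,t}^2$ are the drift and variance of the futures price and $\sigma_{SF,t}$ is its covariance with the spot exposure. Raising the effective risk aversion $\gamma$ (as under SDU or the proposed transformation) shrinks the speculative term while leaving the pure hedge untouched, and when futures prices are martingales ($\mu_{F,t}=0$) only the pure hedge survives, consistent with the claims above.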
Abstract:
In a two-period economy with incomplete markets and the possibility of default, we consider the two classical ways of enforcing the honoring of financial commitments: utility penalties and collateral requirements that borrowers must fulfill. First, we prove that any equilibrium of an economy with collateral requirements is also an equilibrium of a non-collateralized economy in which each agent is penalized (rewarded) in his utility if his delivery rate is lower (greater) than the payment rate of the financial market. Second, we prove the converse: any equilibrium of an economy with utility penalties is also an equilibrium of a collateralized economy. For this to hold, the payoff function and the initial endowments of the agents must be modified in a quite natural way. Finally, we prove that the equilibrium of the economy with collateral requirements attains the same welfare as the new economy with utility penalties.
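A minimal sketch of the utility-penalty device described above, with notation assumed here rather than taken from the paper: agent $h$'s payoff is adjusted linearly in the gap between his delivery rate $d^h$ and the market payment rate $\bar d$,

$$
U^h \;=\; u^h(x^h)\;-\;\lambda^h\,(\bar d - d^h)\,\varphi^h,
$$

where $\varphi^h \ge 0$ is the agent's short position and $\lambda^h > 0$ the penalty intensity; the adjustment is a penalty when $d^h < \bar d$ and a reward when $d^h > \bar d$, exactly the asymmetry stated above.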
Abstract:
This thesis consists of three essays. The first essay analyzes the public information available on the credit portfolio risk of Brazilian banks and is divided into two chapters. The first examines the limitations of the public information disclosed by banks and by the Central Bank when compared with the managerial information available internally to banks. It concludes that there is room for greater transparency in disclosure, something that has been occurring gradually in Brazil through new rules related to Pillar 3 of Basel II and to the release of more detailed information by the Central Bank (Bacen), such as the “Top50” reports. The second part of the first essay shows the discrepancy between the accounting non-performing loan ratio (NPL) and the probability of default (PD) and also discusses the relationship between provisions and expected losses. Using migration matrices and a simulation based on overlapping vintages of the credit portfolios of large banks, it concludes that the NPL ratio underestimates the PD and that the provisions set aside by banks are lower than the expected loss of the Brazilian financial system (SFN). The second essay relates risk management to price discrimination. A model was developed consisting of a Cournot duopoly in a retail credit market in which banks may engage in third-degree price discrimination. In this model, potential borrowers may be of two types, low risk or high risk, with low-risk borrowers having more elastic demand. According to the model, if the cost of observing a customer's type is high, the banks' strategy is not to discriminate (a pooling equilibrium). But if this cost is sufficiently low, it is optimal for banks to charge different rates to each group. It is argued that the Basel II Accord acted as an exogenous shock that shifted the equilibrium toward greater discrimination. The third essay is divided into two chapters. The first discusses the application of the concepts of subjective probability and Knightian uncertainty to VaR models and the importance of assessing “model risk”, which comprises estimation, specification and identification risks. The essay proposes that the “four elements” methodology of operational risk (internal data, external data, business environment and scenarios) be extended to the measurement of other risks (market risk and credit risk). The second part of this last essay deals with the application of the scenario-analysis element to the measurement of conditional volatility on dates of relevant economic announcements, specifically on the days of Copom meetings.
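As an illustration of the discrimination logic in the second essay (a textbook sketch under assumed notation, not the thesis's own derivation): when discriminating, an oligopolistic lender prices each group by an inverse-elasticity markup rule,

$$
\frac{r_i - c_i}{r_i} \;=\; \frac{s}{\varepsilon_i}, \qquad i \in \{\text{low risk},\ \text{high risk}\},
$$

where $r_i$ is the rate charged to group $i$, $c_i$ the marginal cost of lending to it (funding plus expected loss), $\varepsilon_i$ its demand elasticity, and $s$ the bank's Cournot market share. Since low-risk borrowers have more elastic demand ($\varepsilon_{\text{low}} > \varepsilon_{\text{high}}$), they receive the smaller markup; discrimination is adopted only when the extra profit it generates exceeds the cost of observing a customer's type, which is the pooling-versus-separating trade-off described above.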
Abstract:
This paper investigates the role of the consumption-wealth ratio in predicting future stock returns through a panel approach. We follow the theoretical framework proposed by Lettau and Ludvigson (2001), in which a model derived from a nonlinear consumer's budget constraint establishes the link between the consumption-wealth ratio and stock returns. Using the G7's quarterly aggregate and financial data from the first quarter of 1981 to the first quarter of 2014, we build an unbalanced panel that we use both for estimating the parameters of the cointegrating residual from the shared trend among consumption, asset wealth and labor income, cay, and for running in-sample and out-of-sample forecasting regressions. Owing to the panel structure, we propose methodologies for estimating cay and producing forecasts that differ from those applied by Lettau and Ludvigson (2001). The results indicate that cay is indeed a strong and robust predictor of future stock returns at intermediate and long horizons, but performs poorly in predicting one- or two-quarter-ahead stock returns.
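For reference, the cointegrating residual used above is, following Lettau and Ludvigson (2001) and up to a constant, the deviation of log consumption from its shared trend with log asset wealth and log labor income:

$$
cay_t \;\equiv\; c_t - \beta_a\, a_t - \beta_y\, y_t,
$$

where $c_t$, $a_t$ and $y_t$ denote log consumption, asset wealth and labor income, and $\beta_a$, $\beta_y$ are the estimated cointegrating parameters (in the underlying theory they approximate the steady-state shares of asset and human wealth in total wealth).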
Abstract:
Using the theoretical framework of Lettau and Ludvigson (2001), we investigate empirically how widespread the predictability of cay (a modified consumption-wealth ratio) is once we consider a set of important countries from a global perspective. We work with the G7 countries, which represent more than 64% of net global wealth and 46% of global GDP at market exchange rates. We evaluate the forecasting performance of cay using a panel-data approach, since applying cointegration and other time-series techniques is now standard practice in the panel-data literature. Hence, we generalize Lettau and Ludvigson's tests to a panel of important countries. We employ macroeconomic and financial quarterly data for the group of G7 countries, forming an unbalanced panel. For most countries data are available from the early 1990s until 2014Q1, but for the U.S. economy they are available from 1981Q1 through 2014Q1. The results of an exhaustive empirical investigation are overwhelmingly in favor of the predictive power of cay in forecasting future stock returns and excess returns.
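The predictability claims above are read off $h$-horizon forecasting regressions of the Lettau and Ludvigson type; a generic panel version (notation assumed here) is

$$
r_{i,\,t \rightarrow t+h} \;=\; \alpha_i + \beta_h\, cay_{i,t} + \epsilon_{i,\,t+h},
$$

where $r_{i,\,t \rightarrow t+h}$ is country $i$'s cumulative (excess) stock return over horizon $h$ and $\alpha_i$ a country fixed effect; the significance of $\beta_h$, in sample and out of sample, is what the panel tests evaluate at each horizon.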
Abstract:
In the last decade, mobile wireless communications have witnessed explosive growth in user penetration rates and widespread deployment around the globe. This tendency is expected to continue with the convergence of fixed wired Internet networks with mobile ones and with the evolution to the full-IP architecture paradigm. Mobile wireless communications will therefore be of paramount importance to the development of the information society of the near future. A research topic of particular relevance in telecommunications nowadays is the design and implementation of 4th-generation (4G) mobile communication systems. 4G networks will be characterized by the support of multiple radio access technologies in a core network fully compliant with the Internet Protocol (the all-IP paradigm). Such networks will sustain the stringent quality of service (QoS) requirements and the high data rates expected from the type of multimedia applications that will be available in the near future. The approach followed in the design and implementation of current-generation (2G and 3G) mobile wireless networks has been the stratification of the architecture into a communication protocol model composed of a set of layers, each encompassing some set of functionalities. In such a layered protocol model, communication is only allowed between adjacent layers and through specific service interface points. This modular concept eases the implementation of new functionalities, as the behaviour of each layer in the protocol stack is not affected by the others. However, the fact that lower layers in the protocol stack do not use information available from upper layers, and vice versa, degrades the achievable performance. This is particularly relevant if multiple-antenna systems, in a MIMO (Multiple Input Multiple Output) configuration, are implemented. MIMO schemes introduce another degree of freedom for radio resource allocation: the space domain. Contrary to the time and frequency domains, radio resources mapped into the spatial domain cannot be assumed to be completely orthogonal, owing to the interference resulting from users transmitting in the same frequency sub-channel and/or time slots but in different spatial beams. Therefore, the availability of information regarding the state of radio resources, from lower to upper layers, is of fundamental importance in sustaining the QoS levels expected by those multimedia applications. In order to match application requirements to the constraints of the mobile radio channel, researchers have in the last few years proposed a new paradigm for the layered communications architecture: the cross-layer design framework. In general terms, the cross-layer design paradigm refers to a protocol design in which the dependence between protocol layers is actively exploited, breaking the strict rules that restrict communication to adjacent layers in the original reference model and allowing direct interaction among different layers of the stack. Efficient management of the set of available radio resources demands efficient, low-complexity packet schedulers that prioritize users' transmissions according to inputs provided by lower as well as upper layers of the protocol stack, fully in line with the cross-layer design paradigm.
Specifically, efficiently designed packet schedulers for 4G networks should maximize the available capacity while taking into account the limitations imposed by the mobile radio channel and complying with the set of QoS requirements from the application layer. The IEEE 802.16e standard, also known as Mobile WiMAX, appears to comply with the specifications of 4G mobile networks. Its scalable architecture, low-cost implementation and high data throughput enable efficient data multiplexing and low data latency, attributes essential to broadband data services. Moreover, the connection-oriented approach of its medium access layer is fully compliant with the quality-of-service demands of such applications. Mobile WiMAX therefore seems a promising candidate for 4G mobile wireless networks. This thesis proposes the investigation, design and implementation of packet scheduling algorithms for the efficient management of the set of available radio resources, in the time, frequency and spatial domains, of Mobile WiMAX networks. The proposed algorithms combine input metrics from the physical layer with QoS requirements from upper layers, according to the cross-layer design paradigm. The proposed schedulers are evaluated by means of system-level simulations, conducted on a system-level simulation platform implementing the physical and medium access control layers of the IEEE 802.16e standard.
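A hedged illustration of the kind of cross-layer metric such schedulers combine (a generic sketch, not the thesis's algorithms; all names and parameters below are illustrative): a proportional-fair core fed by physical-layer rate feedback, scaled by a delay-urgency factor derived from upper-layer QoS budgets.

```python
from dataclasses import dataclass

@dataclass
class Connection:
    inst_rate: float      # achievable rate from PHY feedback (bits/s)
    avg_rate: float       # exponentially averaged served rate (bits/s)
    hol_delay: float      # head-of-line packet delay at the MAC queue (s)
    delay_budget: float   # QoS delay bound of the service class (s)

def priority(c: Connection, beta: float = 1.0) -> float:
    """Cross-layer priority: proportional-fair ratio (PHY/MAC state)
    scaled by a delay-urgency factor (upper-layer QoS state)."""
    pf = c.inst_rate / max(c.avg_rate, 1e-9)          # channel-aware fairness
    urgency = (c.hol_delay / c.delay_budget) ** beta  # grows as deadline nears
    return pf * (1.0 + urgency)

def schedule(conns: list[Connection]) -> int:
    """Pick the connection index served in the next slot."""
    return max(range(len(conns)), key=lambda i: priority(conns[i]))

# usage: the delay-pressed connection wins despite a worse channel
conns = [Connection(2e6, 1e6, 0.005, 0.100),
         Connection(1.2e6, 1e6, 0.095, 0.100)]
print(schedule(conns))  # -> 1
```

The design point is the one argued above: the metric only works because PHY state (inst_rate) and application-layer QoS state (delay_budget) meet in a single priority function, an interaction the strict layered model would forbid.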
Abstract:
Technological innovation promotes the generation of economic value by creating a new product, process or organizational management model, and it is dynamic and multidimensional in nature. Government intervention, through grant programs, fosters the adoption of innovative processes in small companies, which face high development costs and risks, thereby strengthening the country's economy. The distribution of such grants is determined by criteria based largely on subjective judgments, grounded in beliefs and perceptions about the technological opportunities and the market actors involved in the process, which makes it very difficult to measure the probability of success of the project under evaluation. This study aims to identify the most relevant selection criteria to be incorporated into the grant programs of Rio Grande do Norte executed by the Fundação de Pesquisa do Rio Grande do Norte (FAPERN). Initially, programs from 18 countries were systematized, covering 41 programs abroad and 29 in Brazil. Based on the data collected, we conducted a survey covering four FAPERN programs (INOVA I, INOVA II, INOVA III and INOVA IV) and 44 companies, analyzing the responses on a Likert scale to obtain the degree of importance assigned by each respondent to each criterion in the questionnaire. As a result, a proposal containing 13 new criteria was drawn up for use in FAPERN's next grant calls. It is hoped that this will contribute to better spending of the public funds invested in subsidized companies in Brazil.
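A minimal sketch of how Likert responses of this kind can be turned into a criteria ranking (the criterion names and scores below are illustrative, not FAPERN's data):

```python
# Rank selection criteria by mean Likert score (1 = unimportant, 5 = essential).
responses = {                      # criterion -> scores from surveyed companies
    "technical feasibility": [5, 4, 5, 4, 5],
    "market potential":      [4, 4, 3, 5, 4],
    "team qualification":    [3, 4, 4, 3, 4],
}
ranking = sorted(responses, key=lambda c: sum(responses[c]) / len(responses[c]),
                 reverse=True)
for c in ranking:
    print(f"{c}: {sum(responses[c]) / len(responses[c]):.2f}")
```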
Abstract:
Knowledge management has received major attention from product designers because many of the activities within this process have to be creative and therefore depend essentially on the knowledge of the people involved. Moreover, the Product Development Process (PDP) is one of the activities in which knowledge management manifests itself most critically, given its intensive application of knowledge. This thesis therefore analyzes knowledge management with the aim of improving the PDP and proposes a theoretical model of knowledge management. The model comprises five steps (creation, maintenance, dissemination, utilization and discard) and verifies the occurrence of four types of knowledge conversion (socialization, externalization, combination and internalization) in order to improve knowledge management in this process. The intellectual capital of Small and Medium Enterprises (SMEs), managed efficiently and with the participation of all employees, becomes the mechanism of the knowledge creation and transfer processes, supporting and consequently improving the PDP. The expected result is an effective and efficient application of the proposed model for the creation of a knowledge base within an organization (organizational memory), aiming at better PDP performance. To this end, an extensive analysis of knowledge management (an instrument of qualitative and subjective evaluation) was carried out within the Design department of a Brazilian organization (SEBRAE/RN). This analysis aimed to establish the state of the art of the Design department regarding the use of knowledge management. This step was important in order to evaluate the department's level of evolution in the practical use of knowledge management before implementing the proposed theoretical model and its methodology. Finally, based on the results of the diagnosis, a knowledge management system is suggested to facilitate knowledge sharing within the organization, in other words, the Design department.
Abstract:
The increase in survival time and cure rates requires more extensive attention to the quality of life of cancer patients, beginning soon after diagnosis. It therefore seems reasonable to emphasize the development of studies covering psychosocial variables of childhood cancer treatment, such as stigma, thereby attending to the overall needs of the child. This research investigates the perception of stigma and the quality of life of children with cancer. It is a cross-sectional, descriptive study with a convenience sample, consisting of thirty children with cancer and thirty children without chronic disease. The instruments used were the Quality of Life Questionnaire, the Perceived Stigma Scale and the Drawing-Story with a Theme technique. The results indicate that the chronic condition did not interfere significantly with satisfaction with quality of life among the children with cancer, and that quality of life is not related to stigma. Comparing the children without chronic disease with the children with cancer, no significant differences were observed. However, the mean of the cancer group was lower, suggesting greater impairment in the quality of life of children with cancer compared with those without chronic disease. It is worth noting that the psychosocial effects and the limitations imposed by the disease and its treatment appear as important factors in the way children with cancer express their subjective experience. It is therefore hoped that the knowledge elucidated by this study will contribute greatly to promoting improved emotional, biological and social development and the engagement of children with cancer in their treatment.
Abstract:
The process of choosing the best components to build systems has become increasingly complex. It becomes even more critical when many combinations of components must be considered in the context of an architectural configuration. These circumstances occur mainly in systems involving critical requirements, such as timing constraints in distributed multimedia systems, network bandwidth in mobile applications or reliability in real-time systems. This work proposes a process for the dynamic selection of architectural configurations based on the system's non-functional requirements, which can be used during a dynamic adaptation. The proposal uses Multi-Attribute Utility Theory (MAUT) for decision making over a finite set of possibilities involving multiple criteria to be analyzed. Additionally, a metamodel is proposed to describe the application's requirements in terms of non-functional criteria and their expected values, expressing them so that the desired configuration can be selected. As a proof of concept, a module that performs the dynamic choice of configurations, MoSAC, was implemented following a component-based development (CBD) approach, selecting architectural configurations through the proposed multi-criteria selection process. This work also presents a case study in which an application was developed in the context of Digital TV to evaluate the time the module takes to return a valid configuration to be used in a middleware with self-adaptive features, the AdaptTV middleware.
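A minimal sketch of the MAUT-style selection such a module performs (illustrative criteria, weights and names; not MoSAC's actual code): each candidate configuration is scored by a weighted additive utility over normalized non-functional criteria, and the highest-scoring configuration is returned.

```python
# Weighted additive MAUT: score = sum_i w_i * u_i(value_i), weights sum to 1.
def normalize(value, worst, best):
    """Map a raw criterion value to a [0, 1] utility (higher is better)."""
    return (value - worst) / (best - worst)

def maut_score(config, criteria):
    return sum(w * normalize(config[name], worst, best)
               for name, (w, worst, best) in criteria.items())

# illustrative criteria: (weight, worst acceptable value, best expected value)
criteria = {
    "bandwidth_mbps": (0.5, 1.0, 10.0),
    "latency_score":  (0.3, 0.0, 1.0),   # pre-inverted so higher is better
    "reliability":    (0.2, 0.9, 1.0),
}
configs = [
    {"bandwidth_mbps": 8.0, "latency_score": 0.6, "reliability": 0.95},
    {"bandwidth_mbps": 4.0, "latency_score": 0.9, "reliability": 0.99},
]
best = max(configs, key=lambda c: maut_score(c, criteria))
print(best)  # the bandwidth-heavy configuration wins under these weights
```

The additive form is the standard MAUT simplification (it assumes preferential independence of the criteria); richer multiplicative forms exist but the selection loop is the same.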
Abstract:
Objective: To evaluate the influence of alternative erasing times of DenOptix® (Dentsply/Gendex, Chicago, IL) digital plates on subjective image quality and the probability of a double-exposure image not occurring.
Methods: Human teeth were X-rayed with phosphor plates using ten different erasing times. Two observers evaluated the images for subjective image quality (sharpness, brightness, contrast, enamel definition, dentin definition and dentin-enamel junction definition) and for the presence or absence of a double-exposure image. Spearman's correlation analysis and ANOVA were performed to verify the existence of a linear association between the subjective image quality parameters and the alternative erasing times. A contingency table was constructed to evaluate the agreement among the observers, and a binomial logistic regression was performed to verify the correlation between erasing time and the probability of a double-exposure image not occurring.
Results: All 6 image quality parameters were rated high by the examiners for erasing times between 25 s and 130 s. The same range, from 25 s to 130 s, was considered a safe erasing interval, with no probability of a double-exposure image occurring.
Conclusions: The alternative erasing times from 25 s to 130 s showed high quality and no probability of double-image occurrence. Thus, it is possible to reduce the operating time of the DenOptix® digital system without jeopardizing the diagnostic task. Dentomaxillofacial Radiology (2010) 39, 23-27. doi: 10.1259/dmfr/49065239.
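A hedged sketch of the safe-interval step (synthetic data under an assumed response curve; not the study's measurements): a binomial logistic regression of double-image absence on erasing time yields the probability curve from which a safe range such as 25-130 s can be read.

```python
import numpy as np
import statsmodels.api as sm

# synthetic illustration: erasing time (s) vs. absence of a double image (1 = absent)
rng = np.random.default_rng(0)
time_s = np.repeat([5, 10, 15, 20, 25, 40, 70, 100, 130], 10).astype(float)
p_true = 1 / (1 + np.exp(-(time_s - 18) / 6))   # assumed underlying curve
no_double = rng.binomial(1, p_true)

fit = sm.Logit(no_double, sm.add_constant(time_s)).fit(disp=0)
pred = fit.predict(sm.add_constant(np.array([15.0, 25.0, 130.0])))
print(pred)  # estimated probabilities approach 1 for times of 25 s and above
```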