896 results for Information Search and Filtering
                                
Abstract:
Purpose – Unlike the existing literature, which focuses solely on voluntary earnings forecasts and ex post earnings surprises, this paper aims to compare the effects of mandatory earnings surprise warnings and voluntary information disclosure issued by management teams on financial analysts, in terms of the number of analysts following a firm and the accuracy of their earnings forecasts. Design/methodology/approach – This paper uses panel data analysis with fixed effects on data collected from Chinese public firms between 2006 and 2010. It uses an exogenous regulation enforcement to minimise the endogeneity problem. Findings – This paper finds that financial analysts are less likely to follow firms that mandatorily issue earnings surprise warnings ex ante than firms that voluntarily issue earnings forecasts. Moreover, ex post, they issue less accurate and more dispersed forecasts for the former firms. The results support Brown et al.'s (2009) findings in the USA and suggest that earnings surprise warnings affect information asymmetries. Practical implications – This paper justifies the mandatory earnings surprise warning policy issued by the China Securities Regulatory Commission in 2006. Originality/value – The mandatory earnings surprise warning is a regulatory practice unique to publicly listed firms in China. This paper provides, for the first time, an empirical evaluation of the effectiveness of a mandatory information disclosure policy in China. Consistent with the existing literature on information disclosure by public firms in other countries, this paper finds that, in China, voluntary information disclosure captures more private information about corporate earnings ability than mandatory information disclosure.
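A minimal sketch of the kind of two-way fixed-effects panel regression the abstract describes, relating forecast accuracy to a mandatory-warning indicator. The data file, column names, and controls are illustrative assumptions, not the paper's actual dataset or specification.

```python
# Sketch of a fixed-effects panel regression: analyst forecast accuracy
# on a firm-year indicator for mandatory earnings surprise warnings.
# All file and column names (analyst_panel.csv, accuracy, mandatory,
# size, leverage) are hypothetical.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("analyst_panel.csv")
df = df.set_index(["firm", "year"])   # entity and time index for the panel

# Firm effects absorb time-invariant firm traits; time effects absorb
# market-wide shocks in each year of the 2006-2010 window.
model = PanelOLS.from_formula(
    "accuracy ~ mandatory + size + leverage + EntityEffects + TimeEffects",
    data=df,
)
res = model.fit(cov_type="clustered", cluster_entity=True)
print(res.summary)
```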
                                
Abstract:
We examine the impact of accounting quality, used as a proxy for information risk, on the behavior of equity implied volatility around quarterly earnings announcements. Using US data during 1996–2010, we observe that lower (higher) accounting quality significantly relates to higher (lower) levels of implied volatility (IV) around announcements. Worse accounting quality is further associated with a significant increase in IV before announcements, and is found to relate to a larger resolution in IV after the announcement has taken place. We interpret our findings as indicative of information risk having a significant impact on implied volatility behavior around earnings announcements.
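A sketch of the event-study logic behind this design: average implied volatility in event time around each announcement, split by an accounting-quality proxy. The column names and the median split are assumptions for illustration, not the study's actual procedure.

```python
# Sketch: average implied volatility (IV) in event time around earnings
# announcements, split by an accounting-quality proxy. Column names
# (date, ann_date, iv, quality) are illustrative assumptions.
import pandas as pd

df = pd.read_csv("iv_panel.csv", parse_dates=["date", "ann_date"])
df["event_day"] = (df["date"] - df["ann_date"]).dt.days
window = df[df["event_day"].between(-10, 10)].copy()

# Median split into low/high accounting quality.
window["group"] = pd.qcut(window["quality"], 2, labels=["low", "high"])
path = window.groupby(["group", "event_day"])["iv"].mean().unstack(0)
print(path)  # expect higher, more steeply rising IV for the low-quality group
```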
                                
                                
Abstract:
This presentation was offered as part of the CUNY Library Assessment Conference, Reinventing Libraries: Reinventing Assessment, held at the City University of New York in June 2014.
                                
Abstract:
The paper investigates which of Shannon’s measures (entropy, conditional entropy, mutual information) is the right one for the task of quantifying information flow in a programming language. We examine earlier relevant contributions from Denning, McLean and Gray and we propose and motivate a specific quantitative definition of information flow. We prove results relating equivalence relations, interference of program variables, independence of random variables and the flow of confidential information. Finally, we show how, in our setting, Shannon’s Perfect Secrecy theorem provides a sufficient condition to determine whether a program leaks confidential information.
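A small sketch (not the paper's formalism) of the quantity at stake: the mutual information between a uniformly distributed confidential input and the public output of a toy program, computed as I(S; O) = H(O) − H(O | S). The toy program is an invented example.

```python
# Sketch: Shannon mutual information I(S; O) between a uniform secret S
# and the observable output O of a toy program. For a deterministic
# program, H(O | S) = 0, so the leakage reduces to H(O).
from collections import Counter
from math import log2

def program(secret: int) -> int:
    return secret % 4          # toy program: leaks only the low two bits

secrets = range(256)           # S uniform on 0..255 (8 bits of secret)
outputs = [program(s) for s in secrets]

counts = Counter(outputs)
n = len(outputs)
h_output = -sum((c / n) * log2(c / n) for c in counts.values())

# Deterministic program => I(S; O) = H(O).
print(f"I(S; O) = {h_output:.2f} bits")   # 2.00 bits leaked out of 8
```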
                                
Abstract:
In the past few years, libraries have started to design public programs that educate patrons about different tools and techniques to protect personal privacy. But do end-user solutions provide adequate safeguards against surveillance by corporate and government actors? What does a comprehensive plan for privacy entail if libraries are to live up to their privacy values? In this paper, the authors discuss the complexity of the surveillance architecture that the library institution might confront when seeking to defend the privacy rights of patrons. This architecture consists of three main parts: physical or material aspects, logical characteristics, and social factors of information and communication flows in the library setting. For each category, the authors present short case studies culled from practitioner experience, research, and public discourse. The case studies probe the challenges faced by the library, not only when making hardware and software choices but also when making choices related to staffing and program design. The paper shows that privacy choices intersect not only with free speech and chilling effects, but also with questions that concern intellectual property, organizational development, civic engagement, technological innovation, public infrastructure, and more. The paper ends with a discussion of what libraries will require in order to sustain and improve efforts to serve as stewards of privacy in the 21st century.
                                
Abstract:
GCM outputs such as CMIP3 are available via network access to the PCMDI web site. Meteorological researchers are familiar with the use of GCM data, but most researchers in other fields, such as agriculture and civil engineering, as well as the general public, are not familiar with GCMs. There are some difficulties in using GCM data: 1) downloading the enormous quantity of data, and 2) understanding the GCM methodology, parameters, and grids. In order to provide quick access to GCM output, the Climate Change Information Database has been developed. The purpose of the database is to bridge users and meteorological specialists and to facilitate understanding of climate change. The resolution of the data is unified, and the amount of climate change or the relevant factors for each meteorological element are provided by the database. All data in the database are interpolated onto the same 80 km mesh. Available data are the present-future projections of 27 GCMs, 16 meteorological elements (precipitation, temperature, etc.), and 3 emission scenarios (A1B, A2, B1). We showed a summary of this database to residents of Toyama prefecture and, using an Internet questionnaire survey, measured the effect of the presentation and assessed their image of climate change. People who perceive climate change at present tend to expect additional changes in the future. It is important to show citizens the monitoring results of climate change and to promote understanding of the climate change that has already occurred. It has been shown that general images of climate change promote understanding of the need for mitigation, and that, in order for people to widely recognize the need for adaptation, it is important to explain climate change that might occur in the future even if it has not yet occurred at present.
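A sketch of the unification step the abstract describes: interpolating a GCM field from its native grid onto a common target mesh. The grid sizes, the dummy field, and the use of scipy's RegularGridInterpolator are assumptions; the database's actual interpolation scheme is not specified here.

```python
# Sketch: regrid a GCM field from its native lat/lon grid onto a common
# target mesh, analogous to the database's unification onto one 80 km grid.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Native GCM grid (coarse, ~2.5 degree resolution) with a dummy field.
lat = np.linspace(-90, 90, 73)
lon = np.linspace(0, 357.5, 144)
field = np.random.rand(lat.size, lon.size)   # stand-in for temperature, etc.

interp = RegularGridInterpolator((lat, lon), field, method="linear")

# Common target mesh (finer, standing in for the 80 km mesh).
tlat = np.linspace(-90, 90, 225)
tlon = np.linspace(0, 357.5, 448)
pts = np.array(np.meshgrid(tlat, tlon, indexing="ij")).reshape(2, -1).T
regridded = interp(pts).reshape(tlat.size, tlon.size)
print(regridded.shape)                        # (225, 448)
```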
                                
Abstract:
This thesis provides three original contributions to the field of Decision Sciences. The first contribution explores the field of heuristics and biases. New variations of the Cognitive Reflection Test (CRT, a test to measure "the ability or disposition to resist reporting the response that first comes to mind") are provided. The original CRT (S. Frederick [2005] Journal of Economic Perspectives, v. 19:4, pp. 24-42) has items in which the response is immediate, and erroneous. It is shown that by merely varying the numerical parameters of the problems, large deviations in response are found. Not only are the final results affected by the proposed variations, but so is processing fluency. It seems that the magnitudes of the numbers serve as a cue to activate system-2 type reasoning. The second contribution explores Managerial Algorithmics Theory (M. Moldoveanu [2009] Strategic Management Journal, v. 30, pp. 737-763), an ambitious research program which states that managers display cognitive choices with a "preference towards solving problems of low computational complexity". An empirical test of this hypothesis is conducted, with results showing that this premise is not supported. A number of problems are designed with the intent of testing the predictions of managerial algorithmics against the predictions of cognitive psychology. The results demonstrate (once again) that framing effects profoundly affect choice, and (an original insight) that managers are unable to distinguish between computational complexity problem classes. The third contribution explores a new approach to a computationally complex problem in marketing: the shelf space allocation problem (M-H Yang [2001] European Journal of Operational Research, v. 131, pp. 107-118). A new representation for a genetic algorithm is developed, and computational experiments demonstrate its feasibility as a practical solution method. These studies lie at the interface of psychology and economics (with bounded rationality and the heuristics and biases programme); of psychology, strategy, and computational complexity; and of heuristics for computationally hard problems in management science.
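A compact sketch of a genetic algorithm for a toy shelf-space allocation problem (chromosome = facings per product, fitness = profit under a capacity constraint). The representation, operators, and parameters are illustrative assumptions, not the representation developed in the thesis.

```python
# Sketch: toy genetic algorithm for shelf-space allocation. Chromosome =
# number of facings per product; fitness = profit with a penalty for
# exceeding shelf capacity. All numbers are illustrative.
import random

N_PRODUCTS, CAPACITY = 10, 30
profit_per_facing = [random.uniform(1, 5) for _ in range(N_PRODUCTS)]

def fitness(chrom):
    used = sum(chrom)
    profit = sum(p * f for p, f in zip(profit_per_facing, chrom))
    return profit - 100 * max(0, used - CAPACITY)   # penalize infeasibility

def crossover(a, b):
    cut = random.randrange(1, N_PRODUCTS)            # one-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom):
    i = random.randrange(N_PRODUCTS)                 # nudge one gene by +/-1
    chrom[i] = max(0, chrom[i] + random.choice([-1, 1]))
    return chrom

pop = [[random.randint(0, 5) for _ in range(N_PRODUCTS)] for _ in range(50)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                               # truncation selection
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(40)]

best = max(pop, key=fitness)
print(best, fitness(best))
```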
                                
Abstract:
This thesis is devoted to the study of price-setting models and their macroeconomic implications. In the first two chapters I analyze models in which firms' pricing decisions take into account menu costs and information costs. In Chapter 1 I estimate such models using price-change statistics from the United States, and conclude that information costs are significantly larger than menu costs, and that the data clearly favor the model in which information about aggregate conditions is costly while idiosyncratic information is free. In Chapter 2 I investigate the consequences of monetary shocks and disinflation announcements using the previously estimated models. I show that the degree of monetary non-neutrality is larger in the model in which part of the information is free. Chapter 3 is a paper written jointly with Carlos Carvalho (PUC-Rio) and Antonella Tutino (Federal Reserve Bank of Dallas). In the paper we examine a price-setting model in which firms are subject to a Shannon-type information-flow constraint. We calibrate the model and study impulse-response functions to idiosyncratic and aggregate shocks. We show that firms prefer to process aggregate and idiosyncratic information jointly rather than investigating them separately. This type of processing generates more frequent price adjustments, reducing the persistence of the real effects caused by monetary shocks.
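A hedged rendering, in standard rational-inattention notation, of the Shannon-type information-flow constraint the abstract refers to; the exact objective and state variables in the thesis may differ.

```latex
% Sketch of a Shannon information-flow (rational inattention) constraint:
% the firm chooses a signal s about the state x to maximize expected
% profit, subject to a bound \kappa on the mutual information processed.
\max_{f(s \mid x)} \; \mathbb{E}\bigl[\Pi\bigl(p(s), x\bigr)\bigr]
\quad \text{s.t.} \quad
I(s; x) = H(x) - H(x \mid s) \le \kappa
```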
WVIS 2014 - 5th Workshop on Visual Analytics, Information Visualization and Scientific Visualization
                                
                                
Abstract:
In this article we use factor models to describe a certain class of covariance structure for financial time series models. More specifically, we concentrate on situations where the factor variances are modeled by a multivariate stochastic volatility structure. We build on previous work by allowing the factor loadings, in the factor model structure, to have a time-varying structure and to capture changes in asset weights over time, motivated by applications with multiple time series of daily exchange rates. We explore and discuss potential extensions to the models exposed here in the prediction area. This discussion leads to open issues on real-time implementation and natural model comparisons.
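A sketch, in common notation, of the model class the abstract describes: a factor model with time-varying loadings and factor variances following stochastic volatility processes. The exact specification in the article may differ.

```latex
% Generic factor model with time-varying loadings B_t and stochastic
% factor volatilities: log-variances h_{it} follow AR(1) processes and
% the loadings evolve as random walks (notation is illustrative).
y_t = B_t f_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \Sigma),
\qquad f_t \sim N(0, H_t), \quad
H_t = \operatorname{diag}\bigl(e^{h_{1t}}, \dots, e^{h_{kt}}\bigr),
\\[4pt]
h_{it} = \mu_i + \phi_i \,(h_{i,t-1} - \mu_i) + \eta_{it}, \qquad
\operatorname{vec}(B_t) = \operatorname{vec}(B_{t-1}) + \omega_t .
```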
                                
Abstract:
The past decade has witnessed a series of (well accepted and defined) financial crisis periods in the world economy. Most of these events are country specific and eventually spread across neighboring countries, with the concept of vicinity extrapolating geographic maps and entering contagion maps. Unfortunately, what contagion represents and how to measure it are still unanswered questions. In this article we measure the transmission of shocks by cross-market correlation coefficients, following Forbes and Rigobon's (2000) notion of shift-contagion. Our main contribution relies upon the use of traditional factor model techniques combined with stochastic volatility models to study the dependence among Latin American stock price indexes and the North American index. More specifically, we concentrate on situations where the factor variances are modeled by a multivariate stochastic volatility structure. From a theoretical perspective, we improve currently available methodology by allowing the factor loadings, in the factor model structure, to have a time-varying structure and to capture changes in the series' weights over time. By doing this, we believe that the changes and interventions experienced by those five countries are well accommodated by our models, which learn and adapt reasonably fast to those economic and idiosyncratic shocks. We empirically show that the time-varying covariance structure can be modeled by one or two common factors and that some sort of contagion is present in most of the series' covariances during periods of economic instability, or crisis. Open issues on real-time implementation and natural model comparisons are thoroughly discussed.
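For reference, the heteroskedasticity-adjusted cross-market correlation associated with Forbes and Rigobon's shift-contagion test; whether the article applies exactly this adjustment is an assumption.

```latex
% Forbes-Rigobon adjusted correlation: \rho is the correlation measured
% in the (high-volatility) crisis period and \delta the relative increase
% in the source market's return variance during that period.
\rho^{*} = \frac{\rho}{\sqrt{1 + \delta \left(1 - \rho^{2}\right)}},
\qquad
\delta = \frac{\sigma^{2}_{\text{crisis}}}{\sigma^{2}_{\text{calm}}} - 1
```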
                                
                                
Abstract:
This paper evaluates how information asymmetry affects the strength of competition in credit markets. A theory is presented in which adverse selection softens competition by decreasing the incentives creditors have for competing in the interest rate dimension. In equilibrium, although creditors compete, the outcome is similar to collusion. Three empirical implications arise. First, interest rates should respond asymmetrically to changes in the cost of funds: increases in the cost of funds should, on average, have a larger effect on interest rates than decreases. Second, aggressiveness in pricing should be associated with a worsening in bank-level default rates. Third, bank-level default rates should be endogenous. We then verify the validity of these three empirical implications using Brazilian data on consumer overdraft loans. The results in this paper rationalize seemingly abnormally high interest rates in unsecured loans.
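A sketch of a regression implementing the first implication: split changes in the cost of funds into increases and decreases and test whether pass-through differs. The notation is illustrative, not the paper's actual specification.

```latex
% Asymmetric pass-through test (first implication), with
% \Delta c_t^{+} = \max(\Delta c_t, 0) and \Delta c_t^{-} = \min(\Delta c_t, 0)
% splitting cost-of-funds changes into increases and decreases.
\Delta r_{it} = \alpha_i
  + \beta^{+} \, \Delta c_t^{+}
  + \beta^{-} \, \Delta c_t^{-}
  + \varepsilon_{it},
\qquad
H_0 : \beta^{+} = \beta^{-} \quad \text{vs.} \quad H_1 : \beta^{+} > \beta^{-}
```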
                                
Abstract:
We study the effect of social embeddedness on voter turnout by investigating the role of information about other voters’ decisions. We do so in a participation game, where some voters (‘receivers’) are told about some other voters’ (‘senders’) turnout decision at a first stage of the game. Cases are distinguished where the voters support the same or different candidates or where they are uncertain about each other’s preferences. Our experimental results show that such information matters. Participation is much higher when information is exchanged than when it is not. Senders strategically try to use their first mover position and some receivers respond to this.
 
                    