886 results for real option analysis
Abstract:
One of the disadvantages of old age is that there is more past than future: this, however, may be turned into an advantage if the wealth of experience and, hopefully, wisdom gained in the past can be reflected upon and throw some light on possible future trends. To an extent, then, this talk is necessarily personal, certainly nostalgic, but also self-critical and inquisitive about our understanding of the discipline of statistics. A number of almost philosophical themes will run through the talk: search for appropriate modelling in relation to the real problem envisaged, emphasis on sensible balances between simplicity and complexity, the relative roles of theory and practice, the nature of communication of inferential ideas to the statistical layman, the inter-related roles of teaching, consultation and research. A list of keywords might be: identification of sample space and its mathematical structure, choices between transform and stay, the role of parametric modelling, the role of a sample space metric, the underused hypothesis lattice, the nature of compositional change, particularly in relation to the modelling of processes. While the main theme will be relevance to compositional data analysis we shall point to substantial implications for general multivariate analysis arising from experience of the development of compositional data analysis…
Abstract:
This paper provides updated empirical evidence about the real and nominal effects of monetary policy in Italy, by using structural VAR analysis. We discuss different empirical approaches that have been used to identify exogenous monetary policy shocks. We argue that the data support the view that the Bank of Italy, at least in the recent past, has been targeting the rate on overnight interbank loans. Therefore, we interpret shocks to the overnight rate as purely exogenous monetary policy shocks and study how different macroeconomic variables react to such shocks.
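As a rough, language-neutral illustration of the workflow the abstract describes (a hypothetical sketch, not the authors' data, specification or identification scheme), a small VAR can be fitted and orthogonalized impulse responses to an interest-rate shock examined; the variable names, lag length and recursive (Cholesky) ordering below are all assumptions:

```python
# Hypothetical sketch: reduced-form VAR plus orthogonalized (Cholesky) impulse
# responses to an overnight-rate shock. Data, ordering and lag length are
# placeholders, not the paper's dataset or identification strategy.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
# Stand-in monthly data: output, prices, overnight rate (ordered last, so its
# orthogonalized innovation plays the role of the "monetary policy shock").
data = pd.DataFrame(rng.normal(size=(200, 3)),
                    columns=["output", "prices", "overnight_rate"])

results = VAR(data).fit(maxlags=4)
irf = results.irf(24)                  # impulse responses up to 24 periods
# Responses of all variables to an orthogonalized overnight-rate shock:
print(irf.orth_irfs[:6, :, 2])         # first 6 horizons, shock in column 2
```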
Abstract:
The aim of this study was to assess whether Neisseria meningitidis, Listeria monocytogenes, Streptococcus pneumoniae and Haemophilus influenzae can be identified using the polymerase chain reaction technique in the cerebrospinal fluid of severely decomposed bodies with known, noninfectious causes of death or whether postmortem changes can lead to false positive results and thus erroneous diagnostic information. Biochemical investigations, postmortem bacteriology and real-time polymerase chain reaction analysis in cerebrospinal fluid were performed in a series of medico-legal autopsies that included noninfectious causes of death with decomposition, bacterial meningitis without decomposition, bacterial meningitis with decomposition, low respiratory tract infections with decomposition and abdominal infections with decomposition. In noninfectious causes of death with decomposition, postmortem investigations failed to reveal results consistent with generalized inflammation or bacterial infections at the time of death. Real-time polymerase chain reaction analysis in cerebrospinal fluid did not identify the studied bacteria in any of these cases. The results of this study highlight the usefulness of molecular approaches in bacteriology as well as the use of alternative biological samples in postmortem biochemistry in order to obtain suitable information even in corpses with severe decompositional changes.
Abstract:
This paper presents a comparative analysis of linear and mixed models for short-term forecasting of a real data series with a high percentage of missing data. The data are the series of significant wave heights registered at regular periods of three hours by a buoy placed in the Bay of Biscay. The series is interpolated with a linear predictor which minimizes the forecast mean square error. The linear models are seasonal ARIMA models and the mixed models have a linear component and a non-linear seasonal component. The non-linear component is estimated by a non-parametric regression of data versus time. Short-term forecasts, no more than two days ahead, are of interest because they can be used by the port authorities to notify the fleet. Several models are fitted and compared by their forecasting behavior.
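For orientation only, the kind of seasonal ARIMA forecasting exercise described here can be sketched as follows; the synthetic series, the (1,0,1)x(1,0,1,8) orders and the handling of missing values are illustrative assumptions, not the models estimated in the paper:

```python
# Illustrative sketch: seasonal ARIMA forecasts for a 3-hourly series with
# missing values. Orders and data are placeholders, not the paper's models.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 500
t = np.arange(n)
# Synthetic stand-in for significant wave heights sampled every three hours,
# with a daily (period-8) cycle and randomly missing observations.
y = 2.0 + 0.8 * np.sin(2 * np.pi * t / 8) + rng.normal(0, 0.3, n)
y[rng.choice(n, size=50, replace=False)] = np.nan

# The state-space formulation fills the gaps via the Kalman filter, in the
# spirit of the minimum-MSE linear interpolation mentioned in the abstract.
fit = SARIMAX(y, order=(1, 0, 1), seasonal_order=(1, 0, 1, 8)).fit(disp=False)

# Forecast up to two days ahead (16 three-hour steps).
print(fit.forecast(steps=16))
```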
Abstract:
I discuss several lessons regarding the design and conduct of monetary policy that have emerged out of the New Keynesian research program. Those lessons include the benefits of price stability, the gains from commitment about future policies, the importance of natural variables as benchmarks for policy, and the benefits of a credible anti-inflationary stance. I also point to one challenge facing NK modelling efforts: the need to come up with relevant sources of policy tradeoffs. A potentially useful approach to meeting that challenge, based on the introduction of real imperfections, is presented.
Abstract:
We develop and estimate a structural model of inflation that allows for a fraction of firms that use a backward-looking rule to set prices. The model nests the purely forward-looking New Keynesian Phillips curve as a particular case. We use measures of marginal costs as the relevant determinant of inflation, as the theory suggests, instead of an ad-hoc output gap. Real marginal costs are a significant and quantitatively important determinant of inflation. Backward-looking price setting, while statistically significant, is not quantitatively important. Thus, we conclude that the New Keynesian Phillips curve provides a good first approximation to the dynamics of inflation.
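For reference, the hybrid specification described here is usually written in the following standard form (generic notation shown for orientation; the coefficients are not the paper's estimates):

```latex
% Hybrid New Keynesian Phillips curve (standard form):
%   \pi_t  inflation,  \widehat{mc}_t  real marginal cost,
%   \gamma_f / \gamma_b  weights on forward- and backward-looking price setting.
\[
  \pi_t \;=\; \lambda\,\widehat{mc}_t \;+\; \gamma_f\,\mathbb{E}_t[\pi_{t+1}] \;+\; \gamma_b\,\pi_{t-1}
\]
% The purely forward-looking New Keynesian Phillips curve is nested as the
% special case \(\gamma_b = 0\).
```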
Abstract:
The case of two transition tables is considered, that is, two square asymmetric matrices of frequencies where the rows and columns of the matrices are the same objects observed at three different time points. Different ways of visualizing the tables, either separately or jointly, are examined. We generalize an existing idea, where a square matrix is decomposed into symmetric and skew-symmetric parts, to two matrices, leading to a decomposition into four components: (1) average symmetric, (2) average skew-symmetric, (3) symmetric difference from average, and (4) skew-symmetric difference from average. The method is illustrated with an artificial example and an example using real data from a study of changing values over three generations.
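A minimal numerical sketch of the four-component decomposition, assuming the generalization works through the average and half-difference of the two tables (an interpretation for illustration, not the paper's notation):

```python
# Minimal numpy sketch: split two square transition tables A and B into the
# four components named above, via their average and half-difference.
import numpy as np

def four_components(A, B):
    """Average/difference x symmetric/skew-symmetric decomposition."""
    avg = (A + B) / 2.0
    dif = (A - B) / 2.0
    sym = lambda M: (M + M.T) / 2.0          # symmetric part
    skw = lambda M: (M - M.T) / 2.0          # skew-symmetric part
    return {
        "average_symmetric": sym(avg),
        "average_skew_symmetric": skw(avg),
        "symmetric_difference_from_average": sym(dif),
        "skew_symmetric_difference_from_average": skw(dif),
    }

A = np.array([[10, 3, 1], [2, 8, 4], [5, 1, 9]], dtype=float)
B = np.array([[9, 2, 2], [4, 7, 3], [3, 2, 11]], dtype=float)
parts = four_components(A, B)
# The four parts sum back to A (and average minus difference recovers B).
assert np.allclose(sum(parts.values()), A)
```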
Abstract:
The Ney is an end-blown flute which is mainly used for Makam music. Although, from the beginning of the 20th century, a score representation based on extending Western music notation has been used, because of its rich articulation repertoire actual Ney music cannot be totally represented by the written score. The Ney is still taught and transmitted orally in Turkey. Because of that, the performance has a distinct and important role in Ney music. Therefore, signal analysis of Ney performances is crucial for understanding the actual music. Another important aspect, which is also a part of the performance, is the articulations that performers apply. In Makam music in Turkey none of the articulations are taught or even named by teachers. Articulations in Ney are valuable for understanding the real performance. Since articulations are not taught and their places are not marked in the score, the choice and character of the articulation is unique for each performer, which also makes each performance unique. Our method analyzes audio files of well-known Turkish Ney players. In order to obtain our analysis data, we analyzed audio files of 8 different performers, ranging from 1920 to 2000.
Abstract:
Accurate determination of subpopulation sizes in bimodal populations remains problematic, yet it represents a powerful way by which cellular heterogeneity under different environmental conditions can be compared. So far, most studies have relied on qualitative descriptions of population distribution patterns, on population-independent descriptors, or on arbitrary placement of thresholds distinguishing biological ON from OFF states. We found that all these methods fall short of accurately describing small population sizes in bimodal populations. Here we propose a simple, statistics-based method for the analysis of small subpopulation sizes for use in the free software environment R and test this method on real as well as simulated data. Four so-called population splitting methods were designed with different algorithms that can estimate subpopulation sizes from bimodal populations. All four methods proved more precise than previously used methods when analyzing subpopulation sizes of transfer-competent cells arising in populations of the bacterium Pseudomonas knackmussii B13. The methods' resolving powers were further explored by bootstrapping and simulations. Two of the methods were not severely limited by the proportions of subpopulations they could estimate correctly, but the two others only allowed accurate subpopulation quantification when this amounted to less than 25% of the total population. In contrast, only one method was still sufficiently accurate with subpopulations smaller than 1% of the total population. This study proposes a number of rational approximations to quantifying small subpopulations and offers an easy-to-use protocol for their implementation in the open source statistical software environment R.
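The population splitting methods themselves are provided as an R protocol; purely as a generic stand-in (not the authors' algorithms), a two-component mixture fit illustrates how a small subpopulation fraction can be estimated from a bimodal signal distribution:

```python
# Hedged illustration only: estimating the size of the smaller mode of a bimodal
# population with a two-component Gaussian mixture. This is a generic stand-in,
# not the R-based "population splitting" methods proposed in the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Simulated single-cell signals: 95% OFF cells, 5% ON cells (log-scale values).
off = rng.normal(loc=2.0, scale=0.3, size=9500)
on = rng.normal(loc=4.0, scale=0.4, size=500)
signal = np.concatenate([off, on]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(signal)
# The smaller mixture weight estimates the minority (ON) subpopulation fraction.
print("estimated ON fraction:", gmm.weights_.min())   # close to 0.05
```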
Abstract:
Several ink dating methods based on solvent analysis using gas chromatography/mass spectrometry (GC/MS) were proposed in the last decades. These methods follow the drying of solvents from ballpoint pen inks on paper and seem very promising. However, several questions arose over the last few years among questioned document examiners regarding the transparency and reproducibility of the proposed techniques. These questions should be carefully studied for accurate and ethical application of this methodology in casework. Inspired by a real investigation involving ink dating, the present paper discusses this particular issue through four main topics: aging processes, dating methods, validation procedures and data interpretation. This work presents a wide picture of the ink dating field, warns about potential shortcomings and also proposes some solutions to avoid reporting errors in court.
Abstract:
Until recently, the hard X-ray, phase-sensitive imaging technique called grating interferometry was thought to provide information only in real space. However, by utilizing an alternative approach to data analysis we demonstrated that the angular resolved ultra-small angle X-ray scattering distribution can be retrieved from experimental data. Thus, reciprocal space information is accessible by grating interferometry in addition to real space. Naturally, the quality of the retrieved data strongly depends on the performance of the employed analysis procedure, which in this context involves deconvolution of periodic and noisy data. The aim of this article is to compare several deconvolution algorithms for retrieving the ultra-small angle X-ray scattering distribution in grating interferometry. We quantitatively compare the performance of three deconvolution procedures (i.e., Wiener, iterative Wiener and Lucy-Richardson) in the case of realistically modeled, noisy and periodic input data. The simulations showed that the Lucy-Richardson algorithm is the most reliable and most efficient given the characteristics of the signals in this context. The availability of a reliable data analysis procedure is essential for future developments in grating interferometry.
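A minimal sketch of a Lucy-Richardson style iteration for periodic 1D data, under circular-convolution assumptions (a textbook illustration, not the authors' analysis code):

```python
# Generic Richardson-Lucy deconvolution for a periodic 1D signal, implemented
# with FFT-based circular convolution; not the paper's implementation.
import numpy as np

def richardson_lucy_1d(observed, psf, n_iter=50, eps=1e-12):
    """Deconvolve a periodic 1D signal with a known, index-0-centred PSF."""
    psf = psf / psf.sum()
    psf_ft = np.fft.fft(psf)
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.real(np.fft.ifft(np.fft.fft(estimate) * psf_ft))
        ratio = observed / np.maximum(blurred, eps)
        # Circular correlation with the PSF (the adjoint of circular convolution).
        correction = np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(psf_ft)))
        estimate = estimate * correction
    return estimate

# Usage sketch: Gaussian PSF centred on index 0 (periodic convention).
n = 256
x = np.arange(n)
psf = np.exp(-0.5 * (np.minimum(x, n - x) / 3.0) ** 2)
truth = np.zeros(n); truth[60:70] = 1.0
observed = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(psf / psf.sum())))
recovered = richardson_lucy_1d(observed, psf, n_iter=100)
```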
Abstract:
Traditional culture-dependent methods to quantify and identify airborne microorganisms are limited by factors such as short-duration sampling times and the inability to count nonculturable or non-viable bacteria. Consequently, the quantitative assessment of bioaerosols is often underestimated. Use of the real-time quantitative polymerase chain reaction (Q-PCR) to quantify bacteria in environmental samples presents an alternative method, which should overcome this problem. The aim of this study was to evaluate the performance of a real-time Q-PCR assay as a simple and reliable way to quantify the airborne bacterial load within poultry houses and sewage treatment plants, in comparison with epifluorescence microscopy and culture-dependent methods. The estimates of bacterial load that we obtained from real-time PCR and epifluorescence methods are comparable; however, our analysis of sewage treatment plants indicates these methods give values 270-290 fold greater than those obtained by the "impaction on nutrient agar" method. The culture-dependent method of air impaction on nutrient agar was also inadequate in poultry houses, as was the impinger-culture method, which gave a bacterial load estimate 32-fold lower than obtained by Q-PCR. Real-time quantitative PCR thus proves to be a reliable, discerning, and simple method that could be used to estimate the airborne bacterial load in a broad variety of other environments expected to carry high numbers of airborne bacteria.
Abstract:
We review methods to estimate the average crystal (grain) size and the crystal (grain) size distribution in solid rocks. Average grain sizes often provide the basis for stress estimates or rheological calculations requiring the quantification of grain sizes in a rock's microstructure. The primary grain size data are either 1D (i.e. line intercept methods), 2D (area analysis) or 3D (e.g., computed tomography, serial sectioning). These data have been used for different data treatments over the years, and several studies assume a certain probability function (e.g., logarithm, square root) to calculate statistical parameters such as the mean, median, mode or the skewness of a crystal size distribution. The finally calculated average grain sizes have to be compatible between the different grain size estimation approaches in order to be properly applied, for example, in paleo-piezometers or grain size sensitive flow laws. Such compatibility is tested for different data treatments using one- and two-dimensional measurements. We propose an empirical conversion matrix for different datasets. These conversion factors provide the option to make different datasets compatible with each other, even though the primary calculations were obtained in different ways. In order to present an average grain size in the case of unimodal grain size distributions, we propose to use the area-weighted mean for 2D measurements and the volume-weighted mean for 3D measurements. The shape of the crystal size distribution is important for studies of nucleation and growth of minerals. The shape of the crystal size distribution of garnet populations is compared between different 2D and 3D measurements, namely serial sectioning and computed tomography. The comparison of directly measured 3D data, stereological data and directly presented 2D data shows the problems of the quality of the smallest grain sizes and the overestimation of small grain sizes in stereological tools, depending on the type of CSD.
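As a small illustration of the recommended 2D summary statistic, an area-weighted mean grain size can be computed as below; the use of area-equivalent circle diameters is an assumed convention for the sketch, and the paper may define the weighting differently:

```python
# Sketch of an area-weighted mean grain size from 2D grain (section) areas,
# using area-equivalent circle diameters as an assumed size measure.
import numpy as np

def area_weighted_mean_diameter(areas):
    """areas: 1D array of 2D grain section areas (consistent units)."""
    areas = np.asarray(areas, dtype=float)
    diameters = 2.0 * np.sqrt(areas / np.pi)   # area-equivalent circle diameters
    return np.sum(areas * diameters) / np.sum(areas)

print(area_weighted_mean_diameter([12.0, 30.0, 5.5, 80.0]))  # arbitrary units
```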
Abstract:
BACKGROUND: Finding genes that are differentially expressed between conditions is an integral part of understanding the molecular basis of phenotypic variation. In the past decades, DNA microarrays have been used extensively to quantify the abundance of mRNA corresponding to different genes, and more recently high-throughput sequencing of cDNA (RNA-seq) has emerged as a powerful competitor. As the cost of sequencing decreases, it is conceivable that the use of RNA-seq for differential expression analysis will increase rapidly. To exploit the possibilities and address the challenges posed by this relatively new type of data, a number of software packages have been developed especially for differential expression analysis of RNA-seq data. RESULTS: We conducted an extensive comparison of eleven methods for differential expression analysis of RNA-seq data. All methods are freely available within the R framework and take as input a matrix of counts, i.e. the number of reads mapping to each genomic feature of interest in each of a number of samples. We evaluate the methods based on both simulated data and real RNA-seq data. CONCLUSIONS: Very small sample sizes, which are still common in RNA-seq experiments, impose problems for all evaluated methods and any results obtained under such conditions should be interpreted with caution. For larger sample sizes, the methods combining a variance-stabilizing transformation with the 'limma' method for differential expression analysis perform well under many different conditions, as does the nonparametric SAMseq method.
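As a schematic of the count-matrix input and the transformation-based strategy mentioned in the conclusions (not limma/voom or SAMseq themselves, only a hypothetical minimal stand-in):

```python
# Hedged illustration of the general idea: transform a genes x samples count
# matrix to log counts-per-million (a rough variance-stabilizing step) and test
# each gene between two groups. This is NOT the 'limma' or SAMseq method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
counts = rng.poisson(lam=50, size=(1000, 6))     # genes x samples count matrix
group = np.array([0, 0, 0, 1, 1, 1])             # two conditions, 3 samples each

lib_size = counts.sum(axis=0)
log_cpm = np.log2((counts + 0.5) / (lib_size + 1.0) * 1e6)

# Per-gene two-sample t-test on the transformed values.
t, p = stats.ttest_ind(log_cpm[:, group == 0], log_cpm[:, group == 1], axis=1)
print("genes with p < 0.05:", int((p < 0.05).sum()))
```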
Abstract:
The work presented evaluates the statistical characteristics of regional bias and expected error in reconstructions of real positron emission tomography (PET) data from human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task of evaluating radioisotope uptake in regions-of-interest (ROIs) is investigated. An assessment of bias and variance in uptake measurements is carried out with simulated data. Then, by using three different transition matrices with different degrees of accuracy and a components-of-variance model for statistical analysis, it is shown that the characteristics obtained from real human FDG brain data are consistent with the results of the simulation studies.
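For orientation, MLE reconstruction of this kind is typically computed with an EM (MLEM) update of the form sketched below; the toy system matrix and iteration count are assumptions, and the paper's stopping rule is not reproduced:

```python
# Generic sketch of the MLEM (maximum-likelihood expectation-maximization)
# update used in PET reconstruction, with a toy system matrix.
import numpy as np

def mlem(A, y, n_iter=20, eps=1e-12):
    """A: system (transition) matrix, y: measured projections; returns image."""
    x = np.ones(A.shape[1])               # flat initial image
    sensitivity = A.sum(axis=0)           # column sums, i.e. A^T 1
    for _ in range(n_iter):
        projection = A @ x
        ratio = y / np.maximum(projection, eps)
        x *= (A.T @ ratio) / np.maximum(sensitivity, eps)
    return x

rng = np.random.default_rng(3)
A = rng.random((64, 32))                  # toy system matrix
true_image = rng.random(32)
y = A @ true_image                        # noiseless toy projections
print(mlem(A, y, n_iter=50)[:5])
```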