996 results for quantile function
Abstract:
Quantile functions are efficient and equivalent alternatives to distribution functions in the modeling and analysis of statistical data (see Gilchrist, 2000; Nair and Sankaran, 2009). Motivated by this, in the present paper we introduce a quantile-based Shannon entropy function. We also introduce the residual entropy function in the quantile setup and study its properties. Unlike the residual entropy function due to Ebrahimi (1996), the residual quantile entropy function determines the quantile density function uniquely through a simple relationship. The measure is used to define two nonparametric classes of distributions.
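For reference, a worked form of the two measures this abstract describes, reconstructed from the standard change of variable x = Q(u) (so that f(Q(u)) = 1/q(u), with q = Q' the quantile density) rather than quoted from the paper:

```latex
% Shannon entropy in quantile form: substituting x = Q(u), dx = q(u)\,du
% and f(Q(u)) = 1/q(u) into H = -\int f \log f \, dx gives
H = \int_0^1 \log q(u)\, du ,
% and Ebrahimi's (1996) residual entropy, conditioned on survival past
% t = Q(u), becomes the residual quantile entropy
H(u) = \log(1-u) + \frac{1}{1-u}\int_u^1 \log q(p)\, dp .
```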
Abstract:
Di Crescenzo and Longobardi (2002) introduced a measure of uncertainty in past lifetime distributions and studied its relationship with the residual entropy function. In the present paper, we introduce a quantile version of the entropy function in past lifetime and study its properties. Unlike the measure of uncertainty given in Di Crescenzo and Longobardi (2002), the proposed measure uniquely determines the underlying probability distribution. The measure is used to study two nonparametric classes of distributions. We prove characterization theorems for some well-known quantile lifetime distributions.
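By the same substitution as above, the past-lifetime analogue this abstract introduces takes the quantile form below (again a reconstruction from the standard definitions, not a quotation):

```latex
% Past entropy, conditioned on failure before t = Q(u), in quantile form:
\bar{H}(u) = \log u + \frac{1}{u}\int_0^u \log q(p)\, dp .
```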
Abstract:
Department of Statistics, Cochin University of Science and Technology
Abstract:
In the present paper, we introduce a quantile-based Rényi entropy function and its residual version. We study certain properties and applications of the measure. Unlike the residual Rényi entropy function, the quantile version uniquely determines the distribution.
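For concreteness, the order-α Rényi entropy under the substitution x = Q(p) takes the quantile form below (a reconstruction consistent with the abstract, not a quotation):

```latex
% Since \int f^{\alpha}\, dx = \int_0^1 q(p)^{1-\alpha}\, dp under x = Q(p),
% the quantile-based Rényi entropy of order \alpha is
H_\alpha = \frac{1}{1-\alpha}\,\log \int_0^1 q(p)^{1-\alpha}\, dp ,
\qquad \alpha > 0,\ \alpha \neq 1 .
```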
Abstract:
Partial moments are extensively used in the literature for the modeling and analysis of lifetime data. In this paper, we study properties of partial moments using quantile functions. The quantile-based measure determines the underlying distribution uniquely. We then characterize certain lifetime quantile function models. The proposed measure provides alternative definitions for ageing criteria. Finally, we explore the utility of the measure in comparing the characteristics of two lifetime distributions.
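A sketch of the quantile form of the r-th upper partial moment, reconstructed from the usual definition E[(X − t)₊^r] with threshold t = Q(u) (the notation is ours, not necessarily the paper's):

```latex
% r-th partial moment about t = Q(u), after the substitution x = Q(p):
P_r(u) = \mathbb{E}\big[(X - Q(u))_+^{\,r}\big]
       = \int_u^1 \big(Q(p) - Q(u)\big)^{r}\, dp .
```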
Abstract:
Partial moments are extensively used in actuarial science for the analysis of risks. Since the first-order partial moment provides the expected loss in a stop-loss treaty with infinite cover as a function of the priority, it is referred to as the stop-loss transform. In the present work, we discuss distributional and geometric properties of the first- and second-order partial moments defined in terms of the quantile function. Relationships of the scaled stop-loss transform curve with the Lorenz, Gini, Bonferroni and Leimkuhler curves are developed.
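One relationship of the kind described here, derived from the definitions of the stop-loss transform and the Lorenz curve L(u) = μ⁻¹∫₀ᵘ Q(p) dp (our derivation, not a quotation from the paper):

```latex
% First-order partial moment (stop-loss transform) at priority t = Q(u):
\pi(u) = \int_u^1 \big(Q(p) - Q(u)\big)\, dp ,
% which, scaled by the mean \mu, links to the Lorenz curve via
\frac{\pi(u)}{\mu} = 1 - L(u) - \frac{(1-u)\,Q(u)}{\mu} .
```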
Abstract:
For non-negative random variables with finite means we introduce an analogue of the equilibrium residual-lifetime distribution based on the quantile function. This allows us to construct new distributions with support (0, 1), and to obtain a new quantile-based version of the probabilistic generalization of Taylor's theorem. Similarly, for pairs of stochastically ordered random variables we arrive at a new quantile-based form of the probabilistic mean value theorem. The latter involves a distribution that generalizes the Lorenz curve. We investigate the special case of proportional quantile functions and apply the given results to various models based on classes of distributions and measures of risk theory. Motivated by some stochastic comparisons, we also introduce the “expected reversed proportional shortfall order” and a new characterization of random lifetimes involving the reversed hazard rate function.
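A hedged sketch of one natural quantile-based equilibrium construction consistent with this description (an assumption on our part; the paper's exact definition may differ). For a non-negative X with mean μ, μ = ∫₀^∞ F̄(x) dx = ∫₀¹ (1−u) q(u) du, so the classical equilibrium density F̄(x)/μ transfers to (0, 1) as:

```latex
% Image of the equilibrium density f_e(x) = \bar F(x)/\mu under x = Q(u),
% using \bar F(Q(u)) = 1-u and dx = q(u)\,du:
g(u) = \frac{(1-u)\, q(u)}{\mu}, \qquad 0 < u < 1,
\qquad \mu = \int_0^1 (1-u)\, q(u)\, du .
```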
Abstract:
This paper analyzes the measure of systemic importance ΔCoVaR proposed by Adrian and Brunnermeier (2009, 2010) within the context of a similar class of risk measures used in the risk management literature. In addition, we develop a series of testing procedures, based on ΔCoVaR, to identify and rank the systemically important institutions. We stress the importance of statistical testing in interpreting the measure of systemic importance. An empirical application illustrates the testing procedures, using equity data for three European banks.
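A minimal sketch of the quantile-regression estimator commonly used for ΔCoVaR (in the spirit of Adrian and Brunnermeier's two-step approach; the variable names and the 5% level are our assumptions, not the paper's code):

```python
# Sketch: DeltaCoVaR via quantile regression (statsmodels).
# Assumes `inst` and `system` are numpy arrays of returns; illustrative only.
import numpy as np
import statsmodels.api as sm

def delta_covar(inst, system, q=0.05):
    """DeltaCoVaR of the system conditional on the institution: quantile
    regression of system returns on institution returns at level q,
    evaluated at the institution's VaR versus its median."""
    X = sm.add_constant(inst)
    fit_q = sm.QuantReg(system, X).fit(q=q)   # distress-quantile regression
    var_q = np.quantile(inst, q)              # institution VaR at level q
    var_med = np.quantile(inst, 0.5)          # institution median state
    covar_q = fit_q.params[0] + fit_q.params[1] * var_q
    covar_med = fit_q.params[0] + fit_q.params[1] * var_med
    return covar_q - covar_med

# Example with simulated data:
rng = np.random.default_rng(0)
inst = rng.standard_t(5, size=1000)
system = 0.4 * inst + rng.standard_t(5, size=1000)
print(delta_covar(inst, system))
```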
Abstract:
This thesis is composed of three essays on the subjects of macroeconometrics and finance. In each essay, which corresponds to one chapter, the objective is to investigate and analyze advanced econometric techniques applied to relevant macroeconomic questions, such as the capital mobility hypothesis and the sustainability of public debt. A finance topic regarding portfolio risk management is also investigated, through an econometric technique used to evaluate Value-at-Risk models.

The first chapter investigates an intertemporal optimization model to analyze the current account. Based on Campbell & Shiller's (1987) approach, a Wald test is conducted to analyze a set of restrictions imposed on a VAR used to forecast the current account. The estimation is based on three different procedures: OLS, SUR and the two-way error decomposition of Fuller & Battese (1974), due to the presence of global shocks. A note on Granger causality is also provided, which is shown to be a necessary condition to perform the Wald test, with serious implications for the validation of the model. An empirical exercise for the G-7 countries is presented, and the results change substantially with the different estimation techniques. A small Monte Carlo simulation is also presented to investigate the size and power of the Wald test based on the considered estimators.

The second chapter presents a study of fiscal sustainability based on a quantile autoregression (QAR) model. A novel methodology to separate periods of nonstationarity from stationary ones is proposed, which allows one to identify trajectories of public debt that are not compatible with fiscal sustainability. Moreover, such trajectories are used to construct a debt ceiling, that is, the largest value of public debt that does not jeopardize long-run fiscal sustainability. An out-of-sample forecast of such a ceiling is also constructed, and can be used by policy makers interested in keeping the public debt on a sustainable path. An empirical exercise using Brazilian data is conducted to show the applicability of the methodology.

In the third chapter, an alternative backtest to evaluate the performance of Value-at-Risk (VaR) models is proposed. The econometric methodology allows one to directly test the overall performance of a VaR model, as well as to identify periods of increased risk exposure, which seems to be a novelty in the literature. Quantile regressions provide an appropriate environment to investigate VaR models, since a VaR forecast can naturally be viewed as a conditional quantile of a given return series. An empirical exercise is conducted for the daily S&P 500 series, and a Monte Carlo simulation is also presented, revealing that the proposed test might exhibit more power in comparison to other backtests.
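A hedged sketch of a quantile-regression VaR backtest in the spirit of the third chapter (the specification and names are our reconstruction, not the thesis code): regress returns on the VaR forecasts at the coverage quantile and test whether the intercept is 0 and the slope is 1.

```python
# Sketch: quantile-regression backtest of a VaR model (our reconstruction).
# If the VaR forecasts are correct, the conditional tau-quantile of the
# returns given the forecast is the forecast itself: intercept 0, slope 1.
import numpy as np
import statsmodels.api as sm

def vqr_backtest(returns, var_forecasts, tau=0.05):
    X = sm.add_constant(var_forecasts)
    fit = sm.QuantReg(returns, X).fit(q=tau)
    # Joint test of H0: (intercept, slope) = (0, 1)
    test = fit.f_test((np.eye(2), np.array([0.0, 1.0])))
    return fit.params, test.pvalue

# Illustrative run on simulated data with a correctly specified VaR:
rng = np.random.default_rng(1)
sigma = 0.01 * np.sqrt(1 + 0.5 * np.sin(np.arange(2000) / 50.0))
ret = sigma * rng.standard_normal(2000)
var5 = sigma * (-1.645)            # true 5% quantile under normality
params, pval = vqr_backtest(ret, var5, tau=0.05)
print(params, pval)                # p-value should be large here
```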
Abstract:
We study a five-parameter lifetime distribution, the McDonald extended exponential model, which generalizes the exponential, generalized exponential, Kumaraswamy exponential and beta exponential distributions, among others. We obtain explicit expressions for the moments and incomplete moments, quantile and generating functions, mean deviations, Bonferroni and Lorenz curves and the Gini concentration index. The method of maximum likelihood and a Bayesian procedure are adopted for estimating the model parameters. The applicability of the new model is illustrated by means of a real data set.
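Assuming the common Mc-G construction F(x) = I_{G(x)^c}(a/c, b) with an exponential baseline G (our assumption about the parametrization, which varies across papers), the quantile function inverts in closed form via the inverse regularized incomplete beta function, giving inverse-transform sampling directly:

```python
# Sketch: quantile function and inverse-transform sampling for a
# McDonald-exponential model, assuming the Mc-G cdf
# F(x) = I_{G(x)^c}(a/c, b) with baseline G(x) = 1 - exp(-lam*x).
# The parametrization varies across papers; this is one common choice.
import numpy as np
from scipy.special import betaincinv

def mc_exp_quantile(u, a, b, c, lam):
    v = betaincinv(a / c, b, u) ** (1.0 / c)   # invert the beta layer
    return -np.log1p(-v) / lam                 # invert the exponential cdf

# Inverse-transform sampling through the quantile function:
rng = np.random.default_rng(2)
u = rng.uniform(size=5)
print(mc_exp_quantile(u, a=2.0, b=1.5, c=1.0, lam=0.5))
```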
Abstract:
Background: Beyond their regulatory function, the differential expression patterns of microRNAs (miRNAs) have been used to define miRNA signatures and to disclose disease biomarkers. To address the question of whether patients presenting the different types of diabetes mellitus could be distinguished on the basis of their miRNA and mRNA expression profiling, we obtained peripheral blood mononuclear cell (PBMC) RNA from 7 type 1 (T1D), 7 type 2 (T2D), and 6 gestational diabetes (GDM) patients, which was hybridized to Agilent miRNA and mRNA microarrays. Data quantification and quality control were obtained using the Feature Extraction software, and data distributions were normalized using the quantile normalization function implemented in the aroma.light package. Differentially expressed miRNAs/mRNAs were identified using Rank Products, comparing T1D x GDM, T2D x GDM and T1D x T2D. Hierarchical clustering was performed using the average linkage criterion with Pearson uncentered distance as the metric.

Results: The use of the same microarray platform permitted the identification of sets of shared or specific miRNA/mRNA interactions for each type of diabetes. Nine miRNAs (hsa-miR-126, hsa-miR-1307, hsa-miR-142-3p, hsa-miR-142-5p, hsa-miR-144, hsa-miR-199a-5p, hsa-miR-27a, hsa-miR-29b, and hsa-miR-342-3p) were shared among T1D, T2D and GDM, and additional specific miRNAs were identified for T1D (20 miRNAs), T2D (14) and GDM (19) patients. ROC curves allowed the identification of specific and relevant (greater AUC values) miRNAs for each type of diabetes, including: i) hsa-miR-1274a, hsa-miR-1274b and hsa-let-7f for T1D; ii) hsa-miR-222, hsa-miR-30e and hsa-miR-140-3p for T2D; and iii) hsa-miR-181a and hsa-miR-1268 for GDM. Many of these miRNAs targeted mRNAs associated with diabetes pathogenesis.

Conclusions: These results indicate that PBMC can be used as reporter cells to characterize the miRNA expression profiling disclosed by the different diabetes mellitus manifestations. Shared miRNAs may characterize diabetes as a metabolic and inflammatory disorder, whereas specific miRNAs may represent biological markers for each type of diabetes, deserving further attention.
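A minimal sketch of the quantile-normalization idea the abstract refers to (a generic version, not aroma.light's exact algorithm): rank each sample, average the sorted values across samples, and give every sample that shared distribution.

```python
# Sketch: generic quantile normalization of an (n_genes x n_samples)
# expression matrix. Not the aroma.light implementation; ties are
# handled naively by rank order.
import numpy as np

def quantile_normalize(X):
    order = np.argsort(X, axis=0)                  # per-sample sort order
    ranks = np.argsort(order, axis=0)              # rank of each entry
    mean_sorted = np.sort(X, axis=0).mean(axis=1)  # reference distribution
    return mean_sorted[ranks]                      # map ranks -> shared values

X = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0]])
print(quantile_normalize(X))   # every column now has the same distribution
```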
Abstract:
This package includes various Mata functions. kern(): various kernel functions; kint(): kernel integral functions; kdel0(): canonical bandwidth of kernel; quantile(): quantile function; median(): median; iqrange(): inter-quartile range; ecdf(): cumulative distribution function; relrank(): grade transformation; ranks(): ranks/cumulative frequencies; freq(): compute frequency counts; histogram(): produce histogram data; mgof(): multinomial goodness-of-fit tests; collapse(): summary statistics by subgroups; _collapse(): summary statistics by subgroups; gini(): Gini coefficient; sample(): draw random sample; srswr(): SRS with replacement; srswor(): SRS without replacement; upswr(): UPS with replacement; upswor(): UPS without replacement; bs(): bootstrap estimation; bs2(): bootstrap estimation; bs_report(): report bootstrap results; jk(): jackknife estimation; jk_report(): report jackknife results; subset(): obtain subsets, one at a time; composition(): obtain compositions, one by one; ncompositions(): determine number of compositions; partition(): obtain partitions, one at a time; npartitions(): determine number of partitions; rsubset(): draw random subset; rcomposition(): draw random composition; colvar(): variance, by column; meancolvar(): mean and variance, by column; variance0(): population variance; meanvariance0(): mean and population variance; mse(): mean squared error; colmse(): mean squared error, by column; sse(): sum of squared errors; colsse(): sum of squared errors, by column; benford(): Benford distribution; cauchy(): cumulative Cauchy-Lorentz dist.; cauchyden(): Cauchy-Lorentz density; cauchytail(): reverse cumulative Cauchy-Lorentz; invcauchy(): inverse cumulative Cauchy-Lorentz; rbinomial(): generate binomial random numbers; cebinomial(): conditional expectation of binomial r.v.; root(): Brent's univariate zero finder; nrroot(): Newton-Raphson zero finder; finvert(): univariate function inverter; integrate_sr(): univariate function integration (Simpson's rule); integrate_38(): univariate function integration (Simpson's 3/8 rule); ipolate(): linear interpolation; polint(): polynomial inter-/extrapolation; plot(): draw twoway plot; _plot(): draw twoway plot; panels(): identify nested panel structure; _panels(): identify panel sizes; npanels(): identify number of panels; nunique(): count number of distinct values; nuniqrows(): count number of unique rows; isconstant(): whether matrix is constant; nobs(): number of observations; colrunsum(): running sum of each column; linbin(): linear binning; fastlinbin(): fast linear binning; exactbin(): exact binning; makegrid(): equally spaced grid points; cut(): categorize data vector; posof(): find element in vector; which(): positions of nonzero elements; locate(): search an ordered vector; hunt(): consecutive search; cond(): matrix conditional operator; expand(): duplicate single rows/columns; _expand(): duplicate rows/columns in place; repeat(): duplicate contents as a whole; _repeat(): duplicate contents in place; unorder2(): stable version of unorder(); jumble2(): stable version of jumble(); _jumble2(): stable version of _jumble(); pieces(): break string into pieces; npieces(): count number of pieces; _npieces(): count number of pieces; invtokens(): reverse of tokens(); realofstr(): convert string into real; strexpand(): expand string argument; matlist(): display a (real) matrix; insheet(): read spreadsheet file; infile(): read free-format file; outsheet(): write spreadsheet file; callf(): pass optional args to function; callf_setup(): setup for mm_callf().
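As an illustration of the quantile()/ecdf() pair at the head of this list (a language-neutral sketch in Python, not the Mata code): the empirical quantile function is the generalized inverse of the empirical cdf.

```python
# Sketch: empirical cdf and its generalized inverse (the empirical
# quantile function), the pair exposed above as ecdf() and quantile().
import numpy as np

def ecdf(x, t):
    """Fraction of observations <= t."""
    return np.searchsorted(np.sort(x), t, side="right") / len(x)

def quantile(x, p):
    """Smallest observation whose ecdf value is >= p (type-1 quantile)."""
    xs = np.sort(x)
    idx = np.ceil(p * len(xs)).astype(int) - 1
    return xs[np.clip(idx, 0, len(xs) - 1)]

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
print(ecdf(x, 3.0), quantile(x, np.array([0.25, 0.5, 0.9])))
```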
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
In this paper we investigate a Bayesian procedure for the estimation of a flexible generalised distribution, notably the MacGillivray adaptation of the g-and-k distribution. This distribution, described through its inverse cdf or quantile function, generalises the standard normal through extra parameters which together describe skewness and kurtosis. The standard quantile-based methods for estimating the parameters of generalised distributions are often arbitrary and do not rely on computation of the likelihood. MCMC, however, provides a simulation-based alternative for obtaining the maximum likelihood estimates of parameters of these distributions, or for deriving posterior estimates of the parameters through a Bayesian framework. In this paper we adopt the latter approach. The proposed methodology is illustrated through an application in which the parameter of interest is slightly skewed.
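For concreteness, the g-and-k quantile function in its usual form (our reconstruction of the standard definition, with the conventional c = 0.8; the MacGillivray adaptation used in the paper may differ):

```python
# Sketch: the g-and-k quantile function
#   Q(u) = A + B*(1 + c*tanh(g*z/2))*(1 + z^2)**k * z,  z = Phi^{-1}(u),
# where A, B, g, k are location, scale, skewness and kurtosis parameters
# and c = 0.8 is the usual convention.
import numpy as np
from scipy.stats import norm

def gk_quantile(u, A, B, g, k, c=0.8):
    z = norm.ppf(u)
    return A + B * (1 + c * np.tanh(g * z / 2.0)) * (1 + z**2) ** k * z

# Simulating by inverse transform, as done inside MCMC or likelihood-free
# schemes for this family:
rng = np.random.default_rng(3)
sample = gk_quantile(rng.uniform(size=5), A=0.0, B=1.0, g=0.5, k=0.1)
print(sample)
```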