128 results for Zero-one laws


Relevance:

20.00%

Publisher:

Abstract:
We analyze the statistics of rain-event sizes, rain-event durations, and dry-spell durations in a network of 20 rain gauges scattered in an area close to the NW Mediterranean coast. Power-law distributions emerge clearly for the dry-spell durations, with an exponent around 1.50 ± 0.05, although for event sizes and durations the power-law ranges are rather limited in some cases. Deviations from power-law behavior are attributed to finite-size effects. A scaling analysis helps to elucidate the situation, providing support for the existence of scale invariance in these distributions. It is remarkable that rain data of not very high resolution yield findings in agreement with self-organized critical phenomena.
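An exponent like the one quoted above is typically obtained by maximum likelihood. As a rough illustration (not the authors' actual pipeline), the standard continuous power-law MLE applied to synthetic data with a known exponent:

```python
import numpy as np

def powerlaw_mle(x, xmin):
    """Continuous maximum-likelihood estimate of the power-law exponent
    alpha for samples x >= xmin, with standard error (alpha - 1)/sqrt(n)."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    n = x.size
    alpha = 1.0 + n / np.sum(np.log(x / xmin))
    return alpha, (alpha - 1.0) / np.sqrt(n)

# Synthetic check: draw samples with a known exponent by inverse transform.
rng = np.random.default_rng(0)
true_alpha, xmin = 1.5, 1.0
u = rng.random(200_000)
samples = xmin * (1.0 - u) ** (-1.0 / (true_alpha - 1.0))
alpha_hat, err = powerlaw_mle(samples, xmin)
print(round(alpha_hat, 2))  # close to 1.5
```

The estimator recovers the planted exponent to within its standard error; on real dry-spell data one would additionally have to choose `xmin` and test for finite-size cutoffs, as the abstract notes.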


Measuring the contribution of individual transactions to the total risk of a credit portfolio is a major issue in financial institutions. VaR Contributions (VaRC) and Expected Shortfall Contributions (ESC) have become two popular ways of quantifying these risks. However, the usual Monte Carlo (MC) approach is known to be a very time-consuming method for computing these risk contributions. In this paper we consider the Wavelet Approximation (WA) method for Value at Risk (VaR) computation presented in [Mas10] in order to calculate the Expected Shortfall (ES) and the risk contributions under the Vasicek one-factor model framework. We decompose the VaR and the ES as a sum of sensitivities representing the marginal impact on the total portfolio risk. Moreover, we present technical improvements to the Wavelet Approximation (WA) that considerably reduce the computational effort of the approximation while at the same time increasing its accuracy.
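For context on the MC baseline the paper improves upon, here is a plain Monte Carlo sketch of ES and ES contributions under the Vasicek one-factor model. All parameter values (number of obligors, correlation, PD, exposures) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obligors, n_sims, rho, alpha = 50, 100_000, 0.15, 0.99
ead = np.full(n_obligors, 1.0 / n_obligors)  # equal exposures (assumption)
thr = -2.054          # default threshold: Phi^{-1}(PD) for PD = 2%

# Vasicek one-factor model: asset value = sqrt(rho)*Z + sqrt(1-rho)*eps_i,
# where Z is the common systematic factor and eps_i is idiosyncratic.
Z = rng.standard_normal((n_sims, 1))
eps = rng.standard_normal((n_sims, n_obligors))
defaults = np.sqrt(rho) * Z + np.sqrt(1 - rho) * eps < thr

losses_i = defaults * ead       # per-obligor loss in each scenario
L = losses_i.sum(axis=1)        # portfolio loss

var = np.quantile(L, alpha)     # Value at Risk at level alpha
tail = L >= var
es = L[tail].mean()             # Expected Shortfall
esc = losses_i[tail].mean(axis=0)  # ES contributions, summing to ES
print(round(es, 4), np.isclose(esc.sum(), es))
```

The additivity `esc.sum() == es` is the decomposition into marginal impacts the abstract refers to; the WA method replaces this brute-force tail averaging with a wavelet expansion of the loss distribution.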


What abortion laws ought a liberal political community to have? Much has been said about the moral problem of abortion, but there has not yet been (to my knowledge) a fully articulated account of the bearing of the competing answers to this ethical problem on liberal public reason. The first part of my project consists in a critical review of the different attempts to solve the various philosophical puzzles, both metaphysical and moral, posed by the abortion problem. Why is it wrong to kill beings like you and me? By answering this question we shall gain a better insight into those properties we have that give us such strong reasons against killing beings like us. Here we face a tremendous philosophical difficulty, for it is not possible to determine what the most robust account of the wrongness of killing is without dealing with deeper metaethical and metaphysical problems. Indeed, consequentialist and nonconsequentialist moral theories differ on what it is that makes an action morally wrong: is it just the outcome of the action as compared with the outcomes of its alternatives? Or is it something else? Also, what are we essentially? Is the foetus merely our precursor? Then killing a foetus is relevantly similar to contraception. Or is the foetus one of us? If so, when we kill it, are we depriving it of a future as valuable as ours? Perhaps the relation of identity (the fact that it is its future as opposed to someone else's) doesn't matter. That may be because the foetus is an aggregate of biological and psychological facts, and perhaps aggregates are not substances. Or maybe it is a substance but only psychological relations matter, not personal identity. The second part of my project has to do with the different status these metaphysical and ethical positions ought to have in liberal public reason.
Though this is the part in which most research is still needed, my own intuition is that, given the depth of the philosophical views in competition, restrictive abortion laws ought to be considered disrespectful of citizens' autonomy.


The following document details the implementation, from scratch, of a J2EE application. Correspondence with a real-world scenario was not a guiding criterion, since the basic objective is the assembly of the different tools available in the J2EE ecosystem.


Creating an ontology from scratch is a long and arduous task that can be simplified if, starting from a more general ontology, the parts that do not belong to the domain of interest can be pruned away. This report has a double focus: on the one hand, a survey of the state of the art of ontologies (history, applications, lines of work, etc.), and on the other, the analysis and design of a Java plug-in for Protégé that implements the pruning algorithm.
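The pruning idea can be illustrated on a toy class hierarchy: keep the domain concepts and all their ancestors, and drop everything else. This is only a minimal sketch in Python (the actual plug-in is written in Java against Protégé's class model), and all class names are invented:

```python
# Hypothetical class hierarchy: child -> parent (single inheritance for brevity).
parents = {
    "Dog": "Mammal", "Cat": "Mammal", "Mammal": "Animal",
    "Sparrow": "Bird", "Bird": "Animal", "Animal": "Thing",
    "Car": "Vehicle", "Vehicle": "Thing",
}

def prune(parents, keep):
    """Keep the domain concepts and all their ancestors; drop the rest."""
    kept = set()
    for concept in keep:
        while concept is not None:   # walk up to the root
            kept.add(concept)
            concept = parents.get(concept)
    return {c: p for c, p in parents.items() if c in kept}

pruned = prune(parents, {"Dog", "Cat"})
print(sorted(pruned))  # ['Animal', 'Cat', 'Dog', 'Mammal']
```

A real pruner must also handle multiple inheritance and decide what to do with properties and instances attached to removed classes, which is where the design work of the plug-in lies.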


The application of compositional data analysis through log-ratio transformations corresponds to a multinomial logit model for the shares themselves. This model is characterized by the property of Independence of Irrelevant Alternatives (IIA). IIA states that the odds ratio (in this case the ratio of shares) is invariant to the addition or deletion of outcomes to the problem. It is exactly this invariance of the ratio that underlies the commonly used zero replacement procedure in compositional data analysis. In this paper we investigate using the nested logit model, which does not embody IIA, and an associated zero replacement procedure, and compare its performance with that of the more usual approach of using the multinomial logit model. Our comparisons exploit a data set that combines voting data by electoral division with corresponding census data for each division for the 2001 Federal election in Australia.
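The IIA property is easy to verify numerically: in a multinomial logit, deleting an alternative rescales the remaining shares but leaves their ratios untouched. A minimal sketch, where the utilities `v` are arbitrary illustrative values:

```python
import numpy as np

def mnl_shares(v):
    """Multinomial logit shares: softmax of the utilities (stabilised)."""
    e = np.exp(v - np.max(v))
    return e / e.sum()

v = np.array([1.0, 0.2, -0.5, 0.8])
full = mnl_shares(v)
reduced = mnl_shares(v[:3])   # delete the fourth alternative

# IIA: the odds ratio of any two remaining alternatives is unchanged.
print(np.isclose(full[0] / full[1], reduced[0] / reduced[1]))  # True
```

It is exactly this ratio invariance that a nested logit relaxes within nests, which is what motivates the alternative zero replacement procedure studied in the paper.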


The R package “compositions” is a tool for advanced compositional analysis. Its basic functionality has seen some conceptual improvement, now containing facilities to work with and represent ilr bases built from balances, and an elaborated subsystem for dealing with several kinds of irregular data: (rounded or structural) zeros, incomplete observations and outliers. The general approach to these irregularities is based on subcompositions: for an irregular datum, one can distinguish a “regular” subcomposition (where all parts are actually observed and the datum behaves typically) and a “problematic” subcomposition (with those unobserved, zero or rounded parts, or else where the datum shows an erratic or atypical behaviour). Systematic classification schemes are proposed for both outliers and missing values (including zeros), focusing on the nature of irregularities in the datum subcomposition(s).
To compute statistics with values missing at random and structural zeros, a projection approach is implemented: a given datum contributes to the estimation of the desired parameters only on the subcomposition where it was observed. For data sets with values below the detection limit, two different approaches are provided: the well-known imputation technique, and also the projection approach.
To compute statistics in the presence of outliers, robust statistics are adapted to the characteristics of compositional data, based on the minimum covariance determinant approach. The outlier classification is based on four different models of outlier occurrence and Monte-Carlo-based tests for their characterization. Furthermore, the package provides special plots helping to understand the nature of outliers in the dataset.
Keywords: coda-dendrogram, lost values, MAR, missing data, MCD estimator, robustness, rounded zeros
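As an illustration of one of the facilities mentioned (ilr bases built from balances), here is a minimal sketch of a single balance coordinate between two groups of parts. This is the textbook formula, not the package's own code, and the example composition is invented:

```python
import numpy as np

def balance(x, num, den):
    """ilr balance coordinate between part groups `num` and `den`:
    sqrt(r*s/(r+s)) * ln( gmean(x[num]) / gmean(x[den]) ),
    where r and s are the group sizes."""
    gmean = lambda idx: np.exp(np.mean(np.log(x[idx])))
    r, s = len(num), len(den)
    return np.sqrt(r * s / (r + s)) * np.log(gmean(num) / gmean(den))

x = np.array([0.2, 0.3, 0.1, 0.4])   # a 4-part composition
b1 = balance(x, [0, 1], [2, 3])      # {x1, x2} versus {x3, x4}
print(round(b1, 4))
```

A full ilr basis is a sequence of such balances from a sequential binary partition of the parts; the package represents that partition as a coda-dendrogram.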


Quantitative linguistics has provided us with a number of empirical laws that characterise the evolution of languages and competition amongst them. In terms of language usage, one of the most influential results is Zipf’s law of word frequencies. Zipf’s law appears to be universal, and may not even be unique to human language. However, there is ongoing controversy over whether Zipf’s law is a good indicator of complexity. Here we present an alternative approach that puts Zipf’s law in the context of critical phenomena (the cornerstone of complexity in physics) and establishes the presence of a large-scale “attraction” between successive repetitions of words. Moreover, this phenomenon is scale-invariant and universal – the pattern is independent of word frequency and is observed in texts by different authors and written in different languages. There is evidence, however, that the shape of the scaling relation changes for words that play a key role in the text, implying the existence of different “universality classes” in the repetition of words. These behaviours exhibit striking parallels with complex catastrophic phenomena.
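Zipf's law can be checked on any corpus by building the rank-frequency table and fitting the decay exponent on a log-log scale. A sketch on a synthetic "text" drawn from a known Zipfian distribution (vocabulary size, exponent and sample size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
V, gamma = 1000, 1.0                       # vocabulary size, Zipf exponent
p = 1.0 / np.arange(1, V + 1) ** gamma
p /= p.sum()
words = rng.choice(V, size=200_000, p=p)   # a synthetic "text"

freq = np.sort(np.bincount(words, minlength=V))[::-1]  # rank-frequency table
ranks = np.arange(1, V + 1)

# Fit f(r) ~ r^(-gamma) over the first 100 ranks by log-log least squares.
slope, _ = np.polyfit(np.log(ranks[:100]), np.log(freq[:100]), 1)
print(round(-slope, 2))  # close to 1.0
```

The abstract's point goes beyond this static law: it concerns the distances between successive repetitions of a word, which a bag-of-words fit like the above cannot see.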


Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to around 70% of the trading volume of some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data sampled at minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to models involving stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck equation and its variations.
A model for forecasting any economic or financial magnitude can be properly defined with scientific rigor but still lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is the reason why we emphasize the calibration process of the strategies' parameters to adapt to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis. No other mathematical or statistical software was used.
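As a sketch of the pairs-trading machinery described above (not the thesis code, which is in MATLAB), one can simulate an Ornstein-Uhlenbeck spread, calibrate it via the standard AR(1) regression, and derive a mean-reversion signal. All parameters and thresholds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, mu, sigma, dt, n = 2.0, 0.0, 0.3, 1 / 252, 100_000

# Exact discretisation of the OU spread dX = theta*(mu - X)dt + sigma dW.
a = np.exp(-theta * dt)
b = sigma * np.sqrt((1 - np.exp(-2 * theta * dt)) / (2 * theta))
x = np.empty(n); x[0] = mu
for t in range(n - 1):
    x[t + 1] = mu + a * (x[t] - mu) + b * rng.standard_normal()

# Calibrate by AR(1) regression: X_{t+1} = c + phi * X_t + eps.
phi, c = np.polyfit(x[:-1], x[1:], 1)
theta_hat = -np.log(phi) / dt      # recovered mean-reversion speed
mu_hat = c / (1 - phi)             # recovered long-run mean
print(round(theta_hat, 1))

# Mean-reversion signal: short when z > 2, long when z < -2 (illustrative).
z = (x - mu_hat) / x.std()
signal = np.where(z > 2, -1, np.where(z < -2, 1, 0))
```

In practice the spread comes from a co-integrating regression between two assets rather than a simulation, and, as the abstract stresses, the calibration must be redone frequently because the fitted parameters drift.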


This paper studies the limits of discrete-time repeated games with public monitoring. We solve and characterize the Abreu, Milgrom and Pearce (1991) problem. We find that for the "bad" ("good") news model the lower (higher) magnitude events suggest cooperation, i.e., zero punishment probability, while the higher (lower) magnitude events suggest defection, i.e., punishment with probability one. Public correlation is used to connect these two sets of signals and to make the enforceability constraint bind. The dynamic and limit behavior of the punishment probabilities for variations in ... (the discount rate) and ... (the time interval) are characterized, as well as the limit payoffs for all these scenarios (we also introduce uncertainty in the time domain). The obtained ... limits are, to the best of my knowledge, new. The obtained ... limits coincide with Fudenberg and Levine (2007) and Fudenberg and Olszewski (2011), with the exception that we clearly state the precise informational conditions that cause the limit to converge from above, to converge from below, or to degenerate. JEL: C73, D82, D86. KEYWORDS: Repeated Games, Frequent Monitoring, Random Public Monitoring, Moral Hazard, Stochastic Processes.


This paper focuses on the problem of locating single-phase faults in mixed distribution electric systems, with overhead lines and underground cables, using voltage and current measurements at the sending end and a sequence model of the network. Since calculating the series impedance of underground cables is not as simple as in the case of overhead lines, the paper proposes a methodology to obtain an estimate of the zero-sequence impedance of underground cables starting from previous single-phase faults that occurred in the system, in which an electric arc occurred at the fault location. For this reason, the signal is first pretreated to eliminate its voltage peaks, so that the analysis can work with a signal as close to a sine wave as possible.
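The zero-sequence quantities involved come from the standard Fortescue transform of the measured phase phasors. A minimal sketch with invented phasor values (the paper's actual estimation procedure over recorded faults is more elaborate):

```python
import numpy as np

def sequence_components(ph):
    """Fortescue transform: phase phasors (a, b, c) -> (zero, pos, neg)."""
    a = np.exp(2j * np.pi / 3)   # 120-degree rotation operator
    A = np.array([[1, 1, 1],
                  [1, a, a**2],
                  [1, a**2, a]]) / 3
    return A @ ph

# Hypothetical phasors recorded at the sending end during a phase-a fault.
V = np.array([0.35 * np.exp(-0.4j), np.exp(-2.0j), np.exp(2.1j)])
I = np.array([2.1 * np.exp(-1.1j), 0.1 + 0j, 0.1j])

V0, _, _ = sequence_components(V)
I0, _, _ = sequence_components(I)
Z0 = V0 / I0   # zero-sequence impedance seen from the measurement point
print(np.round(Z0, 3))
```

During an unbalanced single-phase fault the zero-sequence components are non-negligible, which is what makes this ratio informative; for a balanced set the zero-sequence component vanishes.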


As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent (essential zeros) or because it is below the detection limit (rounded zeros). Because the second kind of zeros is usually understood as "a trace too small to measure", it seems reasonable to replace them by a suitable small value, and this has been the traditional approach. As stated, e.g., by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts (and thus the metric properties) should be preserved, as otherwise further analysis of subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003), where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is "natural" in the sense that it recovers the "true" composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved.
As a generalization of the multiplicative replacement, in the same paper a substitution method for missing values in compositional data sets is introduced.
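The multiplicative replacement discussed above has a very compact form: rounded zeros are set to a small value delta, and only the non-zero parts are rescaled, so the ratios among them survive. A minimal sketch for a composition with total 1 (a constant delta is an illustrative simplification; in general each part may get its own detection limit):

```python
import numpy as np

def multiplicative_replacement(x, delta):
    """Replace rounded zeros by delta and rescale only the non-zero parts,
    preserving ratios among them (and hence the covariance structure of
    zero-free subcompositions), in the spirit of the multiplicative approach."""
    x = np.asarray(x, dtype=float)
    zeros = x == 0
    return np.where(zeros, delta, x * (1 - delta * zeros.sum()))

comp = np.array([0.1, 0.0, 0.6, 0.3])   # sums to 1, one rounded zero
repl = multiplicative_replacement(comp, delta=0.01)
print(repl, repl.sum())
```

Note that the ratio of any two non-zero parts, e.g. `repl[0] / repl[2]`, equals the original `comp[0] / comp[2]`, which is exactly the property the additive replacement fails to deliver.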


The log-ratio methodology makes available powerful tools for analyzing compositional data. Nevertheless, the use of this methodology is only possible for those data sets without null values. Consequently, in those data sets where zeros are present, a previous treatment becomes necessary. Recent advances in the treatment of compositional zeros have centered especially on zeros of a structural nature and on rounded zeros. These tools do not contemplate the particular case of count compositional data sets with null values. In this work we deal with "count zeros" and we introduce a treatment based on a mixed Bayesian-multiplicative estimation. We use the Dirichlet probability distribution as a prior and we estimate the posterior probabilities. Then we apply a multiplicative modification to the non-zero values. We present a case study where this new methodology is applied.
Key words: count data, multiplicative replacement, composition, log-ratio analysis
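A minimal sketch of the Bayesian-multiplicative idea: a Dirichlet prior yields posterior probabilities for the zero counts, and the non-zero parts are then rescaled multiplicatively so their ratios are untouched. The uniform prior weights and the prior strength `s` are illustrative assumptions, not the paper's recommended choices:

```python
import numpy as np

def bayes_mult_replacement(counts, s=1.0):
    """Count-zero replacement: Dirichlet(s*t) prior with uniform weights t,
    posterior-mean values for the zero parts, then a multiplicative rescaling
    of the non-zero parts so that their ratios are preserved."""
    counts = np.asarray(counts, dtype=float)
    n, D = counts.sum(), counts.size
    t = np.full(D, 1.0 / D)                 # uniform prior weights (assumption)
    post = (counts + s * t) / (n + s)       # Dirichlet posterior mean
    zeros = counts == 0
    # Zeros take their posterior value; non-zeros keep their observed ratios.
    return np.where(zeros, post, (counts / n) * (1 - post[zeros].sum()))

x = np.array([12, 0, 5, 0, 83])
p = bayes_mult_replacement(x)
print(np.round(p, 4), np.isclose(p.sum(), 1.0))
```

The result is a zero-free composition that sums to one, with the zero cells shrunk toward the prior rather than fixed at an arbitrary detection-limit fraction.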


This paper estimates the effect of piracy attacks on shipping costs using a unique data set on shipping contracts in the dry bulk market. We look at shipping routes whose shortest path exposes them to piracy attacks and find that the increase in attacks in 2008 led to around a ten percent increase in shipping costs. We use this estimate to get a sense of the welfare loss imposed by piracy. Our intermediate estimate suggests that the creation of $120 million of revenue for pirates in the Somalia area led to a welfare loss of over $1.5 billion.


In this paper a one-phase supercooled Stefan problem, with a nonlinear relation between the phase-change temperature and the front velocity, is analysed. The model with the standard linear approximation, valid for small supercooling, is first examined asymptotically. The nonlinear case is more difficult to analyse and only two simple asymptotic results are found. We then apply an accurate heat balance integral method to make further progress. Finally, we compare the results against numerical solutions. The results show that for large supercooling the linear model may be highly inaccurate and even qualitatively incorrect. Similarly, as the Stefan number β → 1⁺, the classic Neumann solution, which exists down to β = 1, is far from the linear and nonlinear supercooled solutions and can significantly overpredict the solidification rate.