138 results for Strictly Hyperbolic Polynomial
Abstract:
Background: We address the problem of studying recombinational variations in (human) populations. In this paper, our focus is on one computational aspect of the general task: given two networks G1 and G2, with both mutation and recombination events, defined on overlapping sets of extant units, the objective is to compute a consensus network G3 with the minimum number of additional recombinations. We describe a polynomial time algorithm with a guarantee that the number of computed new recombination events is within ϵ = sz(G1, G2) of the optimal number of recombinations, where sz is a well-behaved function of the sizes and topologies of G1 and G2. To date, this is the best known result for a network consensus problem. Results: Although the network consensus problem can be applied to a variety of domains, here we focus on the structure of human populations. With our preliminary analysis on a segment of the human Chromosome X data, we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. These results have been verified independently using traditional manual procedures. To the best of our knowledge, this is the first recombinations-based characterization of human populations. Conclusion: We show that our mathematical model identifies recombination spots in the individual haplotypes; the aggregate of these spots over a set of haplotypes defines a recombinational landscape that has enough signal to detect the continental as well as the population divide based on a short segment of Chromosome X. In particular, we are able to infer ancient recombinations, population-specific recombinations and more, which also support the widely accepted 'Out of Africa' model. The agreement with mutation-based analysis can be viewed as an indirect validation of our results and the model. Since the model in principle gives us more information embedded in the networks, in future work we plan to investigate more non-traditional questions via these structures computed by our methodology.
Abstract:
The pseudo-spectral time-domain (PSTD) method is an alternative time-marching method to classical leapfrog finite difference schemes in the simulation of wave-like propagating phenomena. It is based on the fundamentals of the Fourier transform to compute the spatial derivatives of hyperbolic differential equations. Therefore, it results in an isotropic operator that can be implemented in an efficient way for room acoustics simulations. However, one of the first issues to be solved consists in modeling wall absorption. Unfortunately, there are no references in the technical literature concerning that problem. In this paper, assuming real and constant locally reacting impedances, several proposals to overcome this problem are presented, validated and compared to analytical solutions in different scenarios.
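The abstract above describes PSTD only at a high level. As a point of reference, the core operation it alludes to, computing spatial derivatives via the Fourier transform, can be sketched as follows; this is a generic spectral-derivative snippet assuming a periodic, uniformly sampled grid, and the helper name fourier_derivative is illustrative rather than taken from the paper.

```python
import numpy as np

def fourier_derivative(u, dx):
    """Spectral approximation of du/dx on a periodic, uniformly sampled grid:
    multiply the spectrum by i*k and transform back. This is the isotropic
    derivative operator that PSTD-type schemes use in place of finite differences."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
```

In a leapfrog-style time march, the pressure and velocity updates would each apply such an operator once per spatial direction; the wall-absorption modeling that the paper addresses does not appear in this sketch.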
Abstract:
In the context of fading channels it is well established that, with a constrained transmit power, the bit rates achievable by signals that are not peaky vanish as the bandwidth grows without bound. Stepping back from the limit, we characterize the highest bit rate achievable by such non-peaky signals and the approximate bandwidth where that apex occurs. As it turns out, the gap between the highest rate achievable without peakedness and the infinite-bandwidth capacity (with unconstrained peakedness) is small for virtually all settings of interest to wireless communications. Thus, although strictly achieving capacity in wideband fading channels does require signal peakedness, bit rates not far from capacity can be achieved with conventional signaling formats that do not exhibit the serious practical drawbacks associated with peakedness. In addition, we show that the asymptotic decay of bit rate in the absence of peakedness usually takes hold at bandwidths so large that wideband fading models are called into question. Rather, ultrawideband models ought to be used.
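For context, the infinite-bandwidth benchmark mentioned here is the classical wideband limit; the expression below is the standard textbook form (average received power $P$, noise spectral density $N_0$) and is added for orientation rather than quoted from the paper:

$$ C_\infty \;=\; \lim_{B \to \infty} C(B) \;=\; \frac{P}{N_0 \ln 2} \quad \text{bits/s}, $$

which coincides with the unfaded AWGN limit but, over fading channels, is approached as the bandwidth grows only by increasingly peaky (flash) signaling, as the abstract notes.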
Abstract:
Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (non-threshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second one is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the considered open problem is to determine whether all self-dual vector space access structures are in this situation. By the aforementioned connection, this in fact constitutes an open problem about matroid theory, since it can be restated in terms of representability of identically self-dual matroids by self-dual codes. A new concept is introduced, the flat-partition, which provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, forms the next class in the above classification: the identically self-dual bipartite matroids.
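For orientation, the multiplication properties discussed above are usually stated as follows; this paraphrases the standard definitions rather than this paper's own notation, and the symbol $\diamond$ for the vector of local share products is illustrative. In an LSSS each player $i$ holds a share vector $\mathbf{c}_i$ that depends linearly on the secret and the randomness. The scheme is multiplicative if there is a fixed recombination vector $\mathbf{r}$ such that, for any two sharings $\mathbf{c}, \mathbf{c}'$ of secrets $s, s'$,

$$ s\,s' \;=\; \langle\, \mathbf{r},\; \mathbf{c} \diamond \mathbf{c}' \,\rangle, $$

where $\mathbf{c} \diamond \mathbf{c}'$ lists, player by player, the products of player $i$'s coordinates of $\mathbf{c}$ with player $i$'s coordinates of $\mathbf{c}'$. It is strongly multiplicative if the analogous identity can be formed using only the coordinates held by the complement of any set in the adversary structure.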
Abstract:
In this paper we propose a subsampling estimator for the distribution of statistics diverging at known rates when the underlying time series is strictly stationary and strong mixing. Based on our results we provide a detailed discussion of how to estimate extreme order statistics with dependent data and present two applications to assessing financial market risk. Our method performs well in estimating Value at Risk and provides a superior alternative to Hill's estimator in operationalizing Safety First portfolio selection.
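For context, the Hill estimator against which the method is benchmarked is the standard tail-index estimator; a minimal sketch follows (the subsampling procedure itself is not reproduced, and the function name hill_estimator is illustrative).

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index alpha from the k largest observations
    of a positive, heavy-tailed sample x (requires 1 <= k < len(x))."""
    x = np.sort(np.asarray(x, dtype=float))
    log_excess = np.log(x[-k:]) - np.log(x[-k - 1])  # log spacings above X_(n-k)
    return 1.0 / log_excess.mean()                   # 1/alpha_hat is their mean
```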
Abstract:
Most US credit card holders revolve high-interest debt, often combined with substantial (i) asset accumulation by retirement, and (ii) low-rate liquid assets. Hyperbolic discounting can resolve only the former puzzle (Laibson et al., 2003). Bertaut and Haliassos (2002) proposed an 'accountant-shopper' framework for the latter. The current paper builds, solves, and simulates a fully specified accountant-shopper model to show that this framework can actually generate both types of co-existence, as well as target credit card utilization rates consistent with Gross and Souleles (2002). The benchmark model is compared to setups without self-control problems, with alternative mechanisms, and with impatient but fully rational shoppers.
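For reference, the hyperbolic-discounting mechanism invoked here is typically modeled with quasi-hyperbolic (beta-delta) preferences in the style of Laibson; the standard formulation below is added for orientation and is not quoted from the paper:

$$ U_t \;=\; u(c_t) \;+\; \beta \sum_{k=1}^{\infty} \delta^{k}\, u(c_{t+k}), \qquad 0 < \beta \le 1,\; 0 < \delta < 1, $$

where $\beta < 1$ makes the immediate present disproportionately attractive; this is the self-control channel that the accountant-shopper framework then separates from the household's long-run saving decisions.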
Abstract:
We extend Aumann's theorem [Aumann 1987], deriving correlated equilibria as a consequence of common priors and common knowledge of rationality, by explicitly allowing for non-rational behavior. We replace the assumption of common knowledge of rationality with a substantially weaker one, joint p-belief of rationality, where agents believe the other agents are rational with probability p or more. We show that behavior in this case constitutes a kind of correlated equilibrium satisfying certain p-belief constraints, that it varies continuously in the parameter p, and that, for p sufficiently close to one, it is with high probability supported on strategies that survive the iterated elimination of strictly dominated strategies. Finally, we extend the analysis to characterizing rational expectations of interim types, to games of incomplete information, as well as to the case of non-common priors.
Abstract:
Stare decisis allows common law to develop gradually and incrementally. We show how judge-made law can steadily evolve and tend to increase efficiency even in the absence of new information. Judges' opinions must argue that their decisions are consistent with precedent: this is the more costly, the greater the innovation they are introducing. As a result, each judge effects a cautious marginal change in the law. Alternative models in which precedents are either strictly obeyed or totally discarded would instead predict abrupt large swings in legal rules. Thus we find that the evolution of case law is grounded not in binary logic fixing judges' constraints, but in costly rhetoric shaping their incentives. We apply this finding to an assessment of the role of analogical reasoning in shaping the joint development of different areas of law.
Abstract:
A 'next' operator, s, is built on the set R1 = (0,1] \ {1 - 1/e}, defining a partial order that, with the help of the axiom of choice, can be extended to a total order on R1. Besides, the orbits {s^n(a)}_n are all dense in R1 and are constituted by elements of the same arithmetical character: if a is an algebraic irrational of degree k, all the elements in a's orbit are algebraic of degree k; if a is transcendental, all are transcendental. Moreover, the asymptotic distribution function of the sequence formed by the elements in any of the half-orbits is a continuous, strictly increasing, singular function very similar to the well-known Minkowski ?(x) function.
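The Minkowski ?(x) function is mentioned above only as a point of comparison; for readers unfamiliar with it, a minimal numerical sketch based on its classical continued-fraction series is given below (the function name and truncation parameters are illustrative).

```python
def minkowski_question_mark(x, max_terms=60):
    """Approximate Minkowski's ?(x) for x in (0, 1) using the classical series
    ?(x) = 2 * sum_{k>=1} (-1)**(k+1) * 2**-(a1+...+ak),
    where [0; a1, a2, ...] is the continued fraction expansion of x."""
    total, exponent, sign = 0.0, 0, 1.0
    for _ in range(max_terms):
        if x <= 0.0:
            break
        a, x = divmod(1.0 / x, 1.0)   # next partial quotient and remainder
        exponent += int(a)
        if exponent > 60:             # remaining terms are below double precision
            break
        total += sign * 2.0 ** (-exponent)
        sign = -sign
    return 2.0 * total
```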
Abstract:
Traditional economic wisdom says that free entry in a market will drive profits down to zero. This conclusion is usually drawn under the assumption of perfect information. We assume that a priori there exists imperfect information about the profitability of the market, but that potential entrants may learn the demand curve perfectly at negligible cost by engaging in market research. Even if in equilibrium firms learn the demand perfectly, profits may be strictly positive because of insufficient entry. The mere fact that it will not become common knowledge that every entrant has perfect information about demand causes this surprising result. "Belief means doubt. Knowing means certainty." (Introduction to the Kabalah)
Abstract:
That individuals contribute in social dilemma interactions even when contributing is costly is a well-established observation in the experimental literature. Since a contributor is always strictly worse off than a non-contributor, the question arises whether an intrinsic motivation to contribute can survive in an evolutionary setting. Using recent results on the deterministic approximation of stochastic evolutionary dynamics, we give conditions for equilibria with a positive number of contributors to be selected in the long run.
Abstract:
We perform an experiment on a pure coordination game with uncertainty about the payoffs. Our game is closely related to models that have been used in many macroeconomic and financial applications to solve problems of equilibrium indeterminacy. In our experiment each subject receives a noisy signal about the true payoffs. This game has a unique strategy profile that survives the iterative deletion of strictly dominated strategies (and thus a unique Nash equilibrium). The equilibrium outcome coincides, on average, with the risk-dominant equilibrium outcome of the underlying coordination game. The behavior of the subjects converges to the theoretical prediction after enough experience has been gained. The data (and the comments) suggest that subjects do not apply the iterated deletion of dominated strategies through "a priori" reasoning; instead, they adapt to the responses of other players. Thus, the length of the learning phase clearly varies for the different signals. We also test behavior in a game without uncertainty as a benchmark case. The game with uncertainty is inspired by the "global" games of Carlsson and Van Damme (1993).
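The solution concept used above, iterative deletion of strictly dominated strategies, can be illustrated for a finite two-player game as follows; this minimal sketch only checks domination by pure strategies and uses illustrative names (iesds, payoff matrices A and B), so it is an aid to the reader rather than the authors' procedure.

```python
import numpy as np

def iesds(A, B):
    """Iterated elimination of strictly dominated strategies in a bimatrix game.
    A[i, j] is the row player's payoff, B[i, j] the column player's; only
    domination by another pure strategy is checked. Returns surviving indices."""
    rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        for i in list(rows):   # drop rows strictly dominated by a surviving row
            if any(np.all(A[k, cols] > A[i, cols]) for k in rows if k != i):
                rows.remove(i)
                changed = True
        for j in list(cols):   # drop columns strictly dominated by a surviving column
            if any(np.all(B[rows, k] > B[rows, j]) for k in cols if k != j):
                cols.remove(j)
                changed = True
    return rows, cols
```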
Abstract:
In 1952 F. Riesz and Sz.-Nagy published an example of a monotonic continuous function whose derivative is zero almost everywhere, that is to say, a singular function. Moreover, the function was strictly increasing. Their example was built as the limit of a sequence of deformations of the identity function. As an easy consequence of the definition, the derivative, when it existed and was finite, was found to be zero. In this paper we revisit the Riesz-Nagy family of functions and relate it to a system for real number representation which we call (t, t-1) expansions. With the help of these real number expansions we generalize the family. The singularity of the functions is proved through some metrical properties of the expansions used in their definition, which also allow us to give a more precise way of determining when the derivative is 0 or infinity.
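The deformation-of-the-identity construction referred to above can be sketched numerically. The snippet below implements a generic dyadic version of that idea (bend each linear piece at its midpoint with a fixed weight t, t != 1/2), which yields a continuous, strictly increasing singular function; the parameter t and the function name are illustrative and this is not the paper's (t, t-1) expansion machinery.

```python
def singular_function(x, t=0.3, depth=50):
    """Approximate a strictly increasing singular function built as a limit of
    deformations of the identity on [0, 1]: at each stage the linear piece on
    [a, b] is bent at the midpoint, which is mapped to F(a) + t*(F(b) - F(a)).
    For t != 1/2 the limit has derivative 0 almost everywhere."""
    lo, hi = 0.0, 1.0     # dyadic interval currently containing x
    flo, fhi = 0.0, 1.0   # values of F at its endpoints
    for _ in range(depth):
        mid = 0.5 * (lo + hi)
        fmid = flo + t * (fhi - flo)
        if x < mid:
            hi, fhi = mid, fmid
        else:
            lo, flo = mid, fmid
    return 0.5 * (flo + fhi)
```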