105 results for Spectral Method
Abstract:
We present a new method for constructing exact distribution-free tests (and confidence intervals) for variables that can generate more than two possible outcomes. This method separates the search for an exact test from the goal of creating a non-randomized test. Randomization is used to extend any exact test relating to means of variables with finitely many outcomes to variables with outcomes belonging to a given bounded set. Tests in terms of variance and covariance are reduced to tests relating to means. Randomness is then eliminated in a separate step. This method is used to create confidence intervals for the difference between two means (or variances) and tests of stochastic inequality and correlation.
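An illustration of the randomization step described above (a minimal sketch of mine, using the classical trick of replacing a bounded outcome by a Bernoulli draw with the same mean so that an exact binomial test applies; the function name and test setup are not from the paper):

```python
import numpy as np
from scipy.stats import binomtest

def randomized_binomial_test(x, mu0, rng=None):
    """Randomization trick: each observation x_i in [0, 1] is replaced by a
    Bernoulli(x_i) draw, which has the same mean, so an exact binomial test
    of H0: mean == mu0 applies to the binarized sample."""
    rng = np.random.default_rng(rng)
    b = rng.binomial(1, x)                # Bernoulli(x_i) for each observation
    return binomtest(int(b.sum()), n=len(x), p=mu0)

# Example: exact test of H0: E[X] = 0.5 for outcomes confined to [0, 1].
x = np.random.default_rng(0).beta(2, 2, size=200)
print(randomized_binomial_test(x, mu0=0.5).pvalue)
```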
Abstract:
Models incorporating more realistic representations of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program, called the CDLP, which has an exponential number of columns. However, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper we propose a new approach to solving CDLP, called SDCP, based on segments and their consideration sets. SDCP is a relaxation of CDLP and hence forms a looser upper bound on the dynamic program, but coincides with CDLP in the case of non-overlapping segments. If the number of elements in a consideration set for a segment is not very large, SDCP can be applied to any discrete-choice model of consumer behavior. We tighten the SDCP bound (i) by simulation, called the randomized concave programming (RCP) method, and (ii) by adding cuts to a recent compact formulation of the problem for a latent multinomial-choice model of demand (SBLP+). This latter approach turns out to be very effective, essentially attaining the CDLP value and excellent revenue performance in simulations, even for overlapping segments. By formulating the problem as a separation problem, we give insight into why CDLP is easy for the MNL with non-overlapping consideration sets and why generalizations of MNL pose difficulties. We perform numerical simulations to determine the revenue performance of all the methods on reference data sets from the literature.
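For reference, the multinomial logit (MNL) choice model mentioned above assigns choice probabilities as follows (the standard formula, not this paper's specific notation):

```latex
% Probability that a customer picks product j from offer set S under MNL,
% with preference weight v_k for product k and v_0 for the no-purchase option:
P_j(S) = \frac{v_j}{v_0 + \sum_{k \in S} v_k}, \qquad j \in S .
```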
Abstract:
The Person Trade-Off (PTO) is a methodology aimed at measuring the social value of health states. Other methodologies measure individual utility and are less appropriate for resource allocation decisions. However, few studies have been conducted to test the validity of the method. We present a pilot study with this objective. The study is based on the results of interviews with 30 undergraduate students in Economics. We judge the validity of PTO answers by their consistency with three rationality hypotheses. First, we show that, given certain rationality assumptions, PTO answers should be predictable from answers to Standard Gamble questions. This first hypothesis is not verified. The second hypothesis is that PTO answers should not vary across different framings of equivalent PTO questions. This second hypothesis is also not verified. Our third hypothesis is that PTO values should predict social preferences for allocating resources between patients. This hypothesis is verified. The evidence on the validity of the method is thus conflicting.
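One standard way to formalize the first hypothesis, under a utilitarian aggregation assumption (my reading, not a formula quoted from the paper): with u_X the Standard Gamble utility of health state X, curing N_A patients in state A should be judged socially equivalent to curing N_B patients in state B when the aggregate utility gains match:

```latex
% Utilitarian consistency condition linking PTO answers to SG utilities
% (assumption: a cure restores full health, utility 1):
N_A\,(1 - u_A) = N_B\,(1 - u_B)
\quad\Longrightarrow\quad
\frac{N_A}{N_B} = \frac{1 - u_B}{1 - u_A} .
```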
Abstract:
Power transformations of positive data tables, applied prior to the correspondence analysis algorithm, are shown to open up a family of methods with direct connections to the analysis of log-ratios. Two variations of this idea are illustrated. The first approach is simply to power the original data and perform a correspondence analysis; this method is shown to converge to unweighted log-ratio analysis as the power parameter tends to zero. The second approach is to apply the power transformation to the contingency ratios, that is, the values in the table relative to expected values based on the marginals; this method converges to weighted log-ratio analysis, or the spectral map. Two applications are described: first, a matrix of population genetic data which is inherently two-dimensional; and second, a larger cross-tabulation with higher dimensionality, from a linguistic analysis of several books.
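A numerical sketch of the first approach (my own check, not code from the paper: correspondence analysis of the powered table, with singular values rescaled by 1/alpha, approaches unweighted log-ratio analysis as alpha tends to zero; the extra 1/sqrt(IJ) scaling comes from a first-order expansion of mine):

```python
import numpy as np

def ca(N):
    """Correspondence analysis: SVD of the standardized residuals."""
    P = N / N.sum()
    r, c = P.sum(1), P.sum(0)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    return np.linalg.svd(S, full_matrices=False)

def ulra(N):
    """Unweighted log-ratio analysis: SVD of the double-centred log table."""
    L = np.log(N)
    L = L - L.mean(0) - L.mean(1, keepdims=True) + L.mean()
    return np.linalg.svd(L, full_matrices=False)

N = np.array([[10., 20., 30.], [15., 5., 40.], [30., 25., 10.]])
for alpha in (1.0, 0.5, 0.1, 0.01):
    # CA of the powered table; dividing the singular values by alpha
    # gives a non-degenerate limit as alpha -> 0.
    print(alpha, ca(N ** alpha)[1][:2] / alpha)
print("ULRA:", ulra(N)[1][:2] / np.sqrt(N.size))   # matching scale 1/sqrt(IJ)
```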
Abstract:
This paper studies the rate of convergence of an appropriate discretization scheme for the solution of the McKean-Vlasov equation introduced by Bossy and Talay. More specifically, we consider approximations of the distribution and of the density of the solution of the stochastic differential equation associated with the McKean-Vlasov equation. The scheme adopted here is a mixed one: Euler / weakly interacting particle system. If $n$ is the number of weakly interacting particles and $h$ is the uniform step of the time discretization, we prove that the rate of convergence of the distribution functions of the approximating sequence, in the $L^1(\Omega\times \Bbb R)$ norm and in the sup norm, is of the order of $\frac{1}{\sqrt{n}} + h$, while for the densities it is of the order $h + \frac{1}{\sqrt{nh}}$. This result is obtained by carefully employing techniques of Malliavin calculus.
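A minimal sketch of such a mixed Euler / interacting-particle scheme (the drift and diffusion coefficients below are hypothetical choices of mine, not the ones analyzed in the paper):

```python
import numpy as np

def euler_particle_scheme(n=1000, h=0.01, T=1.0, seed=0):
    """Mixed Euler / weakly-interacting-particle scheme (illustrative).
    Hypothetical coefficients: drift b(x, mu) = E_mu[Y] - x (reversion to the
    current empirical mean) and constant diffusion sigma = 1."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(n)            # i.i.d. initial particles
    for _ in range(int(T / h)):
        drift = X.mean() - X              # empirical measure stands in for the law
        X = X + drift * h + np.sqrt(h) * rng.standard_normal(n)
    return X

# The empirical distribution of the n particles approximates the law of the
# solution at time T; the paper's rates are 1/sqrt(n) + h (distributions).
X_T = euler_particle_scheme()
print(X_T.mean(), X_T.std())
```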
Abstract:
We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to spectral mapping, a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
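A sketch of weighted log-ratio analysis as described (my own implementation of the standard recipe: log-transform, double-centre with the row and column masses as weights, then a weighted SVD; not code from the paper):

```python
import numpy as np

def weighted_lra(N):
    """Weighted log-ratio analysis (spectral map), as a sketch: weighted
    double-centring of log(P) followed by the SVD of the mass-weighted
    residual matrix."""
    P = N / N.sum()
    r, c = P.sum(1), P.sum(0)             # row and column masses as weights
    L = np.log(P)
    L = L - (r @ L) - (L @ c)[:, None] + r @ L @ c   # weighted double-centring
    S = np.sqrt(r)[:, None] * L * np.sqrt(c)         # mass-weighted residuals
    return np.linalg.svd(S, full_matrices=False)[1]  # singular values

N = np.array([[10., 20., 30.], [15., 5., 40.], [30., 25., 10.]])
print(weighted_lra(N))
```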
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted log-ratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
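The prototypical linking parametrization is the Box-Cox-type power family, whose limit as the power tends to zero recovers the logarithm (a standard identity; the abstract's parametrizations may differ in detail):

```latex
% Power family interpolating between a power-transformed analysis and its
% log-ratio counterpart as the "movie" parameter alpha shrinks to zero:
f_\alpha(x) = \frac{x^\alpha - 1}{\alpha}
\;\xrightarrow{\;\alpha \to 0\;}\; \log x .
```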
Abstract:
In this paper I explore the issue of nonlinearity (both in the data generation process and in the functional form that establishes the relationship between the parameters and the data) as it bears on the poor performance of the Generalized Method of Moments (GMM) in small samples. To this end I build a sequence of models, starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small-sample distribution of the GMM estimators in each of the models.
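A minimal sketch of this kind of Monte Carlo exercise for the simplest, just-identified linear case (the design, instrument strength, and sample sizes below are hypothetical, not the paper's):

```python
import numpy as np

def gmm_mc(n=50, reps=2000, beta=1.0, seed=0):
    """Monte Carlo for a just-identified linear GMM estimator.
    Hypothetical design: y = beta * x + e, instrument z correlated with x,
    moment condition E[z (y - x beta)] = 0, so beta_hat = (z'y) / (z'x)."""
    rng = np.random.default_rng(seed)
    est = np.empty(reps)
    for r in range(reps):
        z = rng.standard_normal(n)
        x = 0.5 * z + rng.standard_normal(n)   # instrument relevance
        y = beta * x + rng.standard_normal(n)
        est[r] = (z @ y) / (z @ x)             # sample moment solved exactly
    return est

# Inspect the small-sample distribution of the estimator.
est = gmm_mc()
print(est.mean(), np.percentile(est, [2.5, 97.5]))
```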
Abstract:
The experiential sampling method (ESM) was used to collect data from 74 part-time students who described and assessed the risks involved in their current activities when interrupted at random moments by text messages. The major categories of perceived risk were short-term in nature and involved loss of time or materials related to work, and physical damage (e.g., from transportation). Using techniques of multilevel analysis, we demonstrate effects of gender, emotional state, and type of risk on assessments of risk. Specifically, females do not differ from males in assessing the potential severity of risks, but they see these as more likely to occur. Also, participants assessed risks to be lower when in more positive self-reported emotional states. We further demonstrate the potential of ESM by showing that risk assessments associated with current actions exceed those made retrospectively. We conclude by noting advantages and disadvantages of ESM for collecting data about risk perceptions.
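A hedged sketch of the multilevel (mixed-effects) analysis described, using statsmodels; the file name and column names are hypothetical placeholders, not the study's data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout: repeated risk assessments nested within participants,
# with gender, momentary affect, and risk type as predictors.
df = pd.read_csv("esm_risk.csv")  # hypothetical file
model = smf.mixedlm("risk_likelihood ~ gender + affect + risk_type",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```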
Abstract:
We address the problem of scheduling a multi-station multiclass queueing network (MQNET) with server changeover times to minimize steady-state mean job holding costs. We present new lower bounds on the best achievable cost that emerge as the values of mathematical programming problems (linear, semidefinite, and convex) over relaxed formulations of the system's achievable performance region. The constraints on achievable performance defining these formulations are obtained by formulating the system's equilibrium relations. Our contributions include: (1) a flow conservation interpretation and closed formulae for the constraints previously derived by the potential function method; (2) new work decomposition laws for MQNETs; (3) new constraints (linear, convex, and semidefinite) on the performance region of first and second moments of queue lengths for MQNETs; (4) a fast bound for an MQNET with N customer classes, computed in N steps; (5) two heuristic scheduling policies: a priority-index policy, and a policy extracted from the solution of a linear programming relaxation.
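As a concrete stand-in for a priority-index policy, here is the classical c-mu rule (shown for illustration only; the paper's index is derived from its relaxations and will differ):

```python
def c_mu_priority(queues):
    """Classical c-mu priority index: among non-empty classes, serve the one
    with the largest holding-cost rate times service rate.
    `queues` maps class -> (queue_length, holding_cost_c, service_rate_mu)."""
    candidates = {k: c * mu for k, (q, c, mu) in queues.items() if q > 0}
    return max(candidates, key=candidates.get) if candidates else None

# Example: class 'b' is served first, since 4.0 * 0.5 > 1.0 * 1.5.
print(c_mu_priority({"a": (3, 1.0, 1.5), "b": (2, 4.0, 0.5)}))
```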
Abstract:
We continue the development of a method for the selection of a bandwidth or of a number of design parameters in density estimation. We provide explicit non-asymptotic density-free inequalities that relate the $L_1$ error of the selected estimate with that of the best possible estimate, and study in particular the connection between the richness of the class of density estimates and the performance bound. For example, our method allows one to pick the bandwidth and kernel order in the kernel estimate simultaneously and still assure that, for {\it all densities}, the $L_1$ error of the corresponding kernel estimate is not larger than about three times the error of the estimate with the optimal smoothing factor and kernel, plus a constant times $\sqrt{\log n/n}$, where $n$ is the sample size and the constant depends only on the complexity of the family of kernels used in the estimate. Further applications include multivariate kernel estimates, transformed kernel estimates, and variable kernel estimates.
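A crude data-splitting surrogate for L1-oriented bandwidth selection (emphatically not the paper's method, which rests on non-asymptotic density-free inequalities; this sketch only conveys the flavour of picking h by an empirical L1 criterion):

```python
import numpy as np

def kde(grid, data, h):
    """Gaussian kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(1) / (len(data) * h * np.sqrt(2 * np.pi))

def select_bandwidth(data, bandwidths, seed=0):
    """Fit on one half of the sample, score each h by the L1 distance to a
    pilot estimate built from the held-out half, return the minimiser."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    train, hold = data[idx[: len(data) // 2]], data[idx[len(data) // 2 :]]
    grid = np.linspace(data.min() - 1, data.max() + 1, 512)
    dx = grid[1] - grid[0]
    ref = kde(grid, hold, 1.06 * hold.std() * len(hold) ** -0.2)  # pilot rule
    errs = [np.abs(kde(grid, train, h) - ref).sum() * dx for h in bandwidths]
    return bandwidths[int(np.argmin(errs))]

data = np.random.default_rng(1).standard_normal(400)
print(select_bandwidth(data, np.linspace(0.05, 1.0, 20)))
```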
Abstract:
This paper describes an optimized model to support QoS by means of congestion minimization on LSPs (Label Switched Paths). To build this model, we start from a CFA (Capacity and Flow Allocation) model. As this model does not consider the buffer size when calculating the capacity cost, our model, named BCA (Buffer Capacity Allocation), takes this issue into account and improves on CFA's performance. To test our proposal, we ran several simulations; the results show that the BCA model minimizes LSP congestion and distributes flows uniformly over the network.
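For context, the classical capacity-assignment flavour of CFA can be sketched numerically as follows (hypothetical flows, costs, and budget; the paper's BCA model additionally accounts for buffer size, which this sketch does not):

```python
import numpy as np
from scipy.optimize import minimize

# Choose link capacities C_i >= flow f_i minimising total M/M/1-style delay
# sum_i f_i / (C_i - f_i) under a linear capacity budget sum_i d_i C_i <= D.
f = np.array([3.0, 5.0, 2.0])       # hypothetical LSP flows
d = np.array([1.0, 1.0, 2.0])       # hypothetical per-unit capacity costs
D = 30.0                            # hypothetical capacity budget

res = minimize(
    lambda C: np.sum(f / (C - f)),                              # total delay
    x0=f + 2.0,                                                 # feasible start
    constraints=[{"type": "ineq", "fun": lambda C: D - d @ C}], # budget
    bounds=[(fi + 1e-6, None) for fi in f],                     # stability
)
print(res.x)
```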
Abstract:
This letter discusses the detection and correction of residual motion errors that appear in airborne synthetic aperture radar (SAR) interferograms due to the lack of precision in the navigation system. As is shown, the effect of this lack of precision is twofold: azimuth registration errors and azimuth phase undulations. Up to now, the former were corrected by estimating the registration error and interpolating, while the latter were corrected by estimating the azimuth phase undulations and compensating the phase of the computed interferogram. In this letter, a new correction method is proposed which avoids the interpolation step and corrects the azimuth phase undulations at the same time. Additionally, the spectral diversity technique, used to estimate registration errors, is critically analyzed. Airborne L-band repeat-pass interferometric data from the German Aerospace Center (DLR) experimental airborne SAR are used to validate the method.
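A one-dimensional numerical sketch of the spectral diversity principle (split the spectrum into two looks and read the misregistration off the phase difference of the sub-band interferograms); this illustrates the general technique, not the letter's processing chain:

```python
import numpy as np

def spectral_diversity_shift(s1, s2, dt=1.0):
    """Estimate a residual shift between two complex signals: split their
    spectra into low and high sub-bands, form the two sub-band interferograms,
    and recover the shift as dphi / (2 * pi * df), where df is the separation
    of the sub-band centre frequencies."""
    n = len(s1)
    f = np.fft.fftfreq(n, dt)
    S1, S2 = np.fft.fft(s1), np.fft.fft(s2)
    low, high = f < 0, f >= 0                       # two spectral looks
    sub = lambda S, m: np.fft.ifft(np.where(m, S, 0))
    i_low = sub(S1, low) * np.conj(sub(S2, low))    # sub-band interferograms
    i_high = sub(S1, high) * np.conj(sub(S2, high))
    dphi = np.angle(np.sum(i_low * np.conj(i_high)))
    df = f[low].mean() - f[high].mean()             # centre-frequency separation
    return dphi / (2 * np.pi * df)
```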
Abstract:
We construct spectral sequences in the framework of Baues-Wirsching cohomology and homology for functors between small categories, and analyze particular cases including Grothendieck fibrations. We also give applications to more classical cohomology and homology theories, including Hochschild-Mitchell cohomology and theories studied earlier by Watts, Roos, Quillen and others.
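Schematically, such spectral sequences follow the familiar two-stage pattern below (an analogy with the Leray-Serre shape, stated only as orientation; the paper's precise E_2 terms are constructed in Baues-Wirsching cohomology):

```latex
% Cohomology of the base with fibrewise-cohomology coefficients,
% converging to the cohomology of the total category:
E_2^{p,q} \;=\; H^p\bigl(\mathcal{B},\ \mathcal{H}^q\bigr)
\;\Longrightarrow\; H^{p+q}(\mathcal{E}) .
```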