45 results for vector error correction model
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The aim of this paper is to analyse whether Spanish municipalities adjust in response to a budget shock and, if so, which budget items bear the adjustment. To answer these questions we use an error correction mechanism, a VECM, estimated with a panel of Spanish municipalities over the period 1988-2006. Our results confirm, first, that municipalities do adjust in the presence of a fiscal shock (that is, the deficit is stationary in the long run). Second, we find that when the shock affects revenues, the adjustment is borne mainly by the municipality through spending cuts, with transfers playing a very limited role in the adjustment process. By contrast, when the shock affects expenditure, the adjustment is shared in similar proportions between the municipality, which raises taxes, and upper-tier governments, which increase transfers. These results suggest that the viability of local public finances is attainable under different institutional settings.
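The error-correction logic described above can be illustrated with a minimal simulation: revenues receive a one-off shock and expenditure and transfers close the resulting deficit at different speeds. The sketch below is not the authors' estimated model; the adjustment coefficients and data are invented for illustration only.

```python
# Minimal sketch (not the authors' estimated model): a stylized error-correction
# mechanism for a municipal budget. Adjustment speeds are illustrative only.
import numpy as np

def simulate_ecm(T=40, rev_shock_at=5, shock_size=-10.0,
                 alpha_exp=0.4, alpha_transf=0.05, seed=0):
    """Simulate revenues, expenditure and transfers after a revenue shock.

    alpha_exp / alpha_transf are error-correction coefficients: the share of
    last period's deficit closed by cutting spending / raising transfers.
    """
    rng = np.random.default_rng(seed)
    rev = np.full(T, 100.0)
    exp_ = np.full(T, 100.0)
    transf = np.zeros(T)
    for t in range(1, T):
        rev[t] = rev[t - 1] + (shock_size if t == rev_shock_at else 0.0) \
                 + rng.normal(0, 0.5)
        deficit = exp_[t - 1] - (rev[t - 1] + transf[t - 1])
        exp_[t] = exp_[t - 1] - alpha_exp * deficit          # spending bears most of the adjustment
        transf[t] = transf[t - 1] + alpha_transf * deficit   # transfers play a minor role
    return rev, exp_, transf

rev, exp_, transf = simulate_ecm()
print("final deficit:", exp_[-1] - (rev[-1] + transf[-1]))  # shrinks toward zero
```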
Abstract:
In this article we examine the suitability of dollarization for Ecuador today. As Ecuador is strongly integrated financially and commercially with the United States, the exchange rate pass-through should be zero. However, we maintain that rising rates of imports from trade partners other than the United States and subsequent real effective exchange rate depreciations are causing the pass-through to move away from zero. Within the framework of the vector error correction model, we analyse the impulse response function and variance decomposition of the inflation variable. We show that the developing economy of Ecuador is importing inflation from its main trading partners, most of them emerging countries with appreciated currencies. We argue that if Ecuador recovered both its monetary and exchange rate instruments it would be able to fight against inflation. We believe such an analysis could be extended to other countries with pegged exchange rate regimes.
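A minimal sketch of how such a VECM analysis could be set up with statsmodels is shown below. The data file and column names are placeholders, not the paper's dataset, and the lag order and cointegration rank are illustrative; the variance decomposition would be obtained analogously from the model's VAR representation.

```python
# Minimal sketch, not the paper's exact specification: fit a VECM and inspect
# impulse responses of inflation. The file and column names are placeholders.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

# df: DataFrame with columns ["inflation", "reer", "import_prices"], e.g. monthly data
df = pd.read_csv("ecuador_macro.csv", index_col=0, parse_dates=True)  # hypothetical file

model = VECM(df[["inflation", "reer", "import_prices"]],
             k_ar_diff=2,         # lags in differences (illustrative)
             coint_rank=1,        # one cointegrating relation (would be tested in practice)
             deterministic="ci")  # constant inside the cointegration relation
res = model.fit()

print(res.summary())
irf = res.irf(24)  # impulse responses over 24 periods
irf.plot(impulse="reer", response="inflation")
```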
Abstract:
Using event-related brain potentials, the time course of error detection and correction was studied in healthy human subjects. A feedforward model of error correction was used to predict the timing properties of the error and corrective movements. Analysis of the multichannel recordings focused on (1) the error-related negativity (ERN) seen immediately after errors in response- and stimulus-locked averages and (2) the lateralized readiness potential (LRP) reflecting motor preparation. Comparison of the onset and time course of the ERN and LRP components showed that the signs of corrective activity preceded the ERN. Thus, error correction was implemented before or at least in parallel with the appearance of the ERN component. Also, the amplitude of the ERN component was increased for errors followed by fast corrective movements. The results are compatible with recent views considering the ERN component as the output of an evaluative system engaged in monitoring motor conflict.
Abstract:
A common way to model multiclass classification problems is by means of Error-Correcting Output Codes (ECOCs). Given a multiclass problem, the ECOC technique designs a code word for each class, where each position of the code identifies the membership of the class in a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code. One of the main requirements of the ECOC design is that the base classifier be capable of splitting each subgroup of classes in each binary problem. However, we cannot guarantee that a linear classifier can model convex regions. Furthermore, nonlinear classifiers also fail to handle some types of decision surfaces. In this paper, we present a novel strategy to model multiclass classification problems using subclass information in the ECOC framework. Complex problems are solved by splitting the original set of classes into subclasses and embedding the binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields better performance when the class overlap or the distribution of the training objects conceals the decision boundaries from the base classifier. The results are even more significant when one has a sufficiently large training set.
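The basic ECOC scheme described above (a code word per class, one binary learner per code position, closest-code decoding) can be illustrated with a short, self-contained example. The sketch below uses a simple one-vs-all code matrix on a toy dataset; it does not implement the subclass-splitting strategy proposed in the paper.

```python
# Basic ECOC illustration (one-vs-all code matrix, Hamming decoding).
# This is a generic sketch of the ECOC framework, not the paper's
# subclass-splitting method.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# Code matrix: one row per class, one column per binary problem.
# Here a simple one-vs-all design; ECOC allows any {-1, +1} matrix.
M = -np.ones((len(classes), len(classes)))
np.fill_diagonal(M, 1)

# Train one binary classifier (dichotomizer) per column of the code matrix.
dichotomizers = []
for j in range(M.shape[1]):
    yj = np.where(M[y, j] > 0, 1, -1)  # relabel samples according to column j
    dichotomizers.append(LogisticRegression(max_iter=1000).fit(X, yj))

# Decode: pick the class whose code word is closest (Hamming distance) to the
# vector of binary predictions.
preds = np.column_stack([clf.predict(X) for clf in dichotomizers])
dist = (preds[:, None, :] != M[None, :, :]).sum(axis=2)
y_hat = classes[dist.argmin(axis=1)]
print("training accuracy:", (y_hat == y).mean())
```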
Abstract:
Based on a behavioral equilibrium exchange rate model, this paper examines the determinants of the real effective exchange rate and evaluates the degree of misalignment of a group of currencies since 1980. Within a panel cointegration setting, we estimate the relationship between the exchange rate and a set of economic fundamentals, such as traded-nontraded productivity differentials and the stock of foreign assets. Having ascertained that the variables are integrated and cointegrated, we estimate the long-run equilibrium values of the fundamentals and use them to derive equilibrium exchange rates and misalignments. Although there is statistical homogeneity, some structural differences were found to exist between advanced and emerging economies.
Abstract:
Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (nonthreshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second one is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the considered open problem is to determine whether all self-dual vector space access structures are in this situation. By the aforementioned connection, this in fact constitutes an open problem about matroid theory, since it can be restated in terms of representability of identically self-dual matroids by self-dual codes. A new concept is introduced, the flat-partition, which provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, is the next class in the above classification: the identically self-dual bipartite matroids.
Abstract:
We present a heuristic method for learning error-correcting output code matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, the optimal codeword separation is sacrificed in favor of maximum class discrimination in the partitions. The hierarchical partition set is created using a binary tree. As a result, a compact matrix with high discrimination power is obtained. Our method is validated using the UCI database and applied to a real problem, the classification of traffic sign images.
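The following sketch illustrates the general idea of deriving an ECOC matrix from a hierarchical binary partition of the classes, with one column per internal node of the tree. The split criterion used here (a crude projection of class means) is only a stand-in for the discriminative criterion optimized in the paper.

```python
# Sketch: build a ternary ECOC matrix from a binary-tree partition of the classes.
# The split rule below is a placeholder, not the paper's discriminative criterion.
import numpy as np

def tree_ecoc_matrix(class_means):
    """Return a {-1, 0, +1} code matrix with one column per internal tree node."""
    classes = list(range(len(class_means)))
    columns = []

    def split(group):
        if len(group) < 2:
            return
        means = np.array([class_means[c] for c in group])
        centroid = means.mean(axis=0)
        # crude partition: project class means onto the direction of the farthest class
        direction = means[np.argmax(np.linalg.norm(means - centroid, axis=1))] - centroid
        order = np.argsort(means @ direction)
        left = [group[i] for i in order[: len(group) // 2]]
        right = [group[i] for i in order[len(group) // 2:]]
        col = np.zeros(len(class_means))
        col[left] = 1    # classes outside this node keep 0 (ignored by the dichotomizer)
        col[right] = -1
        columns.append(col)
        split(left)
        split(right)

    split(classes)
    return np.array(columns).T  # rows: classes, columns: binary problems

means = np.array([[0.0, 0.0], [0.1, 0.2], [3.0, 3.1], [3.2, 2.9]])
print(tree_ecoc_matrix(means))
```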
Abstract:
One of the options under consideration for delivering multimedia content and providing Internet access to groups of mobile users is the use of satellites. The propagation conditions of the mobile channel mean that, one way or another, quality of service must be guaranteed. This is even more important considering that, for Internet access, we cannot tolerate the certain percentage of data loss that is acceptable, for example, in audio or video transmission (by lowering quality). Among the main alternatives for this kind of environment is the inclusion of packet-level coding. This technique works by including redundant packets in the transmission, obtained by means of a given algorithm. The receiver can recover the original information as long as it has received a certain number of packets, similar to the number of original packets. This mechanism is known as packet-level Forward Error Correction (FEC). This report briefly assesses the existing alternatives and explains some of the most important FEC codes. A comparative study of some of them is then carried out: the LDPC (Low Density Parity Check) variants known as LDGM (Low Density Generator Matrix), and Raptor codes.
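The packet-level FEC idea described above can be illustrated with a toy example: a single XOR parity packet protects a block of source packets, so any one lost packet can be rebuilt at the receiver. LDGM and Raptor codes generalize this with many parity packets built from sparse combinations of source packets.

```python
# Toy packet-level FEC: one XOR parity packet protects a block of source packets,
# so any single lost packet can be rebuilt. LDGM and Raptor codes generalize this
# idea with many parity packets and sparse parity equations.
from functools import reduce

def xor_packets(packets):
    """Byte-wise XOR of a list of equal-length packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

source = [b"pkt0....", b"pkt1....", b"pkt2....", b"pkt3...."]  # equal-length packets
parity = xor_packets(source)

# Simulate the loss of packet 2 during transmission.
received = {0: source[0], 1: source[1], 3: source[3], "parity": parity}

# Receiver rebuilds the missing packet by XOR-ing everything it did receive.
recovered = xor_packets([received[0], received[1], received[3], received["parity"]])
assert recovered == source[2]
print("recovered:", recovered)
```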
Abstract:
This paper presents a system to detect and classify binary objects according to their shape. In the first step of the procedure, a filter is applied to extract the contour of the object. From the shape points, a BSM descriptor is obtained with highly descriptive, universal and invariant features. In the second stage of the system, the descriptor information is learned and classified using Adaboost and Error-Correcting Output Codes. Public databases, both grayscale and color, have been used to validate the implementation of the designed system. In addition, the system provides an interactive interface in which different image processing methods can be applied.
Abstract:
This project analyses and optimises the satellite-to-aircraft link of a global aeronautical system. This new system, called ANTARES, is designed to connect aircraft with ground stations through a satellite. It is an initiative involving official aviation institutions such as the ECAC, developed through a European collaboration of universities and companies. The work carried out in the project basically covers three aspects: the design and analysis of resource management; the suitability of using error correction at the link layer and, where necessary, the design of a preliminary coding option; and, finally, the study and analysis of the effect of co-channel interference in multibeam systems. All these topics are considered for the forward link only. The project first presents the overall characteristics of the system and then focuses on and analyses the topics mentioned above in order to provide results and draw conclusions.
Abstract:
Comparison of donor-acceptor electronic couplings calculated within two-state and three-state models suggests that the two-state treatment can provide unreliable estimates of Vda because it neglects multistate effects. We show that in most cases accurate values of the electronic coupling in a π stack, where donor and acceptor are separated by a bridging unit, can be obtained as Ṽda = (E2 − E1)μ12/Rda + (2E3 − E1 − E2)μ13μ23/(2Rda²), where E1, E2, and E3 are the adiabatic energies of the ground, charge-transfer, and bridge states, respectively, μij is the transition dipole moment between states i and j, and Rda is the distance between the planes of donor and acceptor. In this expression, based on the generalized Mulliken-Hush approach, the first term corresponds to the coupling derived within a two-state model, whereas the second term is the superexchange correction accounting for the bridge effect. The formula is extended to bridges consisting of several subunits. The influence of the donor-acceptor energy mismatch on the excess charge distribution, adiabatic dipole and transition moments, and electronic couplings is examined. A diagnostic is developed to determine whether the two-state approach can be applied. Based on numerical results, we show that the superexchange correction considerably improves estimates of the donor-acceptor coupling derived within a two-state approach. In most cases when the two-state scheme fails, the formula gives reliable results which are in good agreement (within 5%) with the data of the three-state generalized Mulliken-Hush model.
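The corrected-coupling expression quoted above can be evaluated directly; the short sketch below does so for placeholder values (atomic units assumed), which are not taken from the paper.

```python
# Direct evaluation of the two-state coupling and its superexchange-corrected
# version, as quoted in the abstract. The numbers below are placeholders in
# atomic units, not results from the paper.
def coupling_two_state(E1, E2, mu12, Rda):
    return (E2 - E1) * mu12 / Rda

def coupling_corrected(E1, E2, E3, mu12, mu13, mu23, Rda):
    superexchange = (2 * E3 - E1 - E2) * mu13 * mu23 / (2 * Rda**2)
    return coupling_two_state(E1, E2, mu12, Rda) + superexchange

V2 = coupling_two_state(E1=0.00, E2=0.05, mu12=0.8, Rda=6.5)
V3 = coupling_corrected(E1=0.00, E2=0.05, E3=0.20, mu12=0.8, mu13=1.1, mu23=1.0, Rda=6.5)
print(f"two-state: {V2:.5f}  corrected: {V3:.5f}")
```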
Abstract:
In this article we present a hybrid approach to the automatic summarization of Spanish medical texts. There are many systems for automatic summarization based on statistics or on linguistics, but only a few combine both techniques. Our idea is that, to produce a good summary, we need to use the linguistic aspects of texts, but we should also take advantage of statistical techniques. We have integrated the Cortex (vector space model) and Enertex (statistical physics) systems coupled with the Yate term extractor, and the Disicosum system (linguistics). We have compared these systems and then integrated them into a hybrid approach. Finally, we have applied this hybrid system to a corpus of medical articles and evaluated its performance, obtaining good results.
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the error on the second half, as well as the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
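The label-flipping equivalence mentioned in the last sentence can be sketched as follows: flipping the labels of one half of the training set and minimizing empirical risk on the modified sample yields the maximal discrepancy between the two halves. In the sketch below a shallow decision tree stands in for an exact empirical risk minimizer, and the dataset is synthetic.

```python
# Sketch of the maximal-discrepancy computation via label flipping: flip the
# labels of the first half, fit a classifier that (approximately) minimizes
# training error on the modified sample, and read the discrepancy off its error
# rate. A decision tree is a stand-in for an exact empirical risk minimizer.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
n_half = len(y) // 2

y_flipped = y.copy()
y_flipped[:n_half] = 1 - y_flipped[:n_half]  # flip labels on the first half

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_flipped)
err_flipped = (clf.predict(X) != y_flipped).mean()

# max_f [ err_half1(f) - err_half2(f) ] = 1 - 2 * min_f err_on_flipped_sample(f)
max_discrepancy = 1 - 2 * err_flipped
print("maximal discrepancy estimate:", max_discrepancy)
```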
Abstract:
This paper presents a model of the Stokes emission vector from the ocean surface. The ocean surface is described as an ensemble of facets with Cox and Munk's (1954) Gram-Charlier slope distribution. The study discusses the impact of different up-wind and cross-wind rms slopes, skewness, peakedness, foam cover models and atmospheric effects on the azimuthal variation of the Stokes vector, as well as the limitations of the model. Simulation results compare favorably, both in mean value and azimuthal dependence, with SSM/I data at 53° incidence angle and with JPL's WINDRAD measurements at incidence angles from 30° to 65°, and at wind speeds from 2.5 to 11 m/s.