111 results for Distance estimation
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
A new parametric minimum distance time-domain estimator for ARFIMA processes is introduced in this paper. The proposed estimator minimizes the sum of squared correlations of the residuals obtained after filtering a series through the ARFIMA parameters. The estimator is easy to compute and is consistent and asymptotically normally distributed for fractionally integrated (FI) processes with an integration order d strictly greater than -0.75. It can therefore be applied to both stationary and non-stationary processes. Deterministic components are also allowed in the DGP. Furthermore, as a by-product, the estimation procedure provides an immediate check on the adequacy of the specified model, because the criterion function, when evaluated at the estimated values, coincides with the Box-Pierce goodness-of-fit statistic. Empirical applications and Monte Carlo simulations supporting the analytical results and showing the good performance of the estimator in finite samples are also provided.
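The criterion described in this abstract is easy to sketch numerically. The following is a minimal illustration, not the authors' code: the function names, the lag cutoff m, and the pure-FI (no AR/MA terms) simplification are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def frac_diff(x, d):
    """Filter a series through (1 - L)^d using the binomial expansion
    pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    # Residual at t is the convolution of the filter weights with the series.
    return np.array([np.dot(w[:t + 1][::-1], x[:t + 1]) for t in range(n)])

def md_criterion(d, x, m):
    """Sum of squared residual autocorrelations up to lag m; evaluated at the
    estimate, n times this quantity is the Box-Pierce statistic."""
    e = frac_diff(x, d)
    e = e - e.mean()
    rho = np.array([np.dot(e[k:], e[:-k]) for k in range(1, m + 1)]) / np.dot(e, e)
    return float(np.sum(rho ** 2))

def estimate_d(x, m=20):
    """Minimum distance estimate of the integration order d."""
    res = minimize_scalar(md_criterion, bounds=(-0.7, 1.5), args=(x, m),
                          method="bounded")
    return res.x
```

For pure white noise the estimate should be close to d = 0; a full ARFIMA(p, d, q) version would additionally filter the series through the AR and MA polynomials before computing the autocorrelations.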
Abstract:
This project studies the deployment of a positioning system inside a building equipped with a WiMAX network. First, the main characteristics of this technology and the configuration of the network we will work with are analyzed, and the positioning methods that exist today are described. Next, the most important aspects of our application are studied: the scenario, distance estimation, and position estimation. Finally, after analyzing the results of several measurements, three methods for computing the position are designed and our procedure is applied in more than 15 different positioning scenarios, with the aim of comparing the results and determining which of the methods achieves the most accurate positioning.
Abstract:
Let a class $\F$ of densities be given. We draw an i.i.d.\ sample from a density $f$ which may or may not be in $\F$. After every $n$, one must make a guess whether $f \in \F$ or not. A class is almost surely testable if there exists a testing sequence such that, for any $f$, we make finitely many errors almost surely. In this paper, several results are given that allow one to decide whether a class is almost surely testable. For example, continuity and square integrability are not testable, but unimodality, log-concavity, and boundedness by a given constant are.
Abstract:
Comparison of donor-acceptor electronic couplings calculated within two-state and three-state models suggests that the two-state treatment can provide unreliable estimates of Vda because it neglects multistate effects. We show that in most cases accurate values of the electronic coupling in a π stack, where donor and acceptor are separated by a bridging unit, can be obtained as Ṽda = (E2 − E1) μ12 / Rda + (2E3 − E1 − E2) 2 μ13 μ23 / Rda², where E1, E2, and E3 are the adiabatic energies of the ground, charge-transfer, and bridge states, respectively, μij is the transition dipole moment between states i and j, and Rda is the distance between the planes of donor and acceptor. In this expression, based on the generalized Mulliken-Hush approach, the first term corresponds to the coupling derived within a two-state model, whereas the second term is the superexchange correction accounting for the bridge effect. The formula is extended to bridges consisting of several subunits. The influence of the donor-acceptor energy mismatch on the excess charge distribution, adiabatic dipole and transition moments, and electronic couplings is examined. A diagnostic is developed to determine whether the two-state approach can be applied. Based on numerical results, we show that the superexchange correction considerably improves estimates of the donor-acceptor coupling derived within the two-state approach. In most cases when the two-state scheme fails, the formula gives reliable results in good agreement (within 5%) with the data of the three-state generalized Mulliken-Hush model.
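The corrected coupling quoted in this abstract is simple arithmetic once the three adiabatic energies, the transition dipoles, and the donor-acceptor distance are known. A hedged sketch (variable names and the consistent-units convention are illustrative, not the paper's code):

```python
def gmh_coupling(E1, E2, E3, mu12, mu13, mu23, Rda):
    """Donor-acceptor coupling as quoted above: the two-state generalized
    Mulliken-Hush term plus the superexchange (bridge) correction.
    All inputs must be in consistent units (e.g. atomic units)."""
    two_state = (E2 - E1) * mu12 / Rda
    superexchange = (2 * E3 - E1 - E2) * 2 * mu13 * mu23 / Rda ** 2
    return two_state + superexchange
```

When μ13 or μ23 vanishes, the correction drops out and the plain two-state value is recovered.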
Abstract:
[spa] A new model for decision making based on the use of distance measures and induced aggregation operators is presented. The induced ordered weighted averaging distance (IOWAD) operator is introduced. It is a new aggregation operator that extends the OWA operator through the use of distances and a reordering of the arguments based on order-inducing variables. The main advantage of the IOWAD operator is the possibility of using a parameterized family of aggregation operators between the maximum and the minimum individual distance. Some of its main properties and particular cases are studied. A numerical example is developed for a decision-making problem on investment selection. The main advantage of this model in decision making is that it provides a more complete view of the process, so that the decision maker is able to select the alternative closest to his or her interests.
Abstract:
This comment corrects the errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation, as the previously presented results do not maximize the log-likelihood function. At the global maximum, more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take both positive and negative values, which implies that the participation probability estimates may lie outside the interval [0,1]. We solve the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
Abstract:
"See the abstract at the beginning of the document in the attached file."
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
Abstract:
This paper analyzes, both theoretically and empirically, the relationship between distance and the frequency of scheduled transportation services. We study the interaction between a monopoly firm providing a high-speed scheduled service and personal transportation (i.e., the car). Most interestingly, the carrier chooses to increase the frequency of service on longer routes when competing with personal transportation because, by providing a higher frequency (at extra cost), it can also charge higher fares that boost its profits. However, when driving is not a relevant option, the frequency of service decreases for longer flights, consistent with prior studies. An empirical application of our analysis to the European airline industry confirms the predictions of our theoretical model.
Abstract:
This paper presents a theoretical and empirical analysis of the relationship between the frequency of scheduled transportation services and their substitutability with personal transportation (using distance as a proxy). We study the interaction between a monopoly firm providing a high-speed scheduled service and private transportation (i.e., the car). Interestingly, the carrier chooses to increase the frequency of service on longer routes when competing with personal transportation because, by providing a higher frequency (at extra cost), it can also charge higher fares, which can boost its profits. However, in line with the results of earlier studies, frequency decreases for longer flights when driving is not a viable option. An empirical application of our analysis to the European airline industry confirms the predictions of our theoretical model.
Keywords: short-haul routes; long-haul routes; flight frequency; distance
JEL Classification Numbers: L13; L2; L93
Abstract:
Lean meat percentage (LMP) is an important carcass quality parameter. The aim of this work is to obtain a calibration equation for Computed Tomography (CT) scans with the Partial Least Squares (PLS) regression technique in order to predict the LMP of the carcass and of the different cuts, and to study and compare two methodologies for selecting the variables (Variable Importance for Projection (VIP) and stepwise) to be included in the prediction equation. The cross-validated error of prediction (RMSEPCV) of the LMP obtained with PLS was 0.82% with VIP-based selection and 0.83% with stepwise selection. Predicting the LMP by scanning only the ham gave an RMSEPCV of 0.97%; if both the ham and the loin were scanned, the RMSEPCV was 0.90%. The results indicate that for CT data both VIP and stepwise selection are good methods. Moreover, scanning only the ham allowed us to obtain a good prediction of the LMP of the whole carcass.
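As an illustration of the VIP-based selection idea mentioned in this abstract, the following is a minimal numpy sketch of PLS1 with VIP scores (assumed names and synthetic data; not the authors' calibration code, which was fitted to CT measurements):

```python
import numpy as np

def pls_vip(X, y, ncomp):
    """PLS1 via NIPALS, returning VIP scores. Variables with VIP > 1 are the
    usual candidates to keep in the prediction equation."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    n, p = X.shape
    W = np.zeros((p, ncomp))      # normalized weight vectors, one per component
    ssy = np.zeros(ncomp)         # variation of y explained by each component
    Xa, ya = X.copy(), y.copy()
    for a in range(ncomp):
        w = Xa.T @ ya
        w /= np.linalg.norm(w)
        t = Xa @ w
        q = ya @ t / (t @ t)
        Xa -= np.outer(t, Xa.T @ t / (t @ t))   # deflate X
        ya = ya - q * t                          # deflate y
        W[:, a] = w
        ssy[a] = q ** 2 * (t @ t)
    # VIP_j = sqrt(p * sum_a ssy_a * w_ja^2 / sum_a ssy_a), using ||w_a|| = 1
    return np.sqrt(p * (W ** 2 @ ssy) / ssy.sum())
```

Stepwise selection would instead add or drop predictors one at a time based on a fit criterion; both approaches reduce the variable set entering the final equation.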
Abstract:
Properties of GMM estimators for panel data, which have become very popular in the empirical economic growth literature, are not well known when the number of individuals is small. This paper analyses through Monte Carlo simulations the properties of various GMM and other estimators when the number of individuals is the one typically available in country growth studies. It is found that, provided that some persistency is present in the series, the system GMM estimator has a lower bias and higher efficiency than all the other estimators analysed, including the standard first-differences GMM estimator.
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent, and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
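The kernel-smoothing step in this abstract can be pictured as a Nadaraya-Watson estimate of a conditional moment computed from one long simulated path. The sketch below is a toy illustration with assumed names (Gaussian kernel, scalar conditioning variable), not the estimator of the paper:

```python
import numpy as np

def kernel_conditional_moment(x_sim, h_sim, x_eval, bandwidth):
    """Nadaraya-Watson estimate of E[h | X = x] at each point of x_eval,
    using a Gaussian kernel over the simulated pairs (x_sim, h_sim)."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_sim[None, :]) / bandwidth) ** 2)
    return (w * h_sim[None, :]).sum(axis=1) / w.sum(axis=1)
```

For an AR(1) simulation y_t = ρ y_{t-1} + ε_t, the estimated conditional mean at x should be close to ρx; such smoothed moments can then be matched to sample moments by standard GMM machinery even when the model cannot be simulated conditional on the observed data.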
Abstract:
The theory of General Relativity predicts that a massive object undergoing acceleration under certain conditions must emit gravitational waves. These waves are highly energetic, yet they interact with matter very weakly and their sources are very distant, so their detection is an extraordinarily difficult task. Consequently, detecting these waves is considered much more feasible with instruments located in space. The LISA (Laser Interferometer Space Antenna) mission was conceived with this objective: a joint NASA-ESA mission with launch planned for 2020-2025. To reduce the risks involved in the first use of untested technology, given the high economic cost of the LISA mission, a precursor mission will carry very advanced instruments: the LTP (LISA Technology Package), developed by the European Union, which will test LISA's technology, and the drag-free flying system, which will test a set of thrusters used to control the attitude and position of the satellite with nanometre precision. In particular, the LTP consists of two test masses separated by 35 centimetres and a laser interferometer that measures the variation of the relative distance between them. In this way, the LTP will measure the performance of the equipment and the possible disturbances that affect the measurement. The noise sources include, among others, the solar wind and solar radiation pressure, electrostatic charges, thermal gradients, voltage fluctuations, and internal forces. One of these possible noise sources is the object of study of this doctoral thesis project: the presence inside the LTP of magnetic fields, which exert a force on the test masses; their estimation and their control, taking into account the magnetic characteristics of the experiment and the dynamics of the satellite.