25 results for mean and variance ratio
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure of the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a Hermite-polynomials-based basis.
To obtain the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly by using discretized scalar products. We therefore propose a weighted linear regression approach, where all k-order polynomials are used as predictor variables and weights are proportional to the reference density. Finally, for the case of 2-order Hermite polynomials (normal reference) and 1-order Laguerre polynomials (exponential reference), one can also derive the coordinates from their relationships to the classical mean and variance.
Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison among different rocks of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, such as their composition.
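As a concrete illustration of the regression approach described above, the following fragment is a minimal sketch of our own (not the authors' code; the function names, polynomial order K and evaluation grid are assumptions). It fits the log-ratio of a density against a standard normal reference onto probabilists' Hermite polynomials by weighted least squares, with weights proportional to the reference density; for a normal target only the first two coordinates are non-negligible, matching the stated relationship with the classical mean and variance.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def hermite_coordinates(f, K=4, grid=np.linspace(-5, 5, 2001)):
    """Weighted LS fit of log(f / p0) on Hermite polynomials He_1..He_K."""
    p0 = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)   # N(0,1) reference density
    y = np.log(f(grid) / p0)                          # log-ratio response
    X = hermevander(grid, K)                          # columns He_0..He_K
    w = np.sqrt(p0)                                   # weights proportional to p0
    coef, *_ = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)
    return coef[1:]                                   # He_0 only normalizes f

# Example: a N(mu, sigma^2) density; the He_1 and He_2 coordinates recover
# mu/sigma^2 and (1 - 1/sigma^2)/2, and higher orders vanish.
mu, sigma = 0.5, 1.3
f = lambda x: np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
print(hermite_coordinates(f))
```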
Abstract:
Structural equation models are widely used in economic, social and behavioral studies to analyze linear interrelationships among variables, some of which may be unobservable or subject to measurement error. Alternative estimation methods that exploit different distributional assumptions are now available. The present paper deals with issues of asymptotic statistical inference, such as the evaluation of standard errors of estimates and chi-square goodness-of-fit statistics, in the general context of mean and covariance structures. The emphasis is on drawing correct statistical inferences regardless of the distribution of the data and the method of estimation employed. A (distribution-free) consistent estimate of $\Gamma$, the matrix of asymptotic variances of the vector of sample second-order moments, will be used to compute robust standard errors and a robust chi-square goodness-of-fit statistic. Simple modifications of the usual estimate of $\Gamma$ will also permit correct inferences in the case of multi-stage complex samples. We will also discuss the conditions under which, regardless of the distribution of the data, one can rely on the usual (non-robust) inferential statistics. Finally, a multivariate regression model with errors-in-variables will be used to illustrate, by means of simulated data, various theoretical aspects of the paper.
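The distribution-free estimate of $\Gamma$ referred to above is commonly computed as the empirical covariance of the vectorized second-order cross-products. A minimal sketch (our illustration, not the paper's code):

```python
import numpy as np

def vech(A):
    """Stack the lower triangle of a symmetric matrix into a vector."""
    r, c = np.tril_indices(A.shape[0])
    return A[r, c]

def gamma_adf(Z):
    """Distribution-free estimate of Gamma from an n x p data matrix:
    empirical covariance of d_i = vech((z_i - z_bar)(z_i - z_bar)')."""
    Zc = Z - Z.mean(axis=0)
    D = np.array([vech(np.outer(z, z)) for z in Zc])  # one row per observation
    return np.cov(D, rowvar=False)

rng = np.random.default_rng(0)
Z = rng.standard_t(df=5, size=(500, 3))   # heavy-tailed, non-normal data
print(gamma_adf(Z).shape)                 # p(p+1)/2 x p(p+1)/2, here 6 x 6
```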
Abstract:
This paper surveys asset allocation methods that extend the traditional approach. An important feature of the traditional approach is that it measures the risk-return tradeoff in terms of the mean and variance of final wealth. However, there are also other important features that are not always made explicit in terms of the investor's wealth, information, and horizon: the investor makes a single portfolio choice based only on the mean and variance of her final financial wealth, and she knows the relevant parameters in that computation. First, the paper describes traditional portfolio choice based on four basic assumptions, while the remaining sections extend those assumptions. Each section describes the corresponding equilibrium implications in terms of portfolio advice and asset pricing.
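The mean-variance benchmark the survey starts from has a closed form worth keeping in mind. The sketch below (with illustrative numbers, not taken from the paper) computes the optimal risky weights for an investor maximizing E[w] - (gamma/2) Var[w] who knows mu and Sigma:

```python
import numpy as np

def mean_variance_weights(mu, Sigma, gamma):
    """Optimal weights w* = Sigma^{-1} mu / gamma for mean-variance utility."""
    return np.linalg.solve(Sigma, mu) / gamma

mu = np.array([0.05, 0.08])                     # expected excess returns (assumed)
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])                # return covariance (assumed)
print(mean_variance_weights(mu, Sigma, gamma=3.0))
```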
Abstract:
This work analyzes the effect of data selection on heritability estimates. The heritability of litter size was estimated in a pig population in which the records of the oldest sows were a selected sample. Estimates were obtained using different data sets derived from all the available information. These data sets were compared by evaluating their predictive ability. Heritability estimates obtained using all the available data turned out to be underestimates. A maternal trait was also simulated, and a selected data set was generated by removing the records of females with unknown parents. Several models, commonly employed when there is no record selection, were considered to estimate heritability. The results showed that none of these models provided unbiased estimates. Only the models that account for the effect of selection on the residual mean and on the genetic mean and variance provided estimates with little bias. However, applying them requires knowledge of the selection that was carried out. The problem of data selection is difficult to address when the selection process applied to a population is unknown.
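The core phenomenon, selection on records biasing heritability estimates, can be reproduced in a few lines. The Monte Carlo sketch below is a toy stand-in (offspring-midparent regression with truncation on the offspring records), not the paper's pig data or the animal models it evaluates:

```python
import numpy as np

rng = np.random.default_rng(1)
h2, n = 0.4, 20000                                # true heritability; phenotypic var 1
g_sire, g_dam = rng.normal(0, np.sqrt(h2), (2, n))
p_sire = g_sire + rng.normal(0, np.sqrt(1 - h2), n)
p_dam = g_dam + rng.normal(0, np.sqrt(1 - h2), n)
g_off = 0.5 * (g_sire + g_dam) + rng.normal(0, np.sqrt(h2 / 2), n)  # Mendelian sampling
p_off = g_off + rng.normal(0, np.sqrt(1 - h2), n)
midparent = 0.5 * (p_sire + p_dam)

def h2_estimate(x, y):
    """Offspring-midparent regression slope estimates h2."""
    return np.polyfit(x, y, 1)[0]

keep = p_off > 0                                  # only good performers recorded
print("all records:     ", h2_estimate(midparent, p_off))
print("selected records:", h2_estimate(midparent[keep], p_off[keep]))
```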
Abstract:
Among the underlying assumptions of the Black-Scholes option pricing model, those of a fixed volatility of the underlying asset and of a constant short-term riskless interest rate cause the largest empirical biases. Only recently has attention been paid to the simultaneous effects of the stochastic nature of both variables on the pricing of options. This paper has tried to estimate the effects of a stochastic volatility and a stochastic interest rate in the Spanish option market. A discrete approach was used. Symmetric and asymmetric GARCH models were tried, and the presence of in-the-mean and seasonality effects was allowed. The stochastic processes of the MIBOR90, a Spanish short-term interest rate, from March 19, 1990 to May 31, 1994, and of the volatility of the returns of the most important Spanish stock index (IBEX-35), from October 1, 1987 to January 20, 1994, were estimated. These estimators were used in pricing call options on the stock index from November 30, 1993 to May 30, 1994. Hull-White and Amin-Ng pricing formulas were used. These prices were compared with actual prices and with those derived from the Black-Scholes formula, trying to detect the biases reported previously in the literature. Whereas the conditional variance of the MIBOR90 interest rate seemed to be free of ARCH effects, an asymmetric GARCH with in-the-mean and seasonality effects and some evidence of persistence in variance (IEGARCH(1,2)-M-S) was found to be the model that best represents the behavior of the stochastic volatility of the IBEX-35 stock returns. All the biases reported previously in the literature were found. All the formulas overpriced the options in the near-the-money case and underpriced them otherwise. Furthermore, in most option trading, Black-Scholes overpriced the options and, because of the time-to-maturity effect, implied volatility computed from the Black-Scholes formula underestimated the actual volatility.
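For reference, the Black-Scholes benchmark against which the stochastic-volatility (Hull-White) and stochastic-rate (Amin-Ng) prices are compared is the standard formula below; the inputs are illustrative, not IBEX-35 data.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """European call price under constant volatility and riskless rate."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(bs_call(S=100.0, K=105.0, T=0.5, r=0.08, sigma=0.25))
```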
Abstract:
Principal curves were defined by Hastie and Stuetzle (JASA, 1989) as smooth curves passing through the middle of a multidimensional data set. They are nonlinear generalizations of the first principal component, a characterization of which is the basis for the definition of principal curves.
In this paper we propose an alternative approach based on a different property of principal components. Consider a point in the space where a multivariate normal distribution is defined and, for each hyperplane containing that point, compute the total variance of the normal distribution conditioned to belong to that hyperplane. Choose the hyperplane minimizing this conditional total variance and look for the corresponding conditional mean. The first principal component of the original distribution passes through this conditional mean and is orthogonal to that hyperplane. This property is easily generalized to data sets with nonlinear structure. Repeating the search from different starting points, many points analogous to conditional means are found. We call them principal oriented points. The one-dimensional curve that runs through the set of these special points is called the principal curve of oriented points. Successive principal curves are defined recursively from a generalization of the total variance.
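An empirical analogue of the construction is easy to prototype. The sketch below is our own assumption-laden reading, not the authors' algorithm: it scans hyperplane normals at a point, keeps the data in a thin slab around each hyperplane as a proxy for conditioning, and returns the mean of the minimum-total-variance slab as a candidate principal oriented point.

```python
import numpy as np

def principal_oriented_point(X, x0, eps=0.3, n_dirs=180):
    """2-D prototype: slab mean with minimal conditional total variance."""
    best = None
    for theta in np.linspace(0, np.pi, n_dirs, endpoint=False):
        v = np.array([np.cos(theta), np.sin(theta)])  # hyperplane normal
        slab = X[np.abs((X - x0) @ v) < eps]          # points near the hyperplane
        if len(slab) < 10:
            continue
        tv = np.trace(np.cov(slab, rowvar=False))     # conditional total variance
        if best is None or tv < best[0]:
            best = (tv, slab.mean(axis=0))
    return best[1]

rng = np.random.default_rng(2)
t = rng.uniform(0, np.pi, 1000)                       # noisy half-circle data
X = np.c_[np.cos(t), np.sin(t)] + rng.normal(0, 0.05, (1000, 2))
print(principal_oriented_point(X, x0=np.array([0.0, 1.0])))
```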
Abstract:
In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision of autocorrelation estimation is studied. The performance of the ten lag-one autocorrelation estimators is compared in terms of mean squared error (combining bias and variance) using data series generated by Monte Carlo simulation. The results show that there is no single optimal estimator for all conditions, suggesting that the estimator ought to be chosen according to sample size and to the information available on the possible direction of the serial dependence. Additionally, the probability of labelling an actually existing autocorrelation as statistically significant is explored using Monte Carlo sampling. The power estimates obtained are quite similar among the tests associated with the different estimators. These estimates evidence the small probability of detecting autocorrelation in series with fewer than 20 measurement times.
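The MSE comparison is straightforward to replicate in miniature. The sketch below covers one conventional estimator only (the paper's ten estimators are not reproduced) and estimates bias squared plus variance of the lag-one autocorrelation for short AR(1) series:

```python
import numpy as np

def r1(x):
    """Conventional lag-one autocorrelation estimator."""
    d = x - x.mean()
    return (d[:-1] * d[1:]).sum() / (d * d).sum()

def mc_mse(phi, n, reps=5000, seed=3):
    rng = np.random.default_rng(seed)
    est = np.empty(reps)
    for i in range(reps):
        x = np.empty(n)
        x[0] = rng.normal(0, 1 / np.sqrt(1 - phi**2))  # stationary start
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal()
        est[i] = r1(x)
    bias, var = est.mean() - phi, est.var()
    return bias**2 + var

for n in (10, 20, 50):                                 # short series, as in the study
    print(n, mc_mse(phi=0.3, n=n))
```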
Abstract:
We study the incentives to acquire skill in a model where heterogeneous firms and workers interact in a labor market characterized by matching frictions and costly screening. When effort in acquiring skill raises both the mean and the variance of the resulting ability distribution, multiple equilibria may arise. In the high-effort equilibrium, heterogeneity in ability is sufficiently large to induce firms to select the best workers, thereby confirming the belief that effort is important for finding good jobs. In the low-effort equilibrium, ability is not sufficiently dispersed to justify screening, thereby confirming the belief that effort is not so important. The model has implications for wage inequality, the distribution of firm characteristics, sorting patterns between firms and workers, and unemployment rates that can help explain observed cross-country variation in socio-economic and labor market outcomes.
Abstract:
Image registration has been proposed as an automatic method for recovering cardiac displacement fields from Tagged Magnetic Resonance Imaging (tMRI) sequences. Initially performed as a set of pairwise registrations, these techniques have evolved to the use of 3D+t deformation models, requiring metrics of joint image alignment (JA). However, only linear combinations of cost functions defined with respect to the first frame have been used. In this paper, we have applied k-Nearest Neighbors Graphs (kNNG) estimators of the α-entropy (Hα) to measure the joint similarity between frames, and to combine the information provided by different cardiac views in a unified metric. Experiments performed on six subjects showed a significantly higher accuracy (p < 0.05) with respect to a standard pairwise alignment (PA) approach in terms of mean positional error and variance with respect to manually placed landmarks. The developed method was used to study strains in patients with myocardial infarction, showing a consistency between strain, infarction location, and coronary occlusion. This paper also presents an interesting clinical application of graph-based metric estimators, showing their value for solving practical problems found in medical imaging.
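A kNN-graph entropy estimator of the kind referred to can be sketched in a few lines. The version below is our illustration: the normalizing constant, which depends only on n, k, d and α, is omitted, which is harmless when the estimate is used as a relative alignment metric; the edge exponent is γ = d(1 − α).

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_length(X, k=4, gamma=1.0):
    """Sum of k-nearest-neighbour edge lengths raised to the power gamma."""
    dist, _ = cKDTree(X).query(X, k=k + 1)       # column 0 is the point itself
    return (dist[:, 1:] ** gamma).sum()

def renyi_entropy_estimate(X, alpha=0.5, k=4):
    n, d = X.shape
    gamma = d * (1 - alpha)
    # up to an additive constant depending only on (n, k, d, alpha)
    return np.log(knn_length(X, k, gamma) / n**alpha) / (1 - alpha)

rng = np.random.default_rng(4)
tight = rng.normal(0, 0.5, (2000, 2))            # concentrated joint samples
loose = rng.normal(0, 2.0, (2000, 2))            # dispersed joint samples
print(renyi_entropy_estimate(tight), renyi_entropy_estimate(loose))
```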
Abstract:
The restricted maximum likelihood is preferred by many to the full maximum likelihood for estimation with variance component and other random coefficient models, because the variance estimator is unbiased. It is shown that this unbiasedness is accompanied in some balanced designs by an inflation of the mean squared error. An estimator of the cluster-level variance that is uniformly more efficient than the full maximum likelihood is derived. Estimators of the variance ratio are also studied.
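The unbiasedness/MSE tradeoff is visible even in the simplest balanced case. The toy sketch below (our illustration, not the paper's designs) compares the REML-type divisor n − 1 with the ML divisor n for estimating a normal variance; the divisor n + 1 has uniformly smaller MSE still:

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps, sigma2 = 10, 200000, 1.0
X = rng.normal(0, np.sqrt(sigma2), (reps, n))
ss = ((X - X.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)  # sums of squares

for name, div in [("REML-type (n-1)", n - 1), ("ML (n)", n), ("n+1", n + 1)]:
    est = ss / div
    print(name, "bias:", round(est.mean() - sigma2, 4),
          "MSE:", round(((est - sigma2) ** 2).mean(), 4))
```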
Abstract:
We present a detailed evaluation of the seasonal performance of the Community Multiscale Air Quality (CMAQ) modelling system and the PSU/NCAR meteorological model coupled to a new Numerical Emission Model for Air Quality (MNEQA). The combined system simulates air quality at a fine resolution (3 km as horizontal resolution and 1 h as temporal resolution) in north-eastern Spain, where problems of ozone pollution are frequent. An extensive database compiled over two periods, from May to September 2009 and 2010, is used to evaluate meteorological simulations and chemical outputs. Our results indicate that the model accurately reproduces hourly ozone surface concentrations as well as the 1-h and 8-h maxima measured at the air quality stations, as statistical values fall within the EPA and EU recommendations. However, to further improve forecast accuracy, three simple bias-adjustment techniques based on 10 days of available comparisons, namely mean subtraction (MS), ratio adjustment (RA), and hybrid forecast (HF), are applied. The results show that the MS technique performed better than RA or HF, although all the bias-adjustment techniques significantly reduce the systematic errors in ozone forecasts.
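The two simplest adjustments have standard forms in the air quality forecasting literature; the sketch below states them under that assumption (the paper's exact formulations, in particular for the hybrid forecast, may differ), using a hypothetical 10-day window of paired observations o and raw forecasts f:

```python
import numpy as np

def mean_subtraction(f_new, o, f):
    return f_new - (f - o).mean()            # remove the mean bias (MS)

def ratio_adjustment(f_new, o, f):
    return f_new * o.mean() / f.mean()       # rescale by the obs/forecast ratio (RA)

def hybrid_forecast(f_new, o, f):
    # stand-in for HF: correct with the most recent day's bias only
    # (the paper's exact definition may differ)
    return f_new - (f[-1] - o[-1])

o = np.array([82, 95, 110, 88, 90, 101, 97, 85, 92, 99.0])      # observed ozone
f = np.array([90, 104, 118, 95, 99, 112, 104, 93, 101, 108.0])  # raw forecasts
print(mean_subtraction(106.0, o, f),
      ratio_adjustment(106.0, o, f),
      hybrid_forecast(106.0, o, f))
```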
Abstract:
This project investigated and developed a tool for detecting mechanical faults in controlled environments. The program analyzes the spectrum of acoustic signals captured through the PC's microphone and statistically studies their properties (mean and standard deviation) in order to discriminate newly captured signals. It was tested on the real case of a DC motor, achieving success rates above 90% in detecting its state.
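The detection scheme described can be prototyped directly. The sketch below is our illustration, with simulated audio standing in for the PC microphone capture: summarize each recording by the mean of its magnitude spectrum, build a baseline from healthy captures, and flag signals deviating by more than three baseline standard deviations.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 8000, endpoint=False)

def record(faulty=False):
    """Simulated capture of a DC motor (stand-in for real microphone audio)."""
    x = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.normal(size=t.size)
    if faulty:
        x += 0.8 * np.sin(2 * np.pi * 37 * t)  # extra mechanical vibration line
    return x

def spectral_mean(x):
    return np.abs(np.fft.rfft(x)).mean()

baseline = np.array([spectral_mean(record()) for _ in range(20)])
mu, sd = baseline.mean(), baseline.std()

def is_faulty(x, tol=3.0):
    return abs(spectral_mean(x) - mu) > tol * sd

print(is_faulty(record()), is_faulty(record(faulty=True)))  # expect False, True
```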
Abstract:
The research problem is posed within the context of Kant's transcendental philosophy, in relation to how it is at all possible for us to represent the domain of morality. Our natural or pre-theoretical understanding of how language works seems to lead us to understand the meaning of our words in terms of the relation established between the linguistic sign and the object: our linguistic terms stand in for the extralinguistic object to which they refer and which constitutes their meaning. In our view, the Kantian claim that all our knowledge begins with experience, that is, with what comes from the senses, seems to point to this fundamental intuition. The question that then arises is: according to this model of signification, what is the meaning of our moral terms? If, with Kant, we accept that the concept of moral duty demands the unconditioned performance (or omission) of an action and that, precisely because of the demands of universality and necessity inherent to it, this concept cannot be derived from experience, one may ask what the meaning of the concept of duty in the moral sense (and of moral terms in general) is, and in what way we are capable of representing it to ourselves. My research has sought to clarify precisely in what sense we should understand the Kantian claim that, in reflecting on the moral correctness of our actions, we make analogical use of the concept of nature in order to represent the demands of universality and necessity proper to the concept of moral duty, as well as to analyze the plausibility of the Kantian proposal itself.
Abstract:
The enhanced flow in carbon nanotubes is explained using a mathematical model that includes a depletion layer with reduced viscosity near the wall. In the limit of large tubes the model predicts no noticeable enhancement. For smaller tubes the model predicts enhancement that increases as the radius decreases. An analogy between the reduced viscosity and slip-length models shows that the term slip-length is misleading and that on surfaces which are smooth at the nanoscale it may be thought of as a length-scale associated with the size of the depletion region and viscosity ratio. The model therefore provides a physical interpretation of the classical Navier slip condition and explains why 'slip-lengths' may be greater than the tube radius.
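The analogy admits a short back-of-the-envelope reconstruction (ours, under a thin-layer assumption δ ≪ R; not the paper's full model):

```latex
In pressure-driven pipe flow the wall shear stress $\tau_w$ is continuous
across a thin depletion layer of thickness $\delta$ and viscosity $\mu_d$,
so the extra velocity accumulated over the layer is
\[
  u_s \;=\; \tau_w\,\delta\!\left(\frac{1}{\mu_d}-\frac{1}{\mu}\right)
      \;=\; \delta\!\left(\frac{\mu}{\mu_d}-1\right)\dot\gamma_w,
  \qquad \dot\gamma_w = \frac{\tau_w}{\mu}.
\]
Read as a Navier slip condition $u_s = b\,\dot\gamma_w$, this identifies the
effective slip length
\[
  b \;=\; \delta\!\left(\frac{\mu}{\mu_d}-1\right),
\]
which exceeds the tube radius $R$ as soon as $\mu/\mu_d > 1 + R/\delta$.
```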
Abstract:
Arts Fusió is a pedagogical project on artistic transdisciplinarity, consisting of research into this concept and the formative values it brings to creative learning processes. On that basis, we developed a set of methodological principles to guide a didactics oriented toward the fusion of theatre, dance, music and the visual arts. To demonstrate the benefits of our proposal, we carried out didactic applications in higher arts education and in compulsory secondary education. Methodologically, we situate ourselves within the paradigm of complexity and base the interventions on an exploratory qualitative perspective, specifically within the line of evaluative research. As results of the work, we have seen that artistic transdisciplinarity can act both as an educational means and as an educational end, depending on the context.