124 results for imputation hedonic method
Abstract:
There is hardly a case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot co-exist with nepheline. Another common essential zero is a North azimuth; however, we can always replace that zero with the value 360°. These are known as "essential zeros", but what can we do with "rounded zeros", which result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is a solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same occurs if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between copper and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros". So we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable.
Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
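To make the regression step concrete, here is a minimal Python sketch of the idea described above: fit a regression between copper and the lower quartile of the measured molybdenum values, then estimate each censored ("rounded zero") molybdenum value from its copper value. The column names (`Cu`, `Mo`), the log-log form and the coding of censored values as 0 are illustrative assumptions, not specifications from the abstract.

```python
import numpy as np
import pandas as pd

def impute_rounded_zeros(df, target="Mo", predictor="Cu"):
    """Regression-based imputation of below-detection ("rounded zero") values.

    Fits a regression between the predictor and the lower quartile of the
    *measured* target values, then estimates censored target values from
    their predictor values. Column names and the log-log form are
    illustrative assumptions.
    """
    measured = df[df[target] > 0]
    censored = df[df[target] <= 0]   # rounded zeros assumed coded as 0

    # Restrict the fit to the lower quartile of the measured target values,
    # since the censored samples are expected to fall in that range.
    q1 = measured[target].quantile(0.25)
    fit_set = measured[measured[target] <= q1]

    # Ordinary least squares on log-transformed values (lognormal assumption).
    slope, intercept = np.polyfit(np.log(fit_set[predictor]),
                                  np.log(fit_set[target]), deg=1)

    out = df.copy()
    out.loc[censored.index, target] = np.exp(
        intercept + slope * np.log(censored[predictor]))
    return out
```

Unlike replacing every censored value by a constant (for instance half the detection limit), the imputed molybdenum varies with the copper value of the same sample, which is the property the abstract emphasizes.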
Abstract:
As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent (essential zeros) or because it is below the detection limit (rounded zeros). Because the second kind of zero is usually understood as "a trace too small to measure", it seems reasonable to replace them by a suitable small value, and this has been the traditional approach. As stated, e.g., by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts, and thus the metric properties, should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003), where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is "natural" in the sense that it recovers the "true" composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. As a generalization of the multiplicative replacement, a substitution method for missing values in compositional data sets is introduced in the same paper.
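A minimal sketch of the multiplicative replacement rule summarized above, for a single closed composition: zero parts are set to small replacement values and the non-zero parts are rescaled multiplicatively so that closure is preserved. The function name, the NumPy formulation and the example values are illustrative, not taken from the cited papers.

```python
import numpy as np

def multiplicative_replacement(x, delta, c=1.0):
    """Multiplicative zero replacement for one closed composition.

    x     : 1-D array of parts summing to c, with rounded zeros coded as 0
    delta : 1-D array of small replacement values (used where x == 0)
    c     : closure constant (1.0 for proportions, 100.0 for percentages)
    """
    x = np.asarray(x, dtype=float)
    delta = np.asarray(delta, dtype=float)
    zeros = (x == 0)

    # Zero parts get their replacement value; non-zero parts are rescaled
    # multiplicatively so the composition still sums to c.
    return np.where(zeros, delta, x * (1.0 - delta[zeros].sum() / c))

# Example: a 4-part composition in percent with one value below detection.
comp = np.array([62.0, 25.0, 13.0, 0.0])
print(multiplicative_replacement(comp, delta=np.full(4, 0.05), c=100.0))
# Ratios among the non-zero parts are unchanged, so the covariance structure
# of zero-free subcompositions is preserved.
```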
Abstract:
Current methods for constructing house price indices are based on comparisons of sale prices of residential properties sold two or more times and on regression of the sale prices on the attributes of the properties and of their locations. The two methods have well-recognised deficiencies: selection bias and model assumptions, respectively. We introduce a new method based on propensity score matching. The average house prices for two periods are compared by selecting pairs of properties, one sold in each period, that are as similar on a set of available attributes (covariates) as is feasible to arrange. The uncertainty associated with such matching is addressed by multiple imputation, framing the problem as one involving missing values. The method is applied to a register of transactions of residential properties in New Zealand and compared with the established alternatives.
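As a rough illustration of the matching idea (not the authors' implementation, and omitting the multiple-imputation treatment of matching uncertainty described in the abstract), the sketch below estimates a propensity score for belonging to the later period, matches each later-period sale to the nearest earlier-period sale on that score, and compares average prices. Column names and the scikit-learn formulation are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def matched_price_ratio(df, covariates, period_col="period", price_col="price"):
    """Illustrative propensity-score matching of house sales from two periods.

    df[period_col] is 0/1 (earlier/later period); covariates are property
    attributes. Each later-period sale is matched to the earlier-period sale
    with the closest propensity score, and mean prices are compared.
    """
    X = df[covariates].to_numpy()
    t = df[period_col].to_numpy()

    # Propensity score: probability of belonging to the later period.
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

    early = ps[t == 0].reshape(-1, 1)
    late = ps[t == 1].reshape(-1, 1)
    idx = NearestNeighbors(n_neighbors=1).fit(early) \
              .kneighbors(late, return_distance=False).ravel()

    p_early = df.loc[t == 0, price_col].to_numpy()[idx]
    p_late = df.loc[t == 1, price_col].to_numpy()
    return p_late.mean() / p_early.mean()   # crude matched index, period 1 vs 0
```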
Abstract:
Research project carried out during a stay at the Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC), Argentina, between February and April 2007. The numerical simulation of mixing problems with the Particle Finite Element Method (PFEM) is the setting of a future doctoral thesis. PFEM is a method developed jointly by CIMEC and the Centre Internacional de Mètodes Numèrics en l'Enginyeria (CIMNE-UPC), based on solving the Navier-Stokes equations in Lagrangian formulation. The mesher was implemented and developed by Dr. Nestor Calvo, a researcher at CIMEC. The development of the computation module is the thesis work of the grant holder, and the correct interaction between both parts is essential to obtain valid results. This report explains the main aspects of the mesher that were modified (geometric refinement criteria) and the changes introduced in the computation module (PETSc library, predictor-corrector algorithm) during the stay at CIMEC. Finally, results are presented for a problem of two immiscible fluids with heat transfer.
Abstract:
We propose a mixed finite element method for a class of nonlinear diffusion equations, which is based on their interpretation as gradient flows in optimal transportation metrics. We introduce an appropriate linearization of the optimal transport problem, which leads to a mixed symmetric formulation. This formulation preserves the maximum principle in case of the semi-discrete scheme as well as the fully discrete scheme for a certain class of problems. In addition, solutions of the mixed formulation maintain exponential convergence in the relative entropy towards the steady state in case of a nonlinear Fokker-Planck equation with uniformly convex potential. We demonstrate the behavior of the proposed scheme with 2D simulations of the porous medium equations and blow-up questions in the Patlak-Keller-Segel model.
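As background for the gradient-flow interpretation mentioned above (this is the standard minimizing-movement, or JKO, formulation, not the paper's specific mixed discretization), the semi-discrete-in-time scheme reads

\[
\rho^{n+1} \in \operatorname*{arg\,min}_{\rho}
\Big\{ \tfrac{1}{2\tau}\, W_2^2\!\left(\rho,\rho^{n}\right) + E(\rho) \Big\},
\qquad
E(\rho) = \int_{\Omega} \big( \Phi(\rho) + \rho\,V \big)\, dx ,
\]

where \(W_2\) is the quadratic Wasserstein distance and \(\tau\) the time step; \(\Phi(\rho)=\rho^m/(m-1)\) with \(V\equiv 0\) corresponds to the porous medium equation, while \(\Phi(\rho)=\rho\log\rho\) with a uniformly convex potential \(V\) gives the Fokker-Planck case referred to in the abstract.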
Abstract:
The studies of Giacomo Becattini concerning the notion of the "Marshallian industrial district" have led to a revolution in the field of economic development around the world. The paper offers an interpretation of the methodology adopted by Becattini. The roots are clearly Marshallian: Becattini proposes a return to economics as a complex social science that operates in historical time. We adopt a Schumpeterian approach to method in economic analysis in order to highlight the similarities between Marshall's and Becattini's approaches. Finally, the paper uses the distinction between logical time, real time and historical time, which enables us to study the "localized" economic process in a Becattinian way.
Abstract:
In this paper we present a new, accurate form of the heat balance integral method, termed the Combined Integral Method (or CIM). The application of this method to Stefan problems is discussed. For simple test cases the results are compared with exact and asymptotic limits. In particular, it is shown that the CIM is more accurate than the second order, large Stefan number, perturbation solution for a wide range of Stefan numbers. In the initial examples it is shown that the CIM reduces the standard problem, consisting of a PDE defined over a domain specified by an ODE, to the solution of one or two algebraic equations. The latter examples, where the boundary temperature varies with time, reduce to a set of three first order ODEs.
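For orientation, the "standard problem" referred to above (a PDE on a domain whose boundary is fixed by an ODE) is, in its simplest one-phase, nondimensional form,

\[
\frac{\partial T}{\partial t} = \frac{\partial^{2} T}{\partial x^{2}}, \quad 0 < x < s(t), \qquad
T(0,t) = 1, \quad T(s(t),t) = 0, \qquad
\beta \frac{ds}{dt} = -\left.\frac{\partial T}{\partial x}\right|_{x=s(t)}, \quad s(0) = 0,
\]

with \(\beta\) the reciprocal Stefan number. Heat balance integral methods replace the PDE by its integral over \((0, s(t))\) together with an assumed (typically polynomial) temperature profile, which is what allows the reduction to one or two algebraic equations, or to a small system of ODEs when the boundary temperature varies with time. This formulation is standard background, not a statement of the CIM itself.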
Abstract:
This paper computes and compares alternative quality-adjusted price indexes for new cars in Spain in the period 1990-2000. The proposed hedonic approach simultaneously controls for time-invariant unobserved product effects and time-variant unobserved quality changes, which are assumed to be captured by model age effects. The results show that the non-adjusted price index largely overstates the increase in the cost of living induced by changes in car prices and that previous evidence for this market has not measured the real extent of that bias, probably due to the omission of controls for unobservables. It is also shown that omitting age effects can lead to misleading conclusions. The estimated price indexes also give some insight into what could have been the determinants of price evolution in the Spanish car market.
JEL classification numbers: C43, E31, L11, L13. Keywords: hedonic price indexes, Spanish car market, car prices, CPI, cost of living.
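As a generic illustration of a time-dummy hedonic regression with product and age fixed effects, in the spirit described above (not the paper's exact estimator; the column names, attributes and statsmodels formulation are assumptions), one could estimate:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def hedonic_price_index(df):
    """Illustrative time-dummy hedonic regression for car prices.

    Assumed columns: price, year, model, age, plus observed attributes such
    as hp and weight. Model dummies absorb time-invariant unobserved product
    effects; age dummies proxy time-variant quality changes.
    """
    fit = smf.ols(
        "np.log(price) ~ C(year) + C(model) + C(age) + np.log(hp) + np.log(weight)",
        data=df,
    ).fit()

    # Quality-adjusted index: exponentiate the year-dummy coefficients
    # (the omitted base year has index 100).
    year_coefs = {k: v for k, v in fit.params.items() if k.startswith("C(year)")}
    return {k.split("T.")[1].rstrip("]"): 100 * np.exp(v)
            for k, v in year_coefs.items()}
```

The non-adjusted index discussed in the abstract would correspond to regressing log price on year dummies alone, without the attribute, model and age controls.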