969 results for Semi-parametric models
Abstract:
We examine the impact of the seller's Property Condition Disclosure Law on residential real estate values. A disclosure law may address the information asymmetry in housing transactions, shifting risk from buyers and brokers to sellers and raising housing prices as a result. We combine propensity score techniques from the treatment effects literature with a traditional event study approach. We assemble a unique set of economic and institutional attributes for a quarterly panel of 291 US Metropolitan Statistical Areas (MSAs) and 50 US States spanning the 21 years from 1984 to 2004, and use it to exploit the MSA-level variation in house prices. The study finds that the average seller may be able to fetch a higher price (about three to four percent) for the house if she furnishes a state-mandated seller's property condition disclosure statement to the buyer. When we compare the results from the parametric and semi-parametric event analyses, we find that the semi-parametric (propensity score) analysis generates moderately larger estimated effects of the law on housing prices.
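As an illustration of the kind of propensity-score step described above, the sketch below computes an inverse-propensity-weighted difference in log house prices. It is not the paper's estimator; the DataFrame, the covariate names and the 'treated' indicator are hypothetical placeholders.

```python
# Minimal sketch: inverse-propensity-weighted estimate of a disclosure-law
# effect on log house prices. Column names are hypothetical, not the paper's.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_effect(msa: pd.DataFrame) -> float:
    """msa: one row per MSA-quarter with 'treated' (0/1 disclosure law in
    force), 'log_price', and economic covariates used to model adoption."""
    covars = ["income", "population", "unemployment"]  # hypothetical names
    X, d, y = msa[covars].values, msa["treated"].values, msa["log_price"].values

    # Propensity score: probability of the law being in force given covariates.
    p = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    p = np.clip(p, 0.01, 0.99)  # guard against extreme weights

    # Inverse-propensity-weighted difference in mean log prices.
    w1, w0 = d / p, (1 - d) / (1 - p)
    return np.average(y, weights=w1) - np.average(y, weights=w0)
```

A log-price difference of roughly 0.03 to 0.04 would correspond to the three-to-four percent price effect the abstract reports.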
Abstract:
An extension of k-ratio multiple comparison methods to rank-based analyses is described. The new method is analogous to the Duncan-Godbold approximate k-ratio procedure for unequal sample sizes or correlated means. The close parallel of the new methods to the Duncan-Godbold approach is shown by demonstrating that they are based upon different parameterizations as starting points. A semi-parametric basis for the new methods is established by starting from the Cox proportional hazards model and using Wald statistics; from there, the log-rank and Gehan-Breslow-Wilcoxon methods may be seen as score-statistic-based methods. Simulations and analysis of a published data set are used to show the performance of the new methods.
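To make the Cox-model building block concrete, here is a minimal sketch of the Wald statistic for a pairwise group contrast from a fitted proportional hazards model, using the lifelines package; the toy data and column names are hypothetical.

```python
# Sketch: Wald statistic for a treatment contrast from a Cox model, the kind
# of quantity a k-ratio rule would then threshold. Data are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time":  [5, 8, 12, 3, 9, 14, 7, 11, 2, 6],
    "event": [1, 1, 0, 1, 1, 0, 1, 0, 1, 1],
    "grpB":  [0, 0, 0, 0, 1, 1, 1, 0, 1, 1],  # indicator vs. reference group A
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
beta = cph.params_["grpB"]              # log hazard ratio, B vs. A
se = cph.standard_errors_["grpB"]
wald = (beta / se) ** 2                 # chi-square on 1 degree of freedom
print(f"log HR = {beta:.3f}, Wald chi2 = {wald:.3f}")
```

A k-ratio procedure would compare such statistics against k-ratio critical values rather than a fixed significance level.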
Abstract:
The Fractal Image Informatics toolbox (Oleschko et al., 2008a; Torres-Argüelles et al., 2010) was applied to extract, classify and model the topological structure and dynamics of surface roughness in two highly eroded catchments of Mexico. Both areas are affected by gully erosion (Sidorchuk, 2005) and characterized by avalanche-like matter transport. Five contrasting morphological patterns were distinguished across the slope of the bare eroded surface of Faeozem (Queretaro State), while only one roughness pattern (apparently independent of the slope) was documented for Andosol (Michoacan State). We called these patterns "roughness clusters" and compared them in terms of metrizability, continuity, compactness, topological connectedness (global and local) and invariance, separability, and degree of ramification (Weyl, 1937). All of the mentioned topological measurands were correlated with the variance, skewness and kurtosis of the gray-level distribution of digital images. The morphological and spatial dynamics of the roughness clusters were measured and mapped with high precision in terms of fractal descriptors. The Hurst exponent was especially suitable for distinguishing between the structure of the "turtle shell" and "ramification" patterns (sediment-producing zone A of the slope), as well as the "honeycomb" (sediment transport zone B) and the "dinosaur steps" and "corals" (sediment deposition zone C) roughness clusters. Some other structural attributes of the studied patterns were also statistically different and correlated with the variance, skewness and kurtosis of the gray-level distribution of multiscale digital images. The scale invariance of the classified roughness patterns was documented across a range of five image resolutions. We conjecture that the geometrization of erosion patterns in terms of roughness clustering might benefit most semi-quantitative models developed for erosion and sediment yield assessments (de Vente and Poesen, 2005).
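To make the Hurst-exponent step concrete, here is a minimal rescaled-range (R/S) sketch for a one-dimensional gray-level profile. It is illustrative only and is not the estimator implemented in the Fractal Image Informatics toolbox; the window sizes are arbitrary choices.

```python
# Sketch: rescaled-range (R/S) estimate of the Hurst exponent for a 1-D
# gray-level profile extracted from a roughness image.
import numpy as np

def hurst_rs(profile: np.ndarray, window_sizes=(8, 16, 32, 64, 128)) -> float:
    rs = []
    for n in window_sizes:
        m = len(profile) // n
        vals = []
        for k in range(m):
            w = profile[k * n:(k + 1) * n]
            z = np.cumsum(w - w.mean())   # cumulative deviations from the mean
            r = z.max() - z.min()         # range of the cumulative series
            s = w.std(ddof=1)             # sample standard deviation
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    # The slope of log(R/S) against log(n) estimates H.
    return np.polyfit(np.log(window_sizes), np.log(rs), 1)[0]

rng = np.random.default_rng(0)
print(hurst_rs(rng.standard_normal(1024)))  # ~0.5 for uncorrelated noise
```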
Abstract:
Wind power time series usually show complex dynamics, mainly due to non-linearities related to the wind physics and the power transformation process in wind farms. This article provides an approach to incorporating observed local variables (wind speed and direction) to model some of these effects by means of statistical models. To this end, a benchmark between two different families of varying-coefficient models (regime-switching and conditional parametric models) is carried out. The case of the offshore wind farm of Horns Rev in Denmark has been considered. The analysis is focused on one-step-ahead forecasting and a time series resolution of 10 min. It has been found that the local wind direction contributes to modelling some features of the prevailing winds, such as the impact of the wind direction on wind variability, whereas the non-linearities related to the power transformation process can be introduced by considering the local wind speed. In both cases, conditional parametric models showed a better performance than that achieved by the regime-switching strategy. The results attained reinforce the idea that each explanatory variable allows the modelling of different underlying effects in the dynamics of wind power time series.
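A conditional parametric model of the kind benchmarked here can be sketched as kernel-weighted local least squares in which autoregressive coefficients vary with wind direction. The sketch below is a minimal illustration under that interpretation; the series names, the bandwidth h and the Gaussian kernel are assumptions, not details from the article.

```python
# Sketch: one-step-ahead conditional parametric forecast where the AR(1)
# coefficients vary with wind direction, fitted by kernel-weighted least
# squares at the current direction.
import numpy as np

def cp_forecast(power, direction, dir_now, h=20.0):
    """power, direction: historical 10-min series; dir_now: latest wind
    direction (degrees). Returns a forecast of the next power value."""
    y = power[1:]                                       # one-step targets
    X = np.column_stack([np.ones(len(y)), power[:-1]])  # intercept + lag
    # Circular distance between historical directions and the current one.
    d = np.abs((direction[:-1] - dir_now + 180) % 360 - 180)
    w = np.exp(-0.5 * (d / h) ** 2)                     # Gaussian weights
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return theta[0] + theta[1] * power[-1]
```

Refitting the local coefficients at each new direction is what distinguishes this family from a regime-switching model, which instead selects among a few fixed coefficient sets.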
Abstract:
Pragmatism is the leading motivation of regularization. We can understand regularization as a modification of the maximum-likelihood estimator so that a reasonable answer can be given in an unstable or ill-posed situation. To mention some typical examples, this happens when fitting parametric or non-parametric models with more parameters than data, or when estimating large covariance matrices. Regularization is also commonly used to improve the bias-variance tradeoff of an estimation. The definition of regularization is therefore quite general and, although the introduction of a penalty is probably the most popular type, it is just one of multiple forms of regularization. In this dissertation, we focus on the applications of regularization for obtaining sparse or parsimonious representations, where only a subset of the inputs is used. A particular form of regularization, L1-regularization, plays a key role in reaching sparsity. Most of the contributions presented here revolve around L1-regularization, although other forms of regularization are explored (also pursuing sparsity in some sense). In addition to presenting a compact review of L1-regularization and its applications in statistics and machine learning, we devise methodology for regression, supervised classification and structure induction of graphical models. Within the regression paradigm, we focus on kernel smoothing regression, proposing techniques for kernel design that are suitable for high-dimensional settings and sparse regression functions. We also present an application of regularized regression techniques for modeling the response of biological neurons. Supervised classification advances deal, on the one hand, with the application of regularization for obtaining a naïve Bayes classifier and, on the other hand, with a novel algorithm for brain-computer interface design that uses group regularization in an efficient manner. Finally, we present a heuristic for inducing the structure of Gaussian Bayesian networks using L1-regularization as a filter.
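For readers unfamiliar with L1-regularization, here is a minimal sketch of its sparsifying effect, using scikit-learn's Lasso (a standard tool, not the dissertation's own code); the data are synthetic.

```python
# Sketch: L1-regularization (lasso) recovering a sparse linear model.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 100, 50                      # many inputs, few of them active
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]         # only 3 inputs actually matter
y = X @ beta + 0.1 * rng.standard_normal(n)

fit = Lasso(alpha=0.1).fit(X, y)
print("nonzero coefficients:", np.flatnonzero(fit.coef_))  # ideally [0 1 2]
```

The L1 penalty shrinks most coefficients exactly to zero, which is the sense in which only a subset of the inputs ends up being used.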
Abstract:
"System identification deals with the problem of building mathematical models of dynamical systems based on observed data from the system" [1]. In the context of civil engineering, the system refers to a large-scale structure such as a building, bridge, or offshore structure, and identification mostly involves the determination of modal parameters (the natural frequencies, damping ratios, and mode shapes). This paper presents some modal identification results obtained using a state-of-the-art time domain system identification method (data-driven stochastic subspace algorithms [2]) applied to output-only data measured on a steel arch bridge. First, a three-dimensional finite element model was developed for the numerical analysis of the structure using ANSYS. Modal analysis was carried out and modal parameters were extracted in the frequency range of interest, 0-10 Hz. The results obtained from the finite element modal analysis were used to determine the location of the sensors. After that, ambient vibration tests were conducted during April 23-24, 2009. The response of the structure was measured using eight accelerometers. Two stations of three sensors each were formed (triaxial stations); these sensors were held stationary for reference during the test. The two remaining sensors were placed at different measurement points along the bridge deck, at which only vertical and transversal measurements were taken (biaxial stations). Point and interval estimates have been carried out on the state space model using these ambient vibration measurements. In the case of parametric models (like state space models), the dynamic behaviour of a system is described using mathematical models; mathematical relationships can then be established between modal parameters and estimated point parameters (thus, it is common to use experimental modal analysis as a synonym for system identification). Stable modal parameters are found using a stabilization diagram. Furthermore, this paper proposes a method for assessing the precision of estimates of the parameters of state-space models (confidence intervals). This approach employs the nonparametric bootstrap procedure [3] and is applied to the subspace parameter estimation algorithm. Using the bootstrap results, a plot similar to a stabilization diagram is developed. These graphics differentiate system modes from spurious noise modes for a given system order. Additionally, using the modal assurance criterion, the experimental modes obtained have been compared with those evaluated from the finite element analysis. Quite good agreement between numerical and experimental results is observed.
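The bootstrap layer described above can be sketched as follows. A real implementation would re-run the stochastic subspace algorithm on each resample; here `identify_modes` is a toy spectral-peak stand-in, and the sampling rate, block length and mode count are hypothetical.

```python
# Sketch: nonparametric block-bootstrap confidence intervals for natural
# frequencies identified from an ambient vibration record.
import numpy as np

def identify_modes(acc, fs=100.0, k=3):
    """Toy stand-in for a subspace algorithm: return the k largest spectral
    peaks (Hz) of channel 0 of an (n_samples, n_channels) record."""
    f = np.fft.rfftfreq(len(acc), 1.0 / fs)
    p = np.abs(np.fft.rfft(acc[:, 0] - acc[:, 0].mean())) ** 2
    return np.sort(f[np.argsort(p)[-k:]])

def bootstrap_freqs(acc, n_boot=200, block=512, seed=0):
    rng = np.random.default_rng(seed)
    n = len(acc)
    starts = np.arange(0, n - block)
    boots = []
    for _ in range(n_boot):
        # Resample whole blocks to preserve the signal's correlation structure.
        idx = np.concatenate([np.arange(s, s + block)
                              for s in rng.choice(starts, n // block)])
        boots.append(identify_modes(acc[idx]))
    # 95 per cent percentile intervals, one per identified mode.
    return np.percentile(np.array(boots), [2.5, 97.5], axis=0)
```

Plotting the bootstrap distribution of frequencies against model order yields the stabilization-diagram-like graphic the paper describes: physical modes recur with tight intervals, spurious noise modes scatter.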
Abstract:
Loading and unloading activities are part of the essential functional nature of a port; they generate a large share of its income and determine the efficiency of the supply chain as a whole. Oscillations inside a basin and along a berthing line diminish the quality of the vessels' stay in port, reduce the performance of cargo stowage, and load and fatigue the structures and the moored floating bodies. If the parameters that define the local agitation approach failure or shutdown regions, the subsystem loses performance and reliability, and operations are eventually paralyzed, thereby producing downtime. These operational stops entail economic losses for the terminal and, consequently, for the port. Today, vast monitoring networks aimed at characterizing the physical environment in the vicinity of ports are available. In parallel, cargo handling operations at terminals are moving towards automation or semi-automation models that allow not only the systematization of processes but also a deep understanding of the workflow. In this context, there is a lack of information about how the different forcing agents of the physical environment affect the performance and the functional safety of the loading and unloading process. This is largely due to the lack of long-term records that would allow all the aspects mentioned to be correlated for each berthing and mooring line of a port.
This thesis develops a methodology for non-intrusive, low-cost video monitoring based on the application of "pixel tool" techniques and on obtaining the extrinsic parameters of a monofocal observation. It seeks to add value to the video surveillance infrastructure of ports and of reduced-scale experimental laboratories, in order to facilitate the study of the operational thresholds of berthing and mooring areas.
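A minimal sketch of the extrinsic-parameter step, using OpenCV's solvePnP with surveyed reference points on the quay plane; the intrinsic matrix, point coordinates and zero-distortion assumption below are hypothetical placeholders, not values from the thesis.

```python
# Sketch: recovering camera extrinsic parameters (rotation, translation) from
# surveyed quay reference points seen in the image, via OpenCV's solvePnP.
import numpy as np
import cv2

object_pts = np.array([[0, 0, 0], [10, 0, 0], [10, 5, 0], [0, 5, 0]],
                      dtype=np.float64)            # metres, quay plane z = 0
image_pts = np.array([[320, 410], [880, 400], [860, 250], [300, 230]],
                     dtype=np.float64)             # pixels

K = np.array([[1000, 0, 640],                      # focal length and
              [0, 1000, 360],                      # principal point (pixels)
              [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)                                 # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)                         # rotation matrix
print("camera position (world):", (-R.T @ tvec).ravel())
```

With the extrinsics known, pixel displacements of moored bodies can be mapped back to metric displacements on the berthing line, which is what makes existing surveillance cameras usable as low-cost sensors.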
Abstract:
We consider a robust version of the classical Wald test statistics for testing simple and composite null hypotheses in general parametric models. These test statistics are based on minimum density power divergence estimators instead of maximum likelihood estimators. An extensive study of their robustness properties is given through the influence functions as well as the chi-square inflation factors. It is theoretically established that the level and power of these robust tests are stable against outliers, whereas the classical Wald test breaks down. Some numerical examples confirm the validity of the theoretical results.
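A minimal sketch of the idea for a normal location model, assuming a hypothetical tuning parameter alpha = 0.5 and a bootstrap standard error in place of the asymptotic variance derived in the paper:

```python
# Sketch: robust Wald test of H0: mu = 0 for a normal mean, replacing the MLE
# by the minimum density power divergence estimator (MDPDE).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def mdpde_mean(x, sigma=1.0, alpha=0.5):
    """Minimize the density power divergence objective in mu (sigma known):
    integral(f^(1+alpha)) - (1 + 1/alpha) * mean(f(x_i)^alpha)."""
    const = (2 * np.pi * sigma**2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    def obj(mu):
        f = norm.pdf(x, mu, sigma)
        return const - (1 + 1 / alpha) * np.mean(f ** alpha)
    return minimize_scalar(obj, bounds=(x.min(), x.max()),
                           method="bounded").x

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 95), rng.normal(8, 1, 5)])  # 5% outliers

mu_hat = mdpde_mean(x)
boot = [mdpde_mean(rng.choice(x, len(x))) for _ in range(200)]
wald = ((mu_hat - 0.0) / np.std(boot, ddof=1)) ** 2   # H0: mu = 0
print(f"MDPDE mu = {mu_hat:.3f}, Wald chi2 = {wald:.2f}")
```

With 5 per cent gross outliers, the MDPDE stays near the true mean and the Wald statistic remains calibrated under H0, whereas the sample mean underlying the classical Wald test is dragged away from zero.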
Abstract:
The aim of this study is to analyze the impact that the announcement of the award of a quality certificate (ISO 9000) has on the market value of the firm and on the volatility of its share price. The sample includes all firms that, having obtained a quality certificate, were listed on the Spanish secondary stock market between 1993 and 1999. To measure the impact of certification on performance, excess returns were analyzed, while the change in volatility was assessed with four tests: two parametric, one non-parametric, and a proposed semi-parametric test. The results indicate that the capital market reacts positively to the award of this certificate, which also produces an increase in the volatility of share prices.
Abstract:
The aim of this study is to analyze the impact that the announcement of the award of a quality certificate (ISO 9000) has on the market value of the firm and on the volatility of its share price. In addition, several factors that determine the impact of certification on returns are examined. The sample includes all firms that, having been listed on the Spanish continuous market between 1993 and 1999, obtained a quality certificate. To measure the impact of certification on performance, excess returns were analyzed, while the change in volatility was assessed with four tests: two parametric, one non-parametric, and a proposed semi-parametric test. The results indicate that the market reacts positively to the award of this certificate, which also produces an increase in the volatility of share prices.
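Both studies rest on the same event-study machinery; the following sketch shows a market-model abnormal-return computation and a simple F-type variance comparison. The window lengths, array names and the particular test are illustrative assumptions, not the papers' specifications.

```python
# Sketch: market-model abnormal returns around a certification announcement
# and a parametric test for a volatility shift.
import numpy as np
from scipy import stats

def event_study(r_stock, r_market, event, est=100, win=5):
    """r_stock, r_market: daily return arrays; event: announcement index."""
    a, b = event - est - win, event - win            # estimation window
    slope, intercept, *_ = stats.linregress(r_market[a:b], r_stock[a:b])
    ar = r_stock - (intercept + slope * r_market)    # abnormal returns
    car = ar[event - win:event + win + 1].sum()      # cumulative AR
    # F-type test: post-event vs. pre-event variance of abnormal returns.
    pre, post = ar[a:b], ar[event:event + est]
    f = post.var(ddof=1) / pre.var(ddof=1)
    p = 1 - stats.f.cdf(f, len(post) - 1, len(pre) - 1)
    return car, f, p
```

A positive cumulative abnormal return alongside a significant variance ratio would reproduce the qualitative pattern both abstracts report: higher value and higher volatility after certification.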
Abstract:
Context. The X-ray spectra observed in the persistent emission of magnetars are evidence for the existence of a magnetosphere. The high-energy part of the spectra is explained by resonant cyclotron upscattering of soft thermal photons in a twisted magnetosphere, which has motivated an increasing number of efforts to improve and generalize existing magnetosphere models. Aims. We want to build more general configurations of twisted, force-free magnetospheres as a first step to understanding the role played by the magnetic field geometry in the observed spectra. Methods. First, we reviewed and extended previous analytical works to assess the viability and limitations of semi-analytical approaches. Second, we built a numerical code able to relax an initial configuration of a nonrotating magnetosphere to a force-free geometry, given any arbitrary form of the magnetic field at the star surface. The numerical code is based on a finite-difference time-domain, divergence-free, and conservative scheme, built on the magneto-frictional method used in other scenarios. Results. We obtain new numerical configurations of twisted magnetospheres, with distributions of twist and currents that differ from previous analytical solutions. The range of global twist of the new family of solutions is similar to that of the existing semi-analytical models (up to some radians), but the achieved geometry may be quite different. Conclusions. The geometry of twisted, force-free magnetospheres shows a wider variety of possibilities than previously considered. This has implications for the observed spectra and opens the possibility of implementing alternative models in simulations of radiative transfer aimed at providing spectra to be compared with observations.
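A heavily simplified toy version of one magneto-frictional update is sketched below. It is nothing like the paper's divergence-free, conservative FDTD scheme (it enforces neither property and ignores boundary conditions), but it conveys the relaxation idea: drive a frictional velocity proportional to the Lorentz force and advect the field until the force vanishes.

```python
# Toy sketch of one magneto-frictional relaxation step on a 2-D grid
# (fields depend on x, y only; d/dz = 0 throughout).
import numpy as np

def step(B, dx, nu=0.1, dt=0.01):
    """B: array of shape (3, ny, nx). Returns B after one frictional update."""
    # Current density J = curl B (z-invariant case).
    dBz_dy, dBz_dx = np.gradient(B[2], dx)
    _, dBy_dx = np.gradient(B[1], dx)
    dBx_dy, _ = np.gradient(B[0], dx)
    J = np.array([dBz_dy, -dBz_dx, dBy_dx - dBx_dy])
    # Frictional velocity proportional to the Lorentz force J x B.
    F = np.cross(J, B, axis=0)
    v = nu * F / (np.sum(B**2, axis=0) + 1e-12)
    # Induction equation: dB/dt = curl(v x B).
    w = np.cross(v, B, axis=0)
    dwz_dy, dwz_dx = np.gradient(w[2], dx)
    _, dwy_dx = np.gradient(w[1], dx)
    dwx_dy, _ = np.gradient(w[0], dx)
    curl_w = np.array([dwz_dy, -dwz_dx, dwy_dx - dwx_dy])
    return B + dt * curl_w
```

Iterating such steps from an arbitrary boundary field drives |J x B| toward zero, which is the defining property of the force-free configurations the paper constructs.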
Abstract:
This paper investigates the impact of subsidies from the common agricultural policy on the total factor productivity of farms in the EU. We employ a structural, semi-parametric estimation algorithm, directly incorporating the effect of subsidies into a model of unobserved productivity. We empirically study the effects using samples from the Farm Accountancy Data Network for EU-15 countries. Our main findings are clear: subsidies had a negative impact on farm productivity in the period before the decoupling reform was implemented; after decoupling the effect of subsidies on productivity was more nuanced, as in several countries it turned positive.
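As a stylized illustration only: the paper's structural, semi-parametric algorithm is far richer, but the basic question of how subsidies relate to the unobserved-productivity component of output can be caricatured with an OLS residual, as in the hypothetical sketch below.

```python
# Stylized sketch, not the paper's estimator: take the Cobb-Douglas residual
# as (log) TFP and relate it to subsidies. Variable names are hypothetical;
# the paper instead lets subsidies enter a model of unobserved productivity.
import numpy as np

def subsidy_productivity_slope(y, l, k, sub):
    """y, l, k: logs of output, labour, capital; sub: subsidy intensity."""
    X = np.column_stack([np.ones_like(y), l, k])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    tfp = y - X @ beta                      # residual log-productivity
    Xs = np.column_stack([np.ones_like(sub), sub])
    gamma, *_ = np.linalg.lstsq(Xs, tfp, rcond=None)
    return gamma[1]                         # negative => subsidies lower TFP
```

The structural approach exists precisely because this naive residual regression is biased when input choices respond to productivity; the sketch only fixes the sign convention of the question.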
Abstract:
The recent deregulation of electricity markets worldwide has heightened the importance of risk management in energy markets. Assessing Value-at-Risk (VaR) in electricity markets is arguably more difficult than in traditional financial markets because the distinctive features of the former result in a highly unusual distribution of returns: electricity returns are highly volatile, display seasonalities in both their mean and volatility, exhibit leverage effects and clustering in volatility, and feature extreme levels of skewness and kurtosis. With electricity applications in mind, this paper proposes a model that accommodates autoregression and weekly seasonals in both the conditional mean and conditional volatility of returns, as well as leverage effects via an EGARCH specification. In addition, extreme value theory (EVT) is adopted to explicitly model the tails of the return distribution. Compared to a number of other parametric models and simple approaches based on historical simulation, the proposed EVT-based model performs well in forecasting out-of-sample VaR. In addition, statistical tests show that the proposed model provides appropriate interval coverage in both unconditional and, more importantly, conditional contexts. Overall, the results are encouraging in suggesting that the proposed EVT-based model is a useful technique for forecasting VaR in electricity markets.
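A sketch of the EGARCH-plus-EVT recipe follows, assuming the Python `arch` package for the EGARCH filter and a generalized Pareto tail fitted to standardized residuals. The AR(1) mean, threshold choice and confidence level are assumptions; the paper's exact specification (including weekly seasonals in mean and volatility) is not reproduced.

```python
# Sketch: filter returns with an EGARCH model, fit a generalized Pareto tail
# to the standardized residuals, and form a one-step-ahead VaR.
import numpy as np
from arch import arch_model
from scipy.stats import genpareto

def evt_var(returns, level=0.99, tail_frac=0.1):
    am = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, mean="AR", lags=1)
    res = am.fit(disp="off")
    z = res.std_resid[~np.isnan(res.std_resid)]      # standardized residuals
    # Peaks-over-threshold: fit a GPD to losses beyond a high threshold.
    losses = -z
    u = np.quantile(losses, 1 - tail_frac)
    xi, _, sigma = genpareto.fit(losses[losses > u] - u, floc=0)
    # Tail quantile of z, then rescale with next-period mean/vol forecasts.
    p_exc = (level - (1 - tail_frac)) / tail_frac
    z_q = u + genpareto.ppf(p_exc, xi, loc=0, scale=sigma)
    f = res.forecast(horizon=1)
    mu = f.mean.values[-1, 0]
    sig = np.sqrt(f.variance.values[-1, 0])
    return -(mu - sig * z_q)                         # VaR as a positive loss
```

Conditioning the EVT quantile on the forecast volatility is what gives the conditional coverage the abstract emphasizes; an unconditional tail fit alone would miss volatility clustering.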
Abstract:
We present new measurements of the luminosity function (LF) of luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS) and the 2dF-SDSS LRG and Quasar (2SLAQ) survey. We have carefully quantified, and corrected for, uncertainties in the K and evolutionary corrections, differences in the colour selection methods, and the effects of photometric errors, thus ensuring that we are studying the same galaxy population in both surveys. Using a limited subset of 6326 SDSS LRGs (with 0.17 < z < 0.24) and 1725 2SLAQ LRGs (with 0.5 < z < 0.6), for which the matching colour selection is most reliable, we find no evidence for any additional evolution in the LRG LF, over this redshift range, beyond that expected from a simple passive evolution model. This lack of additional evolution is quantified using the comoving luminosity density of SDSS and 2SLAQ LRGs brighter than M_0.2r - 5 log h_0.7 = -22.5, which are 2.51 ± 0.03 × 10^-7 L⊙ Mpc^-3 and 2.44 ± 0.15 × 10^-7 L⊙ Mpc^-3, respectively (< 10 per cent uncertainty). We compare our LFs to the COMBO-17 data and find excellent agreement over the same redshift range. Together, these surveys show no evidence for additional evolution (beyond passive) in the LF of LRGs brighter than M_0.2r - 5 log h_0.7 = -21 (or brighter than ~L*). We test our SDSS and 2SLAQ LFs against a simple 'dry merger' model for the evolution of massive red galaxies and find that at least half of the LRGs at z ≃ 0.2 must already have been well assembled (with more than half their stellar mass) by z ≃ 0.6. This limit is barely consistent with recent results from semi-analytical models of galaxy evolution.
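For concreteness, a comoving luminosity density above a magnitude cut can be computed by integrating a luminosity function; the sketch below uses a Schechter form with hypothetical parameter values (the paper's measured LFs are not reproduced here).

```python
# Sketch: comoving luminosity density from a Schechter LF, integrated above
# a luminosity cut analogous to the abstract's M_0.2r - 5 log h_0.7 = -22.5.
import numpy as np
from scipy.integrate import quad

phi_star, L_star, alpha = 1e-4, 1.0, -1.0   # Mpc^-3, L in units of L*, slope

def schechter(L):
    """Schechter LF: phi(L) = (phi*/L*) (L/L*)^alpha exp(-L/L*)."""
    x = L / L_star
    return phi_star * x**alpha * np.exp(-x) / L_star

M_cut, M_star = -22.5, -21.5                # M_star is a hypothetical fiducial
L_min = 10 ** (-0.4 * (M_cut - M_star))     # luminosity cut in units of L*
rho_L, _ = quad(lambda L: L * schechter(L), L_min, 50 * L_star)
print(f"luminosity density above cut: {rho_L:.3e} (L* Mpc^-3)")
```

Comparing such densities at two redshifts, after passive-evolution corrections, is the test the abstract uses: matching values imply no additional evolution in the bright LRG population.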
Abstract:
We present a detailed investigation into the recent star formation histories of 5697 luminous red galaxies (LRGs), based on the Hδ (4101 Å) and [O II] (3727 Å) lines and the D4000 index. LRGs are luminous (L > 3L*) galaxies which have been selected to have photometric properties consistent with an old, passively evolving stellar population. For this study, we utilize LRGs from the recently completed 2dF-SDSS LRG and QSO survey (2SLAQ). Equivalent widths of the Hδ and [O II] lines are measured and used to define three spectral types: those with only strong Hδ absorption (k+a), those with strong [O II] in emission (em), and those with both (em+a). All other LRGs are considered to have passive star formation histories. The vast majority of LRGs are found to be passive (~80 per cent); however, significant numbers of k+a (2.7 per cent), em+a (1.2 per cent) and em LRGs (8.6 per cent) are identified. An investigation into the redshift dependence of these fractions is also performed. A sample of SDSS MAIN galaxies with colours and luminosities consistent with the 2SLAQ LRGs is selected to provide a low-redshift comparison. While the em and em+a fractions are consistent with the low-redshift SDSS sample, the fraction of k+a LRGs is found to increase significantly with redshift. This result is interpreted as an indication of an increasing amount of recent star formation activity in LRGs with redshift. By considering the expected lifetime of the k+a phase, the number of LRGs which will undergo a k+a phase can be estimated. A crude comparison of this estimate with the predictions from semi-analytic models of galaxy formation shows that the predicted level of k+a and em+a activity is not sufficient to reconcile the predicted mass growth for massive early types in a hierarchical merging scenario.
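The three-way typing can be expressed as a small classification rule on the two equivalent widths; the cut values and sign convention below are hypothetical, as the paper defines its own criteria.

```python
# Sketch of the spectral typing from the two equivalent widths (Angstrom).
def lrg_type(ew_hdelta: float, ew_oii: float,
             hd_cut: float = 2.0, oii_cut: float = -2.5) -> str:
    """Assumes absorption is positive for Hdelta and emission is negative
    for [O II]; the threshold values are illustrative placeholders."""
    strong_hd = ew_hdelta > hd_cut       # strong Balmer absorption
    strong_oii = ew_oii < oii_cut        # significant [O II] emission
    if strong_hd and strong_oii:
        return "em+a"
    if strong_hd:
        return "k+a"
    if strong_oii:
        return "em"
    return "passive"

print(lrg_type(3.1, -1.0))  # -> "k+a"
```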