962 results for Auctions Econometrics
Abstract:
Spatial data analysis, mapping and visualization is of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations; methods of geostatistics, i.e. the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANN) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996); etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, often of unknown nature. That is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as a realization of some spatial random process. To obtain an estimate with kriging one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if there is not a sufficient number of measurements, and the variogram is sensitive to outliers and extremes. ANN is a powerful tool, but it also suffers from a number of drawbacks. ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear and robust to noise in the measurements, can deal with small empirical datasets, and has a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression. SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks. The results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of SVM for regression, Support Vector Regression (SVR), are less studied. First results of the application of SVR for spatial mapping of physical quantities were obtained by the authors for mapping of medium porosity (Kanevski et al., 1999) and for mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for nonlinear modeling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping in a real case study: soil pollution by the Cs137 radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
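For readers who want to see the flavour of the approach, here is a minimal sketch of epsilon-SVR applied to 2-D spatial interpolation, written in Python with scikit-learn as a stand-in implementation; the coordinates, measurement values and hyperparameters are illustrative placeholders, not the Cs137 case-study data.

# Minimal sketch: epsilon-SVR for 2-D spatial interpolation (scikit-learn stand-in).
# All data and hyperparameters below are illustrative placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(200, 2))                          # sampling locations (x, y)
z = np.sin(X[:, 0]) * np.cos(X[:, 1]) + rng.normal(0.0, 0.1, 200)  # noisy measurements

# The RBF kernel provides the nonlinearity; epsilon sets the width of the
# insensitivity tube and C the trade-off between flatness and data fit.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X, z)

# Predict on a regular grid to build the map.
gx, gy = np.meshgrid(np.linspace(0.0, 10.0, 50), np.linspace(0.0, 10.0, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_hat = model.predict(grid).reshape(gx.shape)                      # (50, 50) prediction surface

The robustness to noise discussed in Section 4 stems from the epsilon-insensitive loss: residuals smaller than epsilon do not affect the fit.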
Abstract:
The final year project came to us as an opportunity to get involved in a topic which had appeared attractive while majoring in economics: statistics and its application to the analysis of economic data, i.e. econometrics. Moreover, the combination of econometrics and computer science is a very hot topic nowadays, given the Information Technologies boom of the last decades and the consequent exponential increase in the amount of data collected and stored every day. Data analysts able to deal with Big Data and to extract useful results from it are in high demand and, in our understanding, the work they do, although sometimes controversial in terms of ethics, is a clear source of added value both for private corporations and for the public sector. For these reasons, the essence of this project is the study of a statistical instrument, directly related to computer science, that is valid for the analysis of large datasets: Partial Correlation Networks. The structure of the project has been determined by our objectives as the work developed. First, the characteristics of the studied instrument are explained, from the basic ideas up to the features of the model behind it, with the final goal of presenting the SPACE model as a tool for estimating interconnections between elements in large data sets. Afterwards, an illustrative simulation is performed in order to show the power and efficiency of the model presented. Finally, the model is put into practice by analysing a relatively large set of real-world data, with the objective of assessing whether the proposed statistical instrument is valid and useful when applied to a real multivariate time series. In short, our main goals are to present the model and to evaluate whether Partial Correlation Network Analysis is an effective, useful instrument that allows valuable results to be drawn from Big Data. As a result, the findings throughout this project suggest that the Partial Correlation Estimation by Joint Sparse Regression Models approach presented by Peng et al. (2009) works well under the assumption of sparsity of the data. Moreover, partial correlation networks are shown to be a very valid tool for representing cross-sectional interconnections between elements in large data sets. The scope of this project is, however, limited, as there are some sections in which deeper analysis would have been appropriate. Considering intertemporal connections between elements, the choice of the tuning parameter lambda, or a deeper analysis of the results in the real-data application are examples of aspects in which this project could be extended. To sum up, the analysed statistical tool has proved to be a very useful instrument for finding relationships that connect the elements present in a large data set. After all, partial correlation networks allow the owner of such a set to observe and analyse existing linkages that could otherwise have been overlooked.
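The SPACE estimator of Peng et al. (2009) is not reproduced here; as a hedged illustration of the same idea, the Python sketch below uses scikit-learn's GraphicalLasso, a related sparse precision-matrix estimator, to derive partial correlations and read off the edges of the network. The data and the alpha value are toy placeholders.

# Sketch of a sparse partial-correlation network.
# GraphicalLasso is used as a readily available stand-in for SPACE (an assumption,
# not the method of the project); data and tuning value are toy placeholders.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))            # 500 observations of 20 variables

gl = GraphicalLasso(alpha=0.05).fit(X)    # alpha plays the role of the tuning parameter lambda
P = gl.precision_                         # estimated sparse precision matrix

# Partial correlation between variables i and j: -P_ij / sqrt(P_ii * P_jj)
d = np.sqrt(np.diag(P))
partial_corr = -P / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Edges of the network: pairs with a non-negligible partial correlation.
edges = np.argwhere(np.triu(np.abs(partial_corr) > 0.05, k=1))
print(len(edges), "edges")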
Abstract:
This paper suggests a method for obtaining efficiency bounds in models containing either only infinite-dimensional parameters or both finite- and infinite-dimensional parameters (semiparametric models). The method is based on a theory of random linear functionals applied to the gradient of the log-likelihood functional and is illustrated by computing the lower bound for Cox's regression model.
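For context, the Cox regression model referred to above is the canonical semiparametric example (standard textbook background, not a derivation from the paper):

\lambda(t \mid x) = \lambda_0(t)\, \exp(x^{\top}\beta),

where the baseline hazard \lambda_0(t) is left unspecified (the infinite-dimensional parameter) and \beta is the finite-dimensional regression parameter; the efficiency bound gives the smallest asymptotic variance attainable by regular estimators of \beta.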
Abstract:
Panel data can be arranged into a matrix in two ways, called 'long' and 'wide' formats (LF and WF). The two formats suggest two alternative model approaches for analyzing panel data: (i) univariate regression with varying intercept; and (ii) multivariate regression with latent variables (a particular case of structural equation model, SEM). The present paper compares the two approaches, showing in which circumstances they yield equivalent (in some cases, even numerically equal) results. We show that the univariate approach gives results equivalent to the multivariate approach when restrictions of time invariance (in the paper, the TI assumption) are imposed on the parameters of the multivariate model. It is shown that the restrictions implicit in the univariate approach can be assessed by chi-square difference testing of two nested multivariate models. In addition, common tests encountered in the econometric analysis of panel data, such as the Hausman test, are shown to have an equivalent representation as chi-square difference tests. Commonalities and differences between the univariate and multivariate approaches are illustrated using an empirical panel data set of firms' profitability as well as simulated panel data.
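As a small illustration of the two arrangements (firm identifiers, years and profitability values are hypothetical), the Python sketch below pivots a long-format panel into wide format, one column per time period:

# Toy illustration of the 'long' (LF) vs 'wide' (WF) panel formats.
# Firms, years and profitability figures are hypothetical.
import pandas as pd

long_df = pd.DataFrame({
    "firm":   ["A", "A", "B", "B", "C", "C"],
    "year":   [2001, 2002, 2001, 2002, 2001, 2002],
    "profit": [0.12, 0.15, 0.08, 0.10, 0.20, 0.18],
})

# Long format: one row per firm-year (suits univariate varying-intercept regression).
# Wide format: one row per firm, one column per year (suits SEM / latent-variable models).
wide_df = long_df.pivot(index="firm", columns="year", values="profit")
print(wide_df)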
Abstract:
We propose new methods for evaluating predictive densities that focus on the models' actual predictive ability in finite samples. The tests offer a simple way of evaluating the correct specification of predictive densities, either parametric or non-parametric. The results indicate that our tests are well sized and have good power in detecting mis-specification in predictive densities. An empirical application to the Survey of Professional Forecasters and a baseline Dynamic Stochastic General Equilibrium model shows the usefulness of our methodology.
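The paper's own tests are not reproduced here; as a hedged illustration of a common ingredient in density-forecast evaluation, the Python sketch below computes probability integral transforms (PITs), which are i.i.d. Uniform(0, 1) when the predictive density is correctly specified, and checks uniformity with a Kolmogorov-Smirnov test. The data and the assumed predictive density are toy placeholders.

# Generic PIT-based calibration check for a predictive density
# (an illustration only, not the tests proposed in the paper).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = rng.normal(loc=0.5, scale=1.0, size=300)      # realized outcomes (toy data)

# Forecaster's predictive density: N(0, 1). Under correct specification,
# the PITs u_t = F(y_t) are i.i.d. Uniform(0, 1).
u = stats.norm.cdf(y, loc=0.0, scale=1.0)

ks_stat, p_value = stats.kstest(u, "uniform")     # simple uniformity check
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")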
Abstract:
The spread of the Internet has created a new kind of framework for electronic commerce between companies. Old EDI-based systems are perceived as expensive and binding, and web-technology-based systems are being built in their place. The purpose of this Master's thesis was to describe, in as much detail as possible, the development and deployment project of a web-based procurement tool, and to assess its success and the effects of its adoption. The case company of the study is the assortment and logistics company Inex Partners Oy. The study is action research in nature, since the researcher is part of the research subject. The theoretical part of the work discusses procurement strategy from the perspective of supplier selection and supplier relationship management. Electronic commerce and the applications it offers for procurement are then presented. The development project produced an electronic SRM supplier portal that enables requests for quotation to be sent and received between Inex and its suppliers. The portal is integrated with Inex's enterprise resource planning (ERP) system so that purchase orders generated on the basis of the requests for quotation are transferred from one system to the other. The conclusions of the study are that the development project was, in principle, correctly executed and in line with the company's strategy. However, the fit of the resulting supplier portal with the product group's sourcing strategy had not been taken sufficiently into account, and the complexity of the technical implementation also poses challenges for the use of the tool. As further development needs, the study proposes tailoring the portal to support procurement in Inex's other product groups, as well as developing and deploying the portal's auction functionality.
Abstract:
The three essays constituting this thesis focus on financing and cash management policy. The first essay aims to shed light on why firms issue debt so conservatively. In particular, it examines the effects of shareholder and creditor protection on capital structure choices. It starts by building a contingent claims model where financing policy results from a trade-off between tax benefits, contracting costs and agency costs. In this setup, controlling shareholders can divert part of the firms' cash flows as private benefits at the expense of minority shareholders. In addition, shareholders as a class can behave strategically at the time of default, leading to deviations from the absolute priority rule. The analysis demonstrates that investor protection is a first-order determinant of firms' financing choices and that conflicts of interest between firm claimholders may help explain the level and cross-sectional variation of observed leverage ratios. The second essay focuses on the practical relevance of agency conflicts. Despite the theoretical development of the literature on agency conflicts and firm policy choices, the magnitude of manager-shareholder conflicts is still an open question. This essay proposes a methodology for quantifying these agency conflicts. To do so, it examines the impact of managerial entrenchment on corporate financing decisions. It builds a dynamic contingent claims model in which managers do not act in the best interest of shareholders, but rather pursue private benefits at the expense of shareholders. Managers have discretion over financing and dividend policies; however, shareholders can remove the manager at a cost. The analysis demonstrates that entrenched managers restructure less frequently and issue less debt than is optimal for shareholders. I take the model to the data and use observed financing choices to provide firm-specific estimates of the degree of managerial entrenchment. Using structural econometrics, I find costs of control challenges of 2-7% on average (0.8-5% at the median). The estimates of the agency costs vary with variables that one expects to determine managerial incentives. In addition, these costs are sufficient to resolve the low- and zero-leverage puzzles and explain the time series of observed leverage ratios. Finally, the analysis shows that governance mechanisms significantly affect the value of control and firms' financing decisions. The third essay is concerned with the time trend in corporate cash holdings documented by Bates, Kahle and Stulz (BKS, 2003). BKS find that firms' cash holdings double from 10% to 20% over the 1980 to 2005 period. This essay provides an explanation of this phenomenon by examining the effects of product market competition on firms' cash holdings in the presence of financial constraints. It develops a real options model in which cash holdings may be used to cover unexpected operating losses and avoid inefficient closure. The model generates new predictions relating cash holdings to firm and industry characteristics such as the intensity of competition, cash flow volatility, or financing constraints. The empirical examination shows strong support for the model's predictions. In addition, it shows that the time trend in cash holdings documented by BKS can be at least partly attributed to a competition effect.
Abstract:
In this article, we analyze the aggregate volatility of a stylized economy in which agents are connected through networks. When there are strategic relationships between agents' actions, idiosyncratic shocks can generate aggregate fluctuations. We show that aggregate volatility depends on the network structure of the economy in two ways. On the one hand, if there are more connections in the economy as a whole, aggregate volatility is lower. On the other hand, if connections are more concentrated, aggregate volatility is higher. We present an application of our theoretical predictions using US data on intrasectoral connections and firm diversification.
Abstract:
After a period of runaway urban growth, it is timely to take initial stock of the consequences this has had on the spatial structure of the main Spanish cities. A basic element of the analysis is the changes that have taken place in population density. This article studies these transformations using econometric models of urban density. The metropolitan areas investigated are Madrid, Barcelona, Valencia, Sevilla, Bilbao and Zaragoza, and the time period spans 2001 to 2007. For the most populated metropolitan areas, the results indicate significant changes in the parameters and, in two cases, in the functional form of the density itself.
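A classic functional form in this literature, given here as general background rather than as the specification estimated in the article, is the negative exponential density function

D(x) = D_0 \, e^{-\gamma x}, \qquad \ln D(x) = \ln D_0 - \gamma x,

where D(x) is population density at distance x from the centre, D_0 is the central density and \gamma is the density gradient; the log-linear form can be estimated by ordinary least squares.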
Abstract:
The aim of the study was to analyse theories related to business models and, on the basis of different models, to build a clear theory that companies can use when defining and analysing business models. The companies studied could be divided into internally focused and externally oriented ones. Based on this division, it was possible to draw conclusions about the potential of the business models. The study was qualitative in nature. The result of the study is a tool suitable for building and analysing business models, which can be used in a company's strategic planning.
Abstract:
Electronic auctions are virtual marketplaces located somewhere on the Internet. Electronic auctions take place between businesses (B2B), between businesses and consumers (B2C), and among consumers (C2C). In this work, electronic auction refers to the first of these, trading between businesses. The purpose of the work is to study the suitability of a workflow engine as the engine of an electronic auction system. The work examines the open-source ActiveBPEL engine, and the study is carried out by designing, modelling and testing a business process that registers buyer and seller information in the system. The implemented process is one part of an electronic auction, but following the same principle it would also be possible to implement a complete auction. The work considers an electronic auction that is based on web services and has a clear coordinator; the coordinator directs the other participating web services and the operations they execute. The high-level models are described using BPMN notation, and the process itself is implemented in the BPEL language. The ActiveBPEL Designer application is used to model and simulate the process. The goal of the work is not only to implement part of the auction, but also to give the reader an understanding of the business environment to which the auction belongs and to shed light on the technologies behind it. In particular, web services and related concepts become familiar to the reader.