970 results for 1403 Econometrics
Abstract:
This work is a diachronic linguistic analysis of a judicial proceeding on procuring held between December 1403 and April 1404 in Binissalem (Mallorca). It originated in our interest in the non-literary language of medieval Catalan texts. The study is divided into sections in which we analyse the different linguistic levels (spelling and phonetics, morphosyntax, and lexicon). The results obtained allow us to confirm and expand what is known about the state of the language in medieval Mallorca; among them we highlight several innovative features that are detailed in the analysis.
Abstract:
BACKGROUND: Complications associated with intrathecal pumps may be linked to the surgical procedure, the implanted device, or the medication itself. CASE REPORTS: Three patients treated chronically with intrathecal clonidine presented with clonidine overdose due to inadvertent extravasation during the refilling procedure. All patients experienced loss of consciousness and severe systemic hypertension that required aggressive parenteral treatment. DISCUSSION: Clonidine is an alpha-2 agonist with nearly 100% bioavailability after oral or rectal administration. At the high plasma concentrations that follow a massive systemic overdose, the specificity for the alpha-2 receptor is lost, alpha-1 agonist activity predominates, and marked hypertension results. Management of clonidine overdose consists of supportive therapy guided by signs and symptoms. CONCLUSION: Inadvertent injection into the subcutaneous pocket rather than the reservoir is rare but very dangerous, as the drug cannot be retrieved and massive doses are involved. The signs and symptoms of systemic overdose with drugs commonly used in implanted drug delivery systems should be well known to ensure early diagnosis and treatment.
Abstract:
This paper develops an approach to rank testing that nests all existing rank tests and simplifies their asymptotics. The approach is based on the fact that implicit in every rank test there are estimators of the null spaces of the matrix in question. The approach yields many new insights about the behavior of rank testing statistics under the null as well as under local and global alternatives, in both the standard and the cointegration setting. The approach also suggests many new rank tests based on alternative estimates of the null spaces, as well as the new fixed-b theory. A brief Monte Carlo study illustrates the results.
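As an aside for readers new to the idea, the following is a minimal numpy illustration of the ingredient the abstract refers to: recovering the rank and an estimate of the null space of a matrix from a noisy estimate via its singular value decomposition. The thresholding rule below is a crude stand-in for a formal rank test statistic, not the tests proposed in the paper.

```python
# Minimal illustration, not the paper's tests: estimating the rank and the
# null space of a matrix from a noisy estimate via its SVD.
import numpy as np

rng = np.random.default_rng(0)

# True 4x4 matrix of rank 2.
A = rng.normal(size=(4, 2)) @ rng.normal(size=(2, 4))

# Noisy estimate, as delivered by some first-stage estimator.
A_hat = A + 0.05 * rng.normal(size=A.shape)

U, s, Vt = np.linalg.svd(A_hat)
print("singular values:", np.round(s, 3))

# Crude rank decision by thresholding small singular values; the trailing
# right-singular vectors then estimate (a basis of) the right null space.
r_hat = int(np.sum(s > 0.2))
null_space_hat = Vt[r_hat:].T
print("estimated rank:", r_hat)
print("null space basis shape:", null_space_hat.shape)
```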
Abstract:
Background: In January 2011 Spain modified the clean air legislation in force since 2006, removing all existing exceptions applicable to hospitality venues. Although this legal reform was backed by all political parties with parliamentary representation, the government's initiative was contested by the tobacco industry and its allies in the hospitality industry. One of the most voiced arguments against the reform was its potentially disruptive effect on the revenue of hospitality venues. This paper evaluates the impact of the reform on household expenditure at restaurants and at bars and cafeterias. Methods and empirical strategy: We use micro-data from the Encuesta de Presupuestos Familiares (EPF) for the years 2006 to 2012 to estimate "two-part" models in which the probability of observing a positive expenditure and, for those who spend, the expected level of expenditure are functions of an array of explanatory variables. We apply a before-after analysis with a wide range of controls for confounding factors and a flexible modeling of time effects. Results: In line with the majority of studies that analyze the effects of smoking bans using objective data, our results suggest that the reform did not cause reductions in households' expenditure on restaurant services or on bar and cafeteria services.
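For readers unfamiliar with "two-part" expenditure models, the sketch below shows their general structure in Python with statsmodels. The variables (any_spend, after_ban, log_income) are hypothetical placeholders, not the EPF microdata or the specification used in the paper.

```python
# Hedged sketch of a generic two-part expenditure model (not the paper's exact
# specification): a logit for the probability of any spending, plus a
# log-linear regression for the level of spending among spenders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "after_ban": rng.integers(0, 2, n),       # hypothetical reform indicator
    "log_income": rng.normal(10.0, 0.5, n),   # hypothetical household income
})
# Simulated expenditure: some households spend nothing at all.
df["expenditure"] = rng.gamma(2.0, 50.0, n) * rng.integers(0, 2, n)
df["any_spend"] = (df["expenditure"] > 0).astype(int)

# Part 1: probability of observing a positive expenditure.
part1 = smf.logit("any_spend ~ after_ban + log_income", data=df).fit(disp=0)

# Part 2: expected (log) expenditure conditional on spending.
spenders = df[df["any_spend"] == 1].copy()
spenders["log_exp"] = np.log(spenders["expenditure"])
part2 = smf.ols("log_exp ~ after_ban + log_income", data=spenders).fit()

print(part1.params)
print(part2.params)
```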
Abstract:
Spatial data analysis, mapping and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations; methods of geostatistics, i.e. the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANN) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996); etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, and often by noise of unknown nature. That is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as the realization of some spatial random process. To obtain an estimate with kriging one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if the number of measurements is insufficient, and the variogram is sensitive to outliers and extremes. ANNs are a powerful tool, but they also suffer from a number of limitations. ANNs of a special type (multilayer perceptrons) are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear and robust to noise in the measurements, that can deal with small empirical datasets, and that has a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression. SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks. The results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of SVM for regression, Support Vector Regression (SVR), are less studied. First results of the application of SVR to spatial mapping of physical quantities were obtained by the authors for mapping of medium porosity (Kanevski et al., 1999) and for mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for the nonlinear modeling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping on a real case study: soil pollution by the Cs137 radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
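As a rough illustration of what SVR-based spatial mapping involves (not the authors' implementation or their Cs137 data), the following scikit-learn sketch fits an RBF-kernel SVR to scattered synthetic measurements and predicts on a regular grid.

```python
# Hedged sketch: Support Vector Regression for spatial mapping, predicting a
# pollution-like variable from (x, y) coordinates on synthetic data.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic "measurements": a smooth spatial field plus noise, mimicking
# scattered monitoring points.
coords = rng.uniform(0, 10, size=(200, 2))
values = (np.sin(coords[:, 0]) + 0.5 * np.cos(coords[:, 1])
          + 0.1 * rng.normal(size=200))

model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(coords, values)

# Predict on a regular grid for mapping/visualization.
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
predicted_map = model.predict(grid).reshape(gx.shape)
print(predicted_map.shape)
```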
Abstract:
The final year project came to us as an opportunity to get involved in a topic that had proved attractive during our economics degree: statistics and its application to the analysis of economic data, i.e. econometrics. Moreover, the combination of econometrics and computer science is a very hot topic nowadays, given the Information Technologies boom of the last decades and the consequent exponential increase in the amount of data collected and stored day by day. Data analysts able to deal with Big Data and to extract useful results from it are in great demand and, in our view, the work they do, although sometimes controversial in terms of ethics, is a clear source of added value both for private corporations and for the public sector. For these reasons, the essence of this project is the study of a statistical instrument, directly related to computer science, that is valid for the analysis of large datasets: Partial Correlation Networks. The structure of the project has been determined by our objectives throughout its development. First, the characteristics of the instrument are explained, from the basic ideas up to the features of the underlying model, with the final goal of presenting the SPACE model as a tool for estimating interconnections between elements in large data sets. Afterwards, an illustrated simulation is performed in order to show the power and efficiency of the model. Finally, the model is put into practice by analyzing a relatively large set of real-world data, with the objective of assessing whether the proposed statistical instrument is valid and useful when applied to a real multivariate time series. In short, our main goals are to present the model and to evaluate whether Partial Correlation Network Analysis is an effective and useful instrument that allows valuable results to be extracted from Big Data. The findings of this project suggest that the Partial Correlation Estimation by Joint Sparse Regression Models approach presented by Peng et al. (2009) works well under the assumption of sparsity of the data. Moreover, partial correlation networks are shown to be a valid tool for representing cross-sectional interconnections between elements in large data sets. The scope of this project is nevertheless limited, as there are sections in which a deeper analysis would have been appropriate. Considering intertemporal connections between elements, the choice of the tuning parameter lambda, and a deeper analysis of the results in the real-data application are examples of aspects in which this project could be extended. To sum up, the statistical tool analyzed has proved to be a very useful instrument for finding relationships connecting the elements of a large data set. Partial correlation networks allow the owner of such a set to observe and analyze existing linkages that could otherwise have been overlooked.
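The SPACE estimator of Peng et al. (2009) is not available in standard Python libraries, so the sketch below uses the graphical lasso as a stand-in for the same underlying idea: estimate a sparse inverse covariance matrix, convert it to partial correlations, and read off the network edges. It is an illustrative approximation, not the project's method.

```python
# Hedged sketch: estimating a sparse partial correlation network with the
# graphical lasso (a stand-in for the SPACE estimator of Peng et al., 2009).
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))   # placeholder data: n observations x p variables
X[:, 1] += 0.8 * X[:, 0]         # induce a genuine link between variables 0 and 1

gl = GraphicalLassoCV().fit(X)
precision = gl.precision_

# Convert the precision matrix to partial correlations:
# rho_ij = -P_ij / sqrt(P_ii * P_jj)
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Edges of the network: pairs with a non-negligible partial correlation.
edges = np.argwhere(np.triu(np.abs(partial_corr) > 0.05, k=1))
print(edges)
```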
Abstract:
This paper suggests a method for obtaining efficiency bounds in models containing either only infinite-dimensional parameters or both finite- and infinite-dimensional parameters (semiparametric models). The method is based on a theory of random linear functionals applied to the gradient of the log-likelihood functional and is illustrated by computing the lower bound for Cox's regression model.
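For context, in the Cox illustration the gradient referred to above is the score of the usual partial log-likelihood; a standard statement (in conventional notation, not necessarily the paper's) is:

```latex
% Cox partial log-likelihood and its score (standard notation; delta_i is the
% event indicator and R(t_i) the risk set at time t_i).
\ell(\beta) = \sum_{i:\,\delta_i = 1}
  \Bigl[ x_i'\beta \;-\; \log \sum_{j \in R(t_i)} \exp(x_j'\beta) \Bigr],
\qquad
\frac{\partial \ell(\beta)}{\partial \beta}
  = \sum_{i:\,\delta_i = 1}
  \Bigl[ x_i \;-\; \frac{\sum_{j \in R(t_i)} x_j \exp(x_j'\beta)}
                        {\sum_{j \in R(t_i)} \exp(x_j'\beta)} \Bigr].
```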
Abstract:
Panel data can be arranged into a matrix in two ways, called 'long' and 'wide' formats (LF and WF). The two formats suggest two alternative model approaches for analyzing panel data: (i) univariate regression with varying intercept; and (ii) multivariate regression with latent variables (a particular case of structural equation model, SEM). The present paper compares the two approaches, showing in which circumstances they yield equivalent (in some cases, even numerically equal) results. We show that the univariate approach gives results equivalent to the multivariate approach when restrictions of time invariance (in the paper, the TI assumption) are imposed on the parameters of the multivariate model. It is shown that the restrictions implicit in the univariate approach can be assessed by chi-square difference testing of two nested multivariate models. In addition, common tests encountered in the econometric analysis of panel data, such as the Hausman test, are shown to have an equivalent representation as chi-square difference tests. Commonalities and differences between the univariate and multivariate approaches are illustrated using an empirical panel data set of firms' profitability as well as simulated panel data.
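To make the long/wide distinction concrete, here is a tiny pandas sketch with a made-up firm-profitability panel (not the paper's data set).

```python
# Hedged sketch of the long vs. wide panel formats discussed above.
import pandas as pd

long_format = pd.DataFrame({
    "firm": ["A", "A", "A", "B", "B", "B"],
    "year": [2001, 2002, 2003, 2001, 2002, 2003],
    "profit": [1.2, 1.5, 1.1, 0.7, 0.9, 0.8],
})

# Long format (LF): one row per firm-year; suits univariate regression with a
# varying (firm-specific) intercept, e.g. fixed/random effects models.
print(long_format)

# Wide format (WF): one row per firm, one column per year; suits a
# multivariate regression / SEM treatment of the repeated measures.
wide_format = long_format.pivot(index="firm", columns="year", values="profit")
print(wide_format)
```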
Abstract:
We propose new methods for evaluating predictive densities that focus on the models' actual predictive ability in finite samples. The tests offer a simple way of evaluating the correct specification of predictive densities, either parametric or non-parametric. The results indicate that our tests are well sized and have good power in detecting mis-specification in predictive densities. An empirical application to the Survey of Professional Forecasters and a baseline Dynamic Stochastic General Equilibrium model shows the usefulness of our methodology.
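As background only (this is a generic calibration check, not the tests proposed in the paper), a common way to inspect a parametric predictive density is through the probability integral transform: under correct specification the transformed outcomes are uniform.

```python
# Generic illustration: checking a candidate predictive density via the
# probability integral transform (PIT) and a uniformity test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Realized outcomes and a candidate one-step-ahead predictive density N(0, 1).
outcomes = rng.normal(loc=0.0, scale=1.3, size=500)   # truth has a larger variance
pit = stats.norm.cdf(outcomes, loc=0.0, scale=1.0)    # PIT under the candidate density

# Simple uniformity check (Kolmogorov-Smirnov against U(0,1)); misspecification
# of the predictive density shows up as a small p-value.
ks_stat, p_value = stats.kstest(pit, "uniform")
print(ks_stat, p_value)
```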
Abstract:
The three essays constituting this thesis focus on financing and cash management policy. The first essay aims to shed light on why firms issue debt so conservatively. In particular, it examines the effects of shareholder and creditor protection on capital structure choices. It starts by building a contingent claims model where financing policy results from a trade-off between tax benefits, contracting costs and agency costs. In this setup, controlling shareholders can divert part of the firm's cash flows as private benefits at the expense of minority shareholders. In addition, shareholders as a class can behave strategically at the time of default, leading to deviations from the absolute priority rule. The analysis demonstrates that investor protection is a first-order determinant of firms' financing choices and that conflicts of interest between firm claimholders may help explain the level and cross-sectional variation of observed leverage ratios. The second essay focuses on the practical relevance of agency conflicts. Despite the theoretical development of the literature on agency conflicts and firm policy choices, the magnitude of manager-shareholder conflicts is still an open question. This essay proposes a methodology for quantifying these agency conflicts. To do so, it examines the impact of managerial entrenchment on corporate financing decisions. It builds a dynamic contingent claims model in which managers do not act in the best interest of shareholders, but rather pursue private benefits at the expense of shareholders. Managers have discretion over financing and dividend policies. However, shareholders can remove the manager at a cost. The analysis demonstrates that entrenched managers restructure less frequently and issue less debt than is optimal for shareholders. I take the model to the data and use observed financing choices to provide firm-specific estimates of the degree of managerial entrenchment. Using structural econometrics, I find costs of control challenges of 2-7% on average (0.8-5% at the median). The estimates of the agency costs vary with variables that one expects to determine managerial incentives. In addition, these costs are sufficient to resolve the low- and zero-leverage puzzles and to explain the time series of observed leverage ratios. Finally, the analysis shows that governance mechanisms significantly affect the value of control and firms' financing decisions. The third essay is concerned with the time trend in corporate cash holdings documented by Bates, Kahle and Stulz (BKS, 2003). BKS find that firms' cash holdings doubled from 10% to 20% over the 1980 to 2005 period. This essay provides an explanation of this phenomenon by examining the effects of product market competition on firms' cash holdings in the presence of financial constraints. It develops a real options model in which cash holdings may be used to cover unexpected operating losses and avoid inefficient closure. The model generates new predictions relating cash holdings to firm and industry characteristics such as the intensity of competition, cash flow volatility, or financing constraints. The empirical examination of the model shows strong support for its predictions. In addition, it shows that the time trend in cash holdings documented by BKS can be at least partly attributed to a competition effect.
Abstract:
In this article, we analyze the aggregate volatility of a stylized economy in which agents are connected through networks. When there are strategic relationships between agents' actions, idiosyncratic shocks can generate aggregate fluctuations. We show that aggregate volatility depends on the network structure of the economy in two ways. On the one hand, the more connections there are in the economy as a whole, the lower the aggregate volatility. On the other hand, the more concentrated the connections are, the higher the aggregate volatility. We present an application of our theoretical predictions using US data on intrasectoral connections and firm diversification.
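A toy simulation in the spirit of the abstract's two comparative statics, under an assumed linear network-interactions model x = (I - rho*W)^(-1) * eps (a standard illustration, not the article's model), contrasts an evenly connected network with a concentrated star network.

```python
# Toy simulation (not the article's model): aggregate volatility when agents'
# actions respond to neighbours' actions, x = (I - rho*W)^{-1} eps, comparing
# an evenly connected network with a concentrated (star-shaped) one.
import numpy as np

rng = np.random.default_rng(0)
n, rho, reps = 20, 0.5, 2000

def aggregate_volatility(W):
    """Std. dev. of the average action when idiosyncratic shocks hit each agent."""
    M = np.linalg.inv(np.eye(n) - rho * W)
    draws = [np.mean(M @ rng.normal(size=n)) for _ in range(reps)]
    return np.std(draws)

# Even network: every agent puts equal weight on all others.
W_even = (np.ones((n, n)) - np.eye(n)) / (n - 1)

# Concentrated network: everyone interacts only with agent 0 (a star).
W_star = np.zeros((n, n))
W_star[1:, 0] = 1.0
W_star[0, 1:] = 1.0 / (n - 1)

print("even network:", round(aggregate_volatility(W_even), 4))
print("star network:", round(aggregate_volatility(W_star), 4))
```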