717 results for China, Capital structure, Dynamic panel data models, Listed property company
Abstract:
This study analyzes the effect of fiscal decentralization on health outcomes in China using a panel data set with nationwide county-level data. We find that, if certain conditions are met, counties in more fiscally decentralized provinces have lower infant mortality rates than counties in which the provincial government retains the main spending authority. Spending responsibilities at the local level need to be matched with the county government's own fiscal capacity. For local governments with only limited revenues, the ability to spend on local public goods such as health care depends crucially on intergovernmental transfers. The findings of this study thereby support the common assertion that fiscal decentralization can indeed lead to more efficient production of local public goods, but also highlight the conditions necessary to make this happen.
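As a rough illustration of the kind of county fixed-effects specification described above, here is a minimal sketch in Python; the file name, column names, and the interaction term are hypothetical, not taken from the study:

```python
# Sketch: infant mortality regressed on provincial fiscal decentralization,
# its interaction with county fiscal capacity, and intergovernmental transfers.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("county_panel.csv")  # hypothetical: county, year, imr, decent, capacity, transfers
df["decent_x_capacity"] = df["decent"] * df["capacity"]

# Within transformation: demean by county to absorb county fixed effects
# (year effects omitted here for brevity).
cols = ["imr", "decent", "capacity", "decent_x_capacity", "transfers"]
within = df[cols] - df.groupby("county")[cols].transform("mean")

X = within[["decent", "capacity", "decent_x_capacity", "transfers"]]
fit = sm.OLS(within["imr"], X).fit(cov_type="cluster", cov_kwds={"groups": df["county"]})
print(fit.summary())
```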
Financial permeation as a role of microfinance: has microfinance actually been helpful to the poor?
Abstract:
This article is distinctive in its application of the logit transformation to the poverty ratio for the purpose of empirically examining whether the financial sector helps improve standards of living for low-income people. We propose the term financial permeation to describe how financial networks expand to spread money among the poor. We measure financial permeation by three indicators related to microfinance institutions (MFIs) and then examine its effect on poverty reduction at the macro level, using panel data for 90 developing countries from 1995 to 2008. We find that financial permeation has a statistically significant and robust effect on decreasing the poverty ratio.
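The transformation at the core of the approach can be written as follows; the panel specification on the right-hand side is our illustrative sketch (with F_it the three MFI-based permeation indicators and X_it controls), not an equation quoted from the article:

```latex
\operatorname{logit}(p_{it}) = \ln\frac{p_{it}}{1 - p_{it}}
  = \alpha_i + \beta' F_{it} + \gamma' X_{it} + \varepsilon_{it}
```

Because the logit maps the bounded poverty ratio p_it in (0, 1) onto the whole real line, a linear panel model can be applied to the transformed variable.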
Abstract:
In 2000, the Ramadan school vacation coincided with the original annual exam period of December in Bangladesh. This forced schools to bring their final exam schedules forward to November, the month before the harvest begins. 'Ramadan 2000' is thus a natural experiment that reduced the demand for child labor during the exam period. Using household-level panel data from 2000 and 2003, and after controlling for various unobservables, including individual fixed effects, aggregate year effects, and subdistrict-level year effects, this paper finds evidence of a statistically significant impact of seasonal labor demand on school dropout among children from agricultural households in Bangladesh.
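In reduced form, the strategy amounts to a specification like the following (illustrative notation, not the paper's own), where D_ist indicates dropout of child i in subdistrict s and year t, and L_ist is the seasonal labor demand shifted by the Ramadan 2000 exam timing:

```latex
D_{ist} = \alpha_i + \delta_t + \mu_{st} + \beta\, L_{ist} + \varepsilon_{ist}
```

The individual effects \alpha_i, year effects \delta_t, and subdistrict-year effects \mu_{st} correspond to the controls listed above.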
Abstract:
We propose a method for decomposing inequality changes based on panel data regression. The method is an efficient way to quantify the contributions of variables to changes in the Theil T index while satisfying the property of uniform addition. We illustrate the method using prefectural data from Japan for the period 1955 to 1998. Japan experienced a decline in regional income disparity during the years of high economic growth from 1955 to 1973. After estimating production functions using panel data for Japanese prefectures, we apply the new decomposition approach to identify each production factor's contribution to the changes in per capita income inequality among prefectures. The decomposition results show that total factor productivity (residual) growth, population change (migration), and public capital stock growth all contributed to the decline in per capita income disparity.
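For reference, the Theil T index over n prefectures with per capita incomes y_i and mean \bar{y} is

```latex
T = \frac{1}{n} \sum_{i=1}^{n} \frac{y_i}{\bar{y}} \ln\frac{y_i}{\bar{y}}
```

Uniform addition is the requirement that adding an equal absolute amount to all incomes does not increase measured inequality; the proposed decomposition preserves this property while attributing changes in T to the estimated production factors.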
Abstract:
In this study, we examine voting behavior in Indonesian parliamentary elections from 1999 to 2014. After summarizing the changes in Indonesian parties' vote shares from a historical standpoint, we investigate voting behavior with simple regression models to analyze the effect of regional characteristics on Islamic and secular parties' vote shares, using aggregated panel data at the district level. We also test the hypothesis of retrospective economic voting. The results show that districts which formerly stood strongly behind Islamic parties continued to support those parties, or in some elections preferred abstention to voting for them. From the standpoint of retrospective economic voting, we find that districts which experienced higher per capita economic growth gave more support to the ruling parties, although our results remain tentative because information on 2014 is not yet available.
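A minimal sketch of the retrospective-voting test, with hypothetical file and column names (the abstract does not list the paper's exact controls):

```python
# Ruling parties' vote share regressed on district per capita growth,
# with election-year dummies and errors clustered by district.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("district_panel.csv")  # hypothetical: district, year, ruling_share, growth_pc
fit = smf.ols("ruling_share ~ growth_pc + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["district"]}
)
print(fit.params["growth_pc"])  # positive under retrospective economic voting
```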
Abstract:
This paper investigates the current situation of industrial agglomeration in Costa Rica, using firm-level panel data for the period 2008-2012. We calculated the Location Quotient and the Theil index based on employment by industry and found that 14 cantons host industrial agglomerations across 9 industries. The analysis is consistent with the nature of the specific industries, the development of areas of concentration around free zones, and the evolving participation of Costa Rica in global value chains (GVCs).
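The Location Quotient compares an industry's employment share in a canton with its share nationally; a minimal computation under assumed column names (not the paper's code):

```python
# LQ = (canton employment share of industry) / (national employment share of industry);
# LQ > 1 signals above-average concentration, a simple agglomeration criterion.
import pandas as pd

emp = pd.read_csv("employment.csv")  # hypothetical: canton, industry, employment
canton_total = emp.groupby("canton")["employment"].transform("sum")
industry_total = emp.groupby("industry")["employment"].transform("sum")
national_total = emp["employment"].sum()

emp["LQ"] = (emp["employment"] / canton_total) / (industry_total / national_total)
print(emp[emp["LQ"] > 1.0].sort_values("LQ", ascending=False).head())
```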
Abstract:
Previous literature generally predicts that individuals with higher skills work in industries with longer production chains. However, the opposite skill-sorting pattern, a "negative skill-sorting" phenomenon, is also observed in reality. This paper proposes a mechanism by which both patterns can arise and shows that negative skill sorting is more likely to occur when the quality of intermediate inputs degrades rapidly (or improves slowly) along the production chain. We empirically confirm our theoretical prediction using country-industry panel data. The results are robust across estimation methods, control variables, and industry coverage. This study has important implications for understanding countries' comparative advantages and development patterns.
Abstract:
The efficiency of power optimization tools depends on the information on design power provided by power estimation models. Power models targeting different power groups can enable fast identification of the most power-consuming parts of a design and their properties. The accuracy of these estimation models depends strongly on the accuracy of the method used for their characterization, and the highest precision is achieved by using physical on-board measurements. In this paper, we present a measurement methodology that is primarily aimed at calibrating and validating high-level dynamic power estimation models. The measurements have been carefully designed to enable the separation of the interconnect power from the logic power and the power of the clock circuitry, so that each of these power groups can be used to validate the corresponding model. The standard measurement uncertainty is lower than 2% of the measured value even with a very small number of repeated measurements. Additionally, the accuracy of a commercial low-level power estimation tool has also been assessed for comparison purposes. The results indicate that the tool is not suitable for power estimation of datapath-oriented designs.
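The quoted figure refers to the standard (Type A) uncertainty of repeated measurements, presumably computed in the usual way: for n repetitions x_1, ..., x_n with mean \bar{x},

```latex
u(\bar{x}) = \frac{s}{\sqrt{n}}, \qquad
s = \sqrt{\frac{1}{n-1}\sum_{k=1}^{n}\left(x_k - \bar{x}\right)^2}
```

so a relative uncertainty u(\bar{x})/\bar{x} below 2% is attainable even for small n when the board-level readings are stable.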
Abstract:
In the EHE-08 standard, durability acquires for the first time the status of a limit state. Article 8 defines the Durability Limit State as that produced by physical and chemical actions, distinct from the loads and actions considered in structural analysis, which can degrade the concrete and reinforcement to unacceptable limits. This limit state can be verified through a procedure set out in the articles of the standard. The procedure is based on tables that, depending on the aggressiveness of the environment in which the structure is located, the concrete strength, and the design service life, set the required quality of the concrete cover (minimum thickness and maximum water-cement ratio of the concrete used) and the maximum crack width. This procedure, simple in its application, yields solutions with a large margin of safety. In addition, in Annex 9, the EHE-08 standard offers models for checking the durability limit state in cases of reinforcement corrosion due to carbonation of the concrete or the ingress of chloride ions. The results obtained with these models are tighter than those obtained with the procedure of the articles. In this paper we use both methods to study reinforced concrete structures with potential reinforcement corrosion problems due to carbonation of the concrete, and we then compare the results obtained by the two procedures. The results demonstrate that the models in Annex 9 of the EHE-08 standard yield more economical solutions than those obtained using the procedure of the articles.
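The carbonation check rests on the familiar square-root-of-time penetration law; the sketch below is generic, and the coefficient K is a placeholder rather than one of EHE-08's calibrated Annex 9 values:

```python
import math

def carbonation_depth(K_mm_per_sqrt_year: float, t_years: float) -> float:
    """Generic square-root-of-time carbonation law: d = K * sqrt(t)."""
    return K_mm_per_sqrt_year * math.sqrt(t_years)

def initiation_time(cover_mm: float, K_mm_per_sqrt_year: float) -> float:
    """Years until the carbonation front reaches the reinforcement."""
    return (cover_mm / K_mm_per_sqrt_year) ** 2

# E.g., a 30 mm cover with K = 3 mm/sqrt(year) gives a 100-year initiation period.
print(initiation_time(cover_mm=30.0, K_mm_per_sqrt_year=3.0))
```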
Abstract:
EURATOM/CIEMAT and the Technical University of Madrid (UPM) have been involved in the development of an FPSC [1] (Fast Plant System Control) prototype for ITER, based on PXIe (PCI eXtensions for Instrumentation). One of the main focuses of this project has been data acquisition and all related issues, including scientific data archiving. Additionally, a new data archiving solution has been developed to demonstrate the obtainable performance and possible bottlenecks of scientific data archiving in Fast Plant System Control. The presented system implements a fault-tolerant architecture over a GEthernet network in which FPSC data are reliably archived remotely, while remaining accessible for redistribution, within the duration of a pulse. The storage service is supported by a clustering solution to guarantee scalability, so that FPSC management and configuration may be simplified and a unique view of all archived data provided. All the involved components have been integrated under EPICS [2] (Experimental Physics and Industrial Control System), implementing in each case the necessary extensions, state machines and configuration process variables. The prototyped solution is based on the NetCDF-4 [3], [4] (Network Common Data Format) file format in order to incorporate important features, such as support for scientific data models, management of very large files, platform-independent encoding, and single-writer/multiple-reader concurrency. In this contribution, a complete description of the above-mentioned solution is presented, together with the most relevant results of the tests performed, focusing on the benefits and limitations of the applied technologies.
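A minimal sketch of writing pulse data with the netCDF4 Python bindings, illustrating the unlimited time dimension and per-variable compression that make NetCDF-4 attractive here; the file and variable names are ours, not the prototype's:

```python
from netCDF4 import Dataset
import numpy as np

with Dataset("pulse_0001.nc", "w", format="NETCDF4") as nc:
    nc.createDimension("time", None)  # unlimited: grows as the pulse proceeds
    t = nc.createVariable("time", "f8", ("time",))
    sig = nc.createVariable("signal", "f4", ("time",), zlib=True)  # compressed channel
    t.units = "s"

    samples = np.random.rand(1000).astype("f4")  # stand-in for acquired FPSC data
    sig[:] = samples
    t[:] = np.arange(samples.size) * 1e-3  # 1 kHz sampling, for illustration
```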
Abstract:
Traditional ballasted track structures are still being used successfully on high-speed railway lines; however, technical problems or performance requirements have led to non-ballasted track solutions in some cases. Considerable maintenance work is needed for ballasted tracks due to track deterioration. It is therefore very important to understand the mechanism of track deterioration and to predict the track settlement or the growth rate of track irregularity, in order to reduce track maintenance costs and enable new track structures to be designed. The objective of this work is to develop adequate and efficient models for calculating the effects of dynamic traffic loads on railway track infrastructure, and then to evaluate the dynamic effect on ballasted track settlement using a settlement prediction model that consists of the previously selected vehicle/track dynamic model and a track settlement law. The calculations are based on dynamic finite element models with direct time integration, wheel-rail contact, and interaction with the railway cars. An initial irregularity profile is used in the prediction model. The track settlement law is taken to be a function of the number of loading cycles and the magnitude of the loading, representing the long-term behavior of ballast settlement. The results obtained include the track irregularity growth and the contact force in the final iteration of the numerical simulation.
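The abstract does not reproduce the settlement law itself; empirical laws of the following general form in the number of cycles N and the load magnitude P are commonly used and match the description (a, b, m and the reference load P_0 are calibration constants, assumed here for illustration):

```latex
S(N, P) = a \left(\frac{P}{P_0}\right)^{m} N^{\,b}
```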
Abstract:
Pragmatism is the leading motivation of regularization. We can understand regularization as a modification of the maximum-likelihood estimator so that a reasonable answer can be given in an unstable or ill-posed situation. To mention some typical examples, this happens when fitting parametric or non-parametric models with more parameters than data, or when estimating large covariance matrices. Regularization is also commonly used to improve the bias-variance tradeoff of an estimation. The definition of regularization is therefore quite general and, although the introduction of a penalty is probably the most popular type, it is just one of multiple forms of regularization. In this dissertation, we focus on applications of regularization for obtaining sparse or parsimonious representations, where only a subset of the inputs is used. A particular form of regularization, L1-regularization, plays a key role in reaching sparsity. Most of the contributions presented here revolve around L1-regularization, although other forms of regularization are explored (also pursuing sparsity in some sense). In addition to presenting a compact review of L1-regularization and its applications in statistics and machine learning, we devise methodology for regression, supervised classification and structure induction of graphical models. Within the regression paradigm, we focus on kernel smoothing learning, proposing techniques for kernel design that are suitable for high-dimensional settings and sparse regression functions. We also present an application of regularized regression techniques for modeling the response of biological neurons. The supervised classification advances deal, on the one hand, with the application of regularization for obtaining a naïve Bayes classifier and, on the other hand, with a novel algorithm for brain-computer interface design that uses group regularization in an efficient manner. Finally, we present a heuristic for inducing the structure of Gaussian Bayesian networks using L1-regularization as a filter.
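As a concrete reference point for L1-regularization, here is a minimal lasso example on synthetic data with a sparse ground truth (illustrative only, not one of the dissertation's experiments):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
beta = np.zeros(50)
beta[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]  # only 5 of the 50 inputs are relevant
y = X @ beta + 0.1 * rng.standard_normal(200)

model = Lasso(alpha=0.1).fit(X, y)  # L1 penalty drives most coefficients to exactly zero
print("non-zero coefficients:", np.flatnonzero(model.coef_))
```

The L1 penalty's non-differentiability at zero is what produces exact zeros, and hence the sparse, parsimonious representations the dissertation pursues.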
Abstract:
Sensor networks are increasingly becoming one of the main sources of Big Data on the Web. However, the observations they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for purposes other than those for which they were originally set up. In this thesis we address these challenges, considering how we can transform raw streaming data into rich ontology-based information that is accessible through continuous queries over streaming data. Our main contribution is an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. We introduce novel query rewriting and data translation techniques that rely on mapping definitions relating streaming data models to ontological concepts. Specific contributions include:
• The syntax and semantics of the SPARQLStream query language for ontology-based data access, and a query rewriting approach for transforming SPARQLStream queries into streaming algebra expressions.
• The design of an ontology-based streaming data access engine that can internally reuse an existing data stream engine, complex event processor or sensor middleware, using R2RML mappings to define relationships between streaming data models and ontology concepts.
Concerning the sensor metadata of such streaming data sources, we have investigated how raw measurements can be used to characterize streaming data, producing enriched data descriptions in terms of ontological models. Our specific contributions are:
• A representation of sensor data time series that captures gradient information useful for characterizing types of sensor data.
• A method for classifying sensor data time series and determining the type of data using data mining techniques, and a method for extracting semantic sensor metadata features from the time series.
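To convey the flavor of the mapping step (this is not the thesis's engine or its exact vocabularies), a small sketch that lifts one raw sensor tuple into RDF using the W3C SOSA/SSN ontology:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")  # W3C SOSA ontology
EX = Namespace("http://example.org/stream#")    # hypothetical mapping target

g = Graph()
obs = EX["obs-42"]  # one observation from the raw stream
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX["sensor-7"]))
g.add((obs, SOSA.hasSimpleResult, Literal(21.5, datatype=XSD.double)))
print(g.serialize(format="turtle"))
```

In the approach described above, such triples would not be materialized by hand but produced, or queries over them rewritten, from R2RML-style mapping definitions.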