20 results for Statistical Convergence
Abstract:
Growth and Convergence: The Case of China
Since the initiation of economic reforms in 1978, China has become one of the world's fastest-growing economies. This rapid growth, however, has not been shared equally across China's regions, and the substantial differences in incomes and growth rates across the Chinese regions have attracted the attention of many researchers. This book focuses on issues of economic growth and convergence across the Chinese regions over the past three decades. The book has eight chapters; apart from an introductory chapter and a concluding chapter, each of the remaining chapters deals with a particular aspect of the central issue of regional growth and convergence in China over this period. The book is organized as follows. Chapter 1 introduces the basic issues addressed in the book. Chapter 2 tests for economic growth and convergence across 31 Chinese provinces during 1981-2005 within the theoretical framework of the Solow growth model (a canonical convergence regression of the kind such tests use is sketched below). Chapter 3 investigates the relationship between openness to foreign economic activity, such as foreign trade and foreign direct investment, and regional economic growth in China during 1981-2005. Chapter 4, based on data for 31 Chinese provinces over the period 1980-2004, presents new evidence on the effects of structural shocks and structural transformation on growth and convergence among the Chinese regions. Chapter 5 builds an empirical model that accounts for the different potential effects of foreign direct investment and focuses on its impact on China's regional economic performance and growth. Chapter 6 reconsiders the growth and convergence problem of the Chinese regions in an alternative theoretical framework with endogenous saving behavior and capital mobility across regions. Chapter 7 builds a theoretical model of comparative advantage and transaction efficiency and focuses on one of the potential mechanisms through which China has achieved its fast economic growth over the past few decades. Chapter 8 concludes by summarizing the results of the previous chapters and suggesting directions for further research.
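The abstract does not reproduce Chapter 2's estimating equation. Purely as an illustrative sketch, a canonical cross-sectional beta-convergence regression of the kind Solow-based tests typically estimate is

\[
  \frac{1}{T}\,\ln\frac{y_{i,T}}{y_{i,0}} \;=\; \alpha + \beta\,\ln y_{i,0} + \varepsilon_i ,
\]

where \(y_{i,0}\) and \(y_{i,T}\) denote initial and final per-capita income of province \(i\) over a span of \(T\) years. A significantly negative \(\beta\) indicates that initially poorer provinces grow faster, which is the usual operational meaning of convergence; the book's actual specification and conditioning variables may differ.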
Abstract:
Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census had developed a sampling design for the Current Population Survey (CPS) in the 1940s. Another significant factor was that digital computers became available to statisticians. In the early 1950s the theory was documented in textbooks on survey sampling.

This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781 he published a plan for a partial investigation in which he determined the sample size needed to reach a desired accuracy of estimation (a modern restatement of this sample-size calculation is sketched below). The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem, both published in a 1774 memoir that is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which he expressed by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive.

At the 1894 meeting of the International Statistical Institute, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples: the sample should be a miniature of the population. This idea still prevails. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, at the beginning of the 20th century, carried out several surveys in the UK. He also developed a theory of statistical inference for finite populations, based on Laplace's inference model.

R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics. In addition, he introduced a new statistical inference model which is still the prevailing paradigm. Its essential idea is that samples are drawn repeatedly from the same population and that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's model makes no assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling was a theory of double sampling, which gave statisticians at the U.S. Census Bureau the central idea for the complex survey design of the CPS. An important criterion was to have a method whose data-collection costs were acceptable and which provided approximately equal interviewer workloads, besides sufficient accuracy of estimation.
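Laplace's sample-size determination is described only verbally above. As an illustration in modern frequentist notation (Laplace himself reasoned via inverse, i.e. posterior, probability, so this is a textbook restatement rather than his derivation): to estimate a proportion \(p\) from \(n\) Bernoulli trials with tolerated error \(e\) at confidence level \(1-\alpha\), the normal approximation furnished by the Central Limit Theorem gives

\[
  n \;\ge\; \frac{z_{\alpha/2}^{2}\,\hat{p}\,(1-\hat{p})}{e^{2}} ,
\]

where \(z_{\alpha/2}\) is the standard normal quantile and \(\hat{p}\) is a preliminary estimate of the proportion (with \(\hat{p} = 1/2\) as the worst case).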
Abstract:
This study investigates the process of producing interactivity in a converged media environment. The study asks whether more media convergence equals more interactivity. The research object is approached through semi-structured interviews with prominent decision makers within the Finnish media. The main focus of the study is the three big traditional media, radio, television and the printing press, and their ability to adapt to the changing environment. The study develops theoretical models for the analysis of interactive features and of convergence. Case studies are formed from the interview data and evaluated against the models; as a result, the cases are plotted and compared on a four-fold table. The cases are Radio Rock, NRJ, Big Brother, Television Chat, Olivia and Sanoma News. It is found that the theoretical models can accurately forecast the results of the case studies. The models are also able to distinguish different aspects of both interactivity and convergence, so that a case which at first glance does not seem very interactive in the end receives the second-highest scores in the analysis. The highest scores are received by Big Brother and Sanoma News. Through the theory and the analysis of the research data, it is found that the concepts of interactivity and convergence are intimately intertwined and in many cases very hard to separate from each other. Hence the answer to the main question of this study is yes: convergence does promote interactivity and audience participation. The main theoretical background for the analysis of interactivity follows the work of Carrie Heeter, Spiro Kiousis and Sally McMillan; Heeter's six-dimensional definition of interactivity is used as the basis for operationalizing interactivity. Actor-network theory is used as the main theoretical framework for analyzing convergence; its definition and operationalization into a model of convergence follows the work of Michel Callon, Bruno Latour and especially John Law and Felix Stalder.