871 results for panel data analysis
Abstract:
Objective: To evaluate the association between Hashimoto's thyroiditis (HT) and papillary thyroid carcinoma (PTC). Materials and Methods: The patients were evaluated by ultrasonography-guided fine needle aspiration cytology. Typical cytopathological aspects and/or classical histopathological findings were taken into consideration in the diagnosis of HT, and only histopathological results were considered in the diagnosis of PTC. Results: Among 1,049 patients with multi- or uninodular goiter (903 women and 146 men), 173 (16.5%) had cytopathological features of thyroiditis. Thirty-three (67.4%) of the 49 operated patients had PTC, 9 (27.3%) of them with histopathological features of HT. Five (31.3%) of the 16 patients with non-malignant disease also had HT. In the groups with HT, PTC, and PTC+HT, the female prevalence rate was 100%, 91.6%, and 77.8%, respectively; mean age was 41.5, 43.3, and 48.5 years, respectively. No association was observed between the two diseases in the present study, where HT occurred in 31.3% of the benign cases and in 27.3% of the malignant cases (p = 0.8). Conclusion: In spite of the absence of association between HT and PTC, the possibility of malignancy in HT should always be considered, because coexistence of the two diseases has already been reported in the literature.
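As a hedged illustration of the reported (non-)association, the 2×2 table implied by the abstract's counts (HT in 9 of 33 malignant and 5 of 16 benign cases) can be tested directly. The abstract does not name the test used, so a chi-squared and a Fisher exact test are both shown as a minimal sketch:

```python
# A minimal sketch; the abstract does not state which test produced
# p = 0.8, so two standard tests on the reported counts are shown.
from scipy.stats import chi2_contingency, fisher_exact

# 2x2 table reconstructed from the reported counts:
# rows: malignant (PTC), benign; columns: HT present, HT absent
table = [[9, 24],   # 9 of 33 malignant cases had HT (27.3%)
         [5, 11]]   # 5 of 16 benign cases had HT (31.3%)

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio, p_exact = fisher_exact(table)
print(f"chi-squared p = {p:.2f}, Fisher exact p = {p_exact:.2f}")
# Both p-values are far above 0.05, consistent with the reported
# absence of association between HT and PTC.
```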
Abstract:
While general equilibrium theories of trade stress the role of third-country effects, little work has been done in the empirical foreign direct investment (FDI) literature to test such spatial linkages. This paper aims to provide further insights into the long-run determinants of Spanish FDI by considering not only bilateral but also spatially weighted third-country determinants. The few studies carried out so far have focused on FDI flows in a limited number of countries; however, Spanish FDI outflows have risen dramatically since 1995 and today account for a substantial part of global FDI. We therefore estimate recently developed spatial panel data models by maximum likelihood (ML) for Spanish outflows (1993-2004) to the top 50 host countries. After controlling for unobservable effects, we find that spatial interdependence matters, and we provide evidence consistent with New Economic Geography (NEG) theories of agglomeration, mainly due to complex (vertical) FDI motivations. Spatial error model estimations also provide illuminating results regarding the transmission mechanism of shocks.
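To illustrate the ML machinery behind such estimations, the following sketch maximizes the concentrated log-likelihood of a cross-sectional spatial error model, y = Xβ + u with u = λWu + ε. This is a deliberate simplification of the spatial panel models used in the paper; the weight matrix W and the data are simulated placeholders, not the paper's variables.

```python
# A minimal sketch of ML estimation for a spatial error model -- a
# simplification of the spatial panel case; W, y, X are simulated.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n, k = 50, 2
W = rng.random((n, n)); np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)          # row-standardized weights
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
beta_true, lam_true = np.array([1.0, 0.5, -0.3]), 0.4
u = np.linalg.solve(np.eye(n) - lam_true * W, rng.normal(size=n))
y = X @ beta_true + u

def neg_concentrated_loglik(lam):
    A = np.eye(n) - lam * W
    Ay, AX = A @ y, A @ X
    beta = np.linalg.lstsq(AX, Ay, rcond=None)[0]  # OLS on transformed data
    e = Ay - AX @ beta
    sigma2 = e @ e / n                             # concentrated-out variance
    sign, logdet = np.linalg.slogdet(A)            # Jacobian term ln|I - lam W|
    return -(logdet - 0.5 * n * np.log(sigma2))

res = minimize_scalar(neg_concentrated_loglik, bounds=(-0.99, 0.99),
                      method="bounded")
print("lambda_hat =", round(res.x, 3))             # close to lam_true = 0.4
```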
Abstract:
The agricultural sector has always been characterized by a predominance of small firms. International competition and the consequent need to restrain costs are permanent challenges for farms. This paper performs an empirical investigation of cost behavior in agriculture using panel data analysis. Our results show that transactions caused by complexity influence farm costs, with opposite effects for specific and indirect costs: while transactions allow economies of scale in specific costs, they significantly increase indirect costs. However, the main driver of farm costs is volume. In addition, important differences exist between small and big farms, since transactional variables significantly influence the former but not the latter. Sophisticated management tools such as activity-based costing (ABC) could provide only limited complementary information, and no essential allocation bases, for farms; they seem especially inappropriate for small farms.
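A hedged sketch of the kind of panel cost regression described here: indirect costs regressed on a volume driver and a transaction-based complexity driver with farm fixed effects. The column names and simulated data are placeholders, not the paper's variables.

```python
# A minimal sketch of a farm-level fixed-effects cost regression,
# assuming hypothetical variable names; data are simulated.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(1)
farms, years = 100, 8
idx = pd.MultiIndex.from_product([range(farms), range(years)],
                                 names=["farm", "year"])
df = pd.DataFrame(index=idx)
df["volume"] = rng.lognormal(mean=3.0, sigma=0.5, size=len(df))
df["transactions"] = rng.poisson(lam=20, size=len(df))
df["indirect_cost"] = (0.8 * df["volume"] + 1.5 * df["transactions"]
                       + rng.normal(scale=5, size=len(df)))

# Farm fixed effects absorb time-invariant heterogeneity across farms.
mod = PanelOLS.from_formula(
    "indirect_cost ~ volume + transactions + EntityEffects", data=df)
print(mod.fit(cov_type="clustered", cluster_entity=True).params)
```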
Abstract:
The use of tolls is becoming widespread around the world. Their ability to fund infrastructure projects and to ease budget constraints has been the main rationale behind this renewed interest. However, less attention has been paid to the safety effects of this policy at a time of increasing concern over road fatalities. Pricing the best infrastructure shifts some drivers onto worse alternative roads that are usually not prepared to receive heavy traffic at comparable safety standards. In this paper we provide evidence of this perverse consequence using an international European panel in a two-way fixed effects estimation.
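A minimal sketch of the two-way fixed-effects design just described: fatalities regressed on a toll-exposure measure with country and year effects. The variable names and data below are hypothetical placeholders, not the paper's panel.

```python
# A minimal two-way fixed-effects sketch; all data are simulated and
# the toll-exposure variable is a hypothetical stand-in.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(2)
countries, years = 15, 12
idx = pd.MultiIndex.from_product(
    [range(countries), range(2000, 2000 + years)],
    names=["country", "year"])
df = pd.DataFrame(index=idx)
df["toll_share"] = rng.uniform(0, 0.6, size=len(df))  # share of tolled motorways
df["log_fatalities"] = (5.0 + 0.3 * df["toll_share"]
                        + rng.normal(scale=0.1, size=len(df)))

# Country effects absorb national characteristics; year effects absorb
# common shocks (e.g., EU-wide safety regulation).
mod = PanelOLS.from_formula(
    "log_fatalities ~ toll_share + EntityEffects + TimeEffects", data=df)
print(mod.fit(cov_type="clustered", cluster_entity=True).params)
```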
Abstract:
First application of compositional data analysis techniques to Australian election data
Abstract:
In any discipline where uncertainty and variability are present, it is important to have principles which are accepted as inviolate and which should therefore drive statistical modelling, statistical analysis of data, and any inferences from such an analysis. Despite the fact that two such principles have existed over the last two decades, and from these a sensible, meaningful methodology has been developed for the statistical analysis of compositional data, the application of inappropriate and/or meaningless methods persists in many areas of application. This paper identifies at least ten common fallacies and confusions in compositional data analysis with illustrative examples, and provides readers with necessary, and hopefully sufficient, arguments to persuade the culprits why and how they should amend their ways.
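The accepted methodology referenced here works on log-ratio-transformed compositions rather than raw percentages. A minimal sketch of the centered log-ratio (clr) transform, on a toy composition that is not from the paper:

```python
# A minimal sketch of the centered log-ratio (clr) transform, the core
# of the log-ratio methodology for compositional data; toy composition.
import numpy as np

def clr(x):
    """Centered log-ratio transform of a composition (parts of a whole)."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x)))      # geometric mean of the parts
    return np.log(x / g)

comp = np.array([0.1, 0.3, 0.6])        # a 3-part composition summing to 1
z = clr(comp)
print(z, z.sum())                       # clr coordinates sum to zero
# Standard multivariate methods (PCA, clustering, regression) can then be
# applied to z, avoiding the spurious-correlation fallacies that arise
# when raw percentages are analyzed directly.
```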
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model where ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate; examples include identification of faults from seismic data and identification of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry, and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
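To make the ridge idea concrete: the sketch below projects a point onto a 1-D ridge of a Gaussian kernel density using subspace-constrained mean shift (SCMS), a simpler, well-known alternative to the trust-region Newton method the thesis develops; it illustrates the same ridge concept, not the thesis's algorithm, and the data are toy points along a noisy arc.

```python
# SCMS sketch: move a point toward a density ridge by projecting the
# mean-shift step onto the Hessian eigen-subspace orthogonal to the ridge.
import numpy as np

def scms_step(x, data, h):
    """One SCMS step toward a 1-D ridge of a Gaussian KDE with bandwidth h."""
    d = data - x                                   # (n, D) offsets to samples
    w = np.exp(-np.sum(d * d, axis=1) / (2 * h * h))
    s0, s1 = w.sum(), w @ d
    s2 = d.T @ (d * w[:, None])
    grad = s1 / (h * h * s0)                       # gradient of log-density
    hess = (s2 - h * h * s0 * np.eye(x.size)) / (h**4 * s0) \
           - np.outer(grad, grad)                  # Hessian of log-density
    vals, vecs = np.linalg.eigh(hess)              # eigenvalues ascending
    V = vecs[:, :-1]                               # normal space of the ridge
    m = s1 / s0                                    # mean-shift vector
    return x + V @ (V.T @ m)                       # move only across the ridge

rng = np.random.default_rng(3)
t = rng.uniform(0, np.pi, 300)
data = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * rng.normal(size=(300, 2))

x = np.array([0.5, 0.5])
for _ in range(100):
    x = scms_step(x, data, h=0.2)
print(x, np.linalg.norm(x))   # converges toward the underlying unit arc
```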
Abstract:
GLUT4 protein expression in white adipose tissue (WAT) and skeletal muscle (SM) was investigated in 2-month-old, 12-month-old spontaneously obese, or 12-month-old calorie-restricted lean Wistar rats, considering different parameters of analysis, such as tissue and body weight and total protein yield of the tissue. In WAT, a ~70% decrease was observed in plasma membrane and microsomal GLUT4 protein, expressed per µg of protein or per g of tissue, in both 12-month-old obese and 12-month-old lean rats compared to 2-month-old rats. However, when plasma membrane and microsomal GLUT4 tissue contents were expressed per g of body weight, they were the same. In SM, GLUT4 protein content, expressed per µg of protein, was similar in 2-month-old and 12-month-old obese rats, whereas it was reduced in 12-month-old obese rats when expressed per g of tissue or per g of body weight, which may play an important role in insulin resistance. Weight loss did not change the SM GLUT4 content. These results show that altered insulin sensitivity is accompanied by modulation of GLUT4 protein expression. However, the true role of WAT and SM GLUT4 contents in whole-body or tissue insulin sensitivity should be determined considering not only GLUT4 protein expression but also the strong morphostructural changes in these tissues, which require different types of data analysis.
Abstract:
This study sought to evaluate the acceptance of "dulce de leche" made with coffee and whey. The results were analyzed through response surface methodology, ANOVA, means tests, histograms, and a preference map correlating the global impression data with the results of the physical, physicochemical, and sensory analyses. Response surface methodology by itself was not enough to find the best formulation. From the ANOVA, means tests, and preference map, it was observed that consumers' favorite "dulce de leche" samples were those of formulations 1 (10% whey and 1% coffee) and 2 (30% whey and 1% coffee), followed by formulation 9 (20% whey and 1.25% coffee). The acceptance of samples 1 and 2 was driven by their higher acceptability in terms of flavor and by their higher pH, L*, and b* values. Samples 1 and 2 also presented higher purchase-approval scores and higher percentages of responses in the "ideal" category for sweetness and coffee flavor. Consumers preferred the samples with low concentrations of coffee, independent of the concentration of whey, thus enabling the use of whey and coffee in the manufacture of "dulce de leche" and the creation of a new product.
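A minimal sketch of the ANOVA-plus-means-test comparison used in such sensory studies, on hypothetical 9-point hedonic scores for three formulations (the scores below are simulated, not the study's data):

```python
# Hypothetical hedonic scores for three formulations; simulated data
# standing in for consumer panel responses.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)
f1 = rng.normal(7.5, 1.0, 60).clip(1, 9)   # e.g. 10% whey, 1% coffee
f2 = rng.normal(7.3, 1.0, 60).clip(1, 9)   # e.g. 30% whey, 1% coffee
f9 = rng.normal(6.4, 1.0, 60).clip(1, 9)   # e.g. 20% whey, 1.25% coffee

print(f_oneway(f1, f2, f9))                # global test of equal means
scores = np.concatenate([f1, f2, f9])
groups = ["F1"] * 60 + ["F2"] * 60 + ["F9"] * 60
print(pairwise_tukeyhsd(scores, groups))   # which formulations differ
```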
Abstract:
The recent rapid development of biotechnological approaches has enabled the production of large whole-genome-level biological data sets. In order to handle these data sets, reliable and efficient automated tools and methods for data processing and result interpretation are required. Bioinformatics, as the field of studying and processing biological data, tries to answer this need by combining methods and approaches across computer science, statistics, mathematics, and engineering to study and process biological data. The need is also increasing for tools that can be used by biological researchers themselves, who may not have a strong statistical or computational background, which requires creating tools and pipelines with intuitive user interfaces, robust analysis workflows, and a strong emphasis on result reporting and visualization. Within this thesis, several data analysis tools and methods have been developed for analyzing high-throughput biological data sets. These approaches, covering several aspects of high-throughput data analysis, are specifically aimed at gene expression and genotyping data, although in principle they are suitable for analyzing other data types as well. Coherent handling of the data across the various data analysis steps is highly important in order to ensure robust and reliable results. Thus, robust data analysis workflows are also described, putting the developed tools and methods into a wider context. The choice of the correct analysis method may also depend on the properties of the specific data set, and therefore guidelines for choosing an optimal method are given. The data analysis tools, methods, and workflows developed within this thesis have been applied to several research studies, of which two representative examples are included in the thesis. The first study focuses on spermatogenesis in murine testis and the second one examines cell lineage specification in mouse embryonic stem cells.
Abstract:
This research concerns the Urban Living Idea Contest conducted by Creator Space™ of BASF SE during its 150th anniversary in 2015. The main objectives of the thesis are to provide a comprehensive analysis of the Urban Living Idea Contest (ULIC) and to propose a number of improvement suggestions for future years. More than 4,000 data points were collected and analyzed to investigate the functionality of different elements of the contest, and a set of improvement suggestions was proposed to BASF SE. The novelty of this thesis lies in the data collection and the original analysis of the contest, which identified its critical elements as well as the areas that could be improved. The author of this research was a member of the organizing team and was involved in the decision-making process from the beginning until the end of the ULIC.
Abstract:
In this paper, we discuss Conceptual Knowledge Discovery in Databases (CKDD) in its connection with Data Analysis. Our approach is based on Formal Concept Analysis, a mathematical theory which has been developed and proven useful during the last 20 years. Formal Concept Analysis has led to a theory of conceptual information systems which has been applied by using the management system TOSCANA in a wide range of domains. In this paper, we use such an application in database marketing to demonstrate how methods and procedures of CKDD can be applied in Data Analysis. In particular, we show the interplay and integration of data mining and data analysis techniques based on Formal Concept Analysis. The main concern of this paper is to explain how the transition from data to knowledge can be supported by a TOSCANA system. To clarify the transition steps we discuss their correspondence to the five levels of knowledge representation established by R. Brachman and to the steps of empirically grounded theory building proposed by A. Strauss and J. Corbin.
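For readers unfamiliar with Formal Concept Analysis, the following minimal sketch shows the two derivation ("prime") operators at its mathematical core, on a toy object-attribute context; this illustrates the theory only, not the TOSCANA system or the paper's database-marketing application.

```python
# Toy formal context: which customers (objects) have which properties
# (attributes). Names are hypothetical illustrations.
context = {
    "anna":  {"urban", "premium"},
    "ben":   {"urban"},
    "clara": {"urban", "premium", "loyal"},
}
all_objects = set(context)
all_attrs = set().union(*context.values())

def common_attrs(objects):
    """A' : attributes shared by every object in the set."""
    return (set.intersection(*(context[o] for o in objects))
            if objects else set(all_attrs))

def matching_objects(attrs):
    """B' : objects possessing every attribute in the set."""
    return {o for o in all_objects if attrs <= context[o]}

# A formal concept is a pair (A, B) with A' = B and B' = A.
A = {"anna", "clara"}
B = common_attrs(A)                      # {'urban', 'premium'}
print(B, matching_objects(B) == A)       # closed pair -> a formal concept
```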
Abstract:
These notes have been prepared as support for a short course on compositional data analysis. Their aim is to transmit the basic concepts and skills for simple applications, thus setting the premises for more advanced projects.