33 results for FLUCTUATION THEOREM
Abstract:
Crohn's disease (CD) and ulcerative colitis (UC), collectively known as inflammatory bowel disease (IBD), are characterised by chronic inflammation of the gastrointestinal tract. IBD prevalence in Finland is approximately 3-4 per 1000 inhabitants, with a peak incidence in adolescence. The symptoms of IBD include diarrhoea, abdominal pain, fever, and weight loss. The precise aetiology of IBD is unknown, but an interplay of environmental risk factors and immunologic changes triggers the disease in a genetically susceptible individual. Twin and family studies have provided strong evidence for genetic factors in IBD susceptibility, and genetic factors may be more prominent in CD than in UC. The first CD susceptibility gene was identified in 2001. Three common mutations of the CARD15/NOD2 gene (R702W, G908R, and 1007fs) have been shown to associate independently with CD, but the magnitude of the association varies between populations. The present study aimed to identify mutations and genetic variations in IBD susceptibility and candidate genes; correlation with phenotype was also assessed. One of the main objectives was to evaluate the role of CARD15 in a Finnish CD cohort. A total of 271 CD patients were studied for the three common mutations, and the results showed a lower mutation frequency than in other Caucasian populations: only 16% of the patients carried one of the three mutations. Ileal location as well as stricturing and penetrating behaviour of the disease were associated with occurrence of the mutations. The whole protein-coding region of CARD15 was screened for possible Finnish founder mutations. In addition to several sequence variants, five novel mutations (R38M, W355X, P727L, W907R, and R1019X) were identified in five patients. The functional consequences of these novel variants were studied in vitro, and these studies demonstrated a profound impairment of the MDP response. Investigation of CARD15 mutation frequency in healthy people across three continents showed large geographic fluctuation. No simple correlation between mutation frequency and disease incidence was seen in the populations studied. The occurrence of double-mutant carriers among healthy controls suggested that the penetrance of the risk alleles is low. The other main objectives were to identify further genetic variations involved in susceptibility to IBD. We investigated the most plausible IBD candidate genes, including TRAF6, SLC22A4, SLC22A5, DLG5, TLR4, TNFRSF1A, ABCB1/MDR1, IL23R, and ATG16L1. The marker for a chromosome 5 risk haplotype and the rare HLA-DRB1*0103 allele were also studied. The study cohort consisted of 699 IBD patients (240 CD and 459 UC), of whom 23% had a first-degree relative with IBD. Of the several candidate genes studied, IL23R was associated with CD susceptibility, and TNFRSF1A as well as the HLA-DRB1*0103 allele with UC susceptibility. IL23R variants also showed association with the stricturing phenotype and longer disease duration in CD patients. In addition, TNFRSF1A variants were more common in familial UC and ileocolonic CD. In conclusion, the common CARD15 mutations were shown to account for 16% of CD cases in Finland. The novel CARD15 variants identified in the present study are most likely disease-causing mutations, as judged by the results of the in vitro studies. The present study also confirms the IL23R association with CD susceptibility and, in addition, the association of TNFRSF1A and the HLA-DRB1*0103 allele with UC of specific clinical phenotypes.
Abstract:
The metabolic syndrome and type 1 diabetes are associated with brain alterations such as cognitive decline, brain infarctions, atrophy, and white matter lesions. Despite the importance of these alterations, their pathomechanism is still poorly understood. This study was conducted to investigate brain glucose and metabolites in healthy individuals with an increased cardiovascular risk and in patients with type 1 diabetes, in order to learn more about the nature of the known brain alterations. We studied 43 men aged 20 to 45 years. Study I compared two groups of non-diabetic men, one with an accumulation of cardiovascular risk factors and one without. Studies II to IV compared men with type 1 diabetes (duration of diabetes 6.7 ± 5.2 years, no microvascular complications) with non-diabetic men. Brain glucose, N-acetylaspartate (NAA), total creatine (tCr), choline, and myo-inositol (mI) were quantified with proton magnetic resonance spectroscopy in three cerebral regions (frontal cortex, frontal white matter, and thalamus) and in cerebellar white matter. Data were collected for all participants during fasting glycemia and, in a subgroup (Studies III and IV), also during a hyperglycemic clamp that increased the plasma glucose concentration by 12 mmol/l. In non-diabetic men, the brain glucose concentration correlated linearly with the plasma glucose concentration. The cardiovascular risk group (Study I) had a 13% higher plasma glucose concentration than the control group, but no difference in thalamic glucose content; the risk group thus had lower thalamic glucose content than expected. They also had 17% higher tCr (a marker of oxidative metabolism). In the control group, tCr correlated with thalamic glucose content, but in the risk group, tCr correlated instead with fasting plasma glucose and the 2-h plasma glucose concentration in the oral glucose tolerance test. Risk factors of the metabolic syndrome, most importantly insulin resistance, may thus influence brain metabolism. During fasting glycemia (Study II), regional variation in cerebral glucose levels appeared in the non-diabetic subjects but not in those with diabetes. In the diabetic patients, excess glucose had accumulated predominantly in the white matter, where the metabolite alterations were also the most pronounced. Compared to the control values, white matter NAA (a marker of neuronal metabolism) was 6% lower and mI (a glial cell marker) 20% higher. Hyperglycemia is therefore a potent risk factor for diabetic brain disease, and the metabolic brain alterations may appear even before any peripheral microvascular complications are detectable. During acute hyperglycemia (Study III), the increase in cerebral glucose content in the patients with type 1 diabetes was, depending on the brain region, between 1.1 and 2.0 mmol/l. An everyday hyperglycemic episode in a diabetic patient may therefore as much as double the brain glucose concentration. While chronic hyperglycemia had led to accumulation of glucose in the white matter, acute hyperglycemia burdened predominantly the gray matter. Acute hyperglycemia also revealed that chronic fluctuation in blood glucose may be associated with alterations in glucose uptake or metabolism in the thalamus. The cerebellar white matter appeared very different from the cerebral white matter (Study IV): in the non-diabetic men it contained twice as much glucose as the cerebrum, and diabetes had altered neither its glucose content nor its metabolites. The cerebellum therefore seems more resistant to the effects of hyperglycemia than the cerebrum.
Abstract:
At the Tevatron, the total p-pbar cross-section has been measured by CDF at 546 GeV and 1.8 TeV, and by E710/E811 at 1.8 TeV. The two results at 1.8 TeV disagree by 2.6 standard deviations, introducing large uncertainties into extrapolations to higher energies. At the LHC, the TOTEM collaboration is preparing to resolve the ambiguity by measuring the total p-p cross-section with a precision of about 1%. As at the Tevatron experiments, the luminosity-independent method based on the optical theorem will be used. The Tevatron experiments have also performed a wide range of studies of soft and hard diffractive events, partly with antiproton tagging by Roman Pots, partly with rapidity gap tagging. At the LHC, the combined CMS/TOTEM experiments will carry out their diffractive programme with an unprecedented rapidity coverage and Roman Pot spectrometers on both sides of the interaction point. The physics menu comprises detailed studies of soft diffractive differential cross-sections, diffractive structure functions, rapidity gap survival and exclusive central production by Double Pomeron Exchange.
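For reference, the luminosity-independent method mentioned above combines the optical theorem with the simultaneously measured elastic and inelastic rates; in its standard textbook form (not spelled out in the abstract itself) the total cross-section and the luminosity follow from

    sigma_tot = 16 pi / (1 + rho^2) * (dN_el/dt)|_{t=0} / (N_el + N_inel),
    L = (1 + rho^2) / (16 pi) * (N_el + N_inel)^2 / (dN_el/dt)|_{t=0},

where rho is the ratio of the real to the imaginary part of the forward elastic amplitude, N_el and N_inel are the elastic and inelastic event rates, and (dN_el/dt)|_{t=0} is the elastic rate extrapolated to zero momentum transfer.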
Abstract:
Background
How new forms arise in nature has engaged evolutionary biologists since Darwin's seminal treatise on the origin of species. Transposable elements (TEs) may be among the most important internal sources of intraspecific variability. Thus, we aimed to explore the temporal dynamics of several TEs in individual genotypes from a small, marginal population of Aegilops speltoides, a diploid cross-pollinated grass species and a wild relative of the various wheat species known for their large genome sizes, contributed by an extraordinary number of TEs, particularly long terminal repeat (LTR) retrotransposons. The population is characterized by high heteromorphy and possesses a wide spectrum of chromosomal abnormalities, including supernumerary chromosomes, heterozygosity for translocations, and variability in the chromosomal position or number of 45S and 5S ribosomal DNA (rDNA) sites. We propose that variability at the morphological and chromosomal levels may be linked to variability at the molecular level, particularly in TE proliferation.
Results
Significant temporal fluctuation in the copy number of TEs was detected when processes that take place in small, marginal populations were simulated. It is known that under critical external conditions, outcrossing plants very often transition to self-pollination. Thus, three morphologically different genotypes with chromosomal aberrations were taken from a wild population of Ae. speltoides, and the dynamics of the TE complex were traced through three rounds of selfing. It was discovered that: (i) various families of TEs vary tremendously in copy number between individuals from the same population and the selfed progenies; (ii) the fluctuations in copy number are TE-family specific; (iii) there is a great difference in TE copy number expansion or contraction between gametophytes and sporophytes; and (iv) a small percentage of TEs that increase in copy number can actually insert at novel locations and could serve as a bona fide mutagen.
Conclusions
We hypothesize that TE dynamics could promote or intensify morphological and karyotypical changes, some of which may be potentially important for the process of microevolution, and allow species with plastic genomes to survive as new forms or even species in times of rapid climatic change.
Abstract:
Prolyl oligopeptidase (POP, prolyl endopeptidase, EC 3.4.21.26) is a serine-type peptidase (family S9 of clan SC) hydrolyzing peptides shorter than 30 amino acids. POP has been found in various mammalian and bacterial sources and is widely distributed throughout different organisms. In human and rat, POP enzyme activity has been detected in most tissues, with the highest activity found mostly in the brain. POP has gained scientific interest for its involvement in the hydrolysis of many bioactive peptides connected with learning and memory functions, and also with neurodegenerative disorders. In drug- or lesion-induced amnesia models and in aged rodents, POP inhibitors have been able to revert memory loss. POP may have a function in IP3 signaling and may be a target of mood-stabilizing substances. POP may also have a role in protein trafficking, sorting, and secretion. The role of POP during ontogeny has not yet been resolved. POP enzyme activity and expression fluctuate during development, and especially high enzyme activities have been measured in the brain during early development. Reduced neuronal proliferation and differentiation in the presence of a POP inhibitor have been reported. Nuclear POP has been observed in proliferating peripheral tissues and in cell cultures at an early stage of development. In addition, POP-coding mRNA is abundantly expressed during brain ontogeny, and the highest levels of expression are associated with proliferative germinal matrices. This observation indicates a special role for POP in the regulation of neurogenesis during development. In the experimental part, the study investigated the expression and distribution of POP protein and the enzymatic activity of POP in the developing rat brain (from embryonic day 14 to postnatal day 7) using immunohistochemistry, POP enzyme activity measurements, and western blot analysis. A further aim was to find in vivo confirmation of the nuclear colocalization of POP during early brain ontogeny. For immunohistochemistry, cryosections from the brains of the fetuses/rats were made and stained using a specific antibody for POP and fluorescent markers for POP and nuclei. The enzyme activity assay was based on the fluorescence of 7-amino-4-methylcoumarin (AMC) generated from the fluorogenic substrate succinyl-glycyl-prolyl-7-amino-4-methylcoumarin (Suc-Gly-Pro-AMC) by POP. The amount of POP protein and the specificity of the POP antibody in rat embryos were confirmed by western blot analysis. We observed that the enzymatic activity of POP is highest at embryonic day 18, while the protein amount reaches its peak at birth. POP was widely present throughout the developmental stages from embryonic day 14 to parturition, although the POP immunoreactivity varied considerably. At embryonic days 14 and 18, notable amounts of POP were distributed in the proliferative germinal zones. Furthermore, POP was located in the nucleus early in development but was transferred to the cytosol before birth. At P0 and P7 the POP immunoreactivity was also widely observed, but the amount of POP was notably reduced at P7. POP was present in the cytosol and in the intercellular space, but no nuclear POP was observed. These findings support the idea of POP being involved in specific brain functions, such as neuronal proliferation and differentiation. Our in vivo results confirm the previous cell culture results supporting the role of POP in neurogenesis. Moreover, the discrepancy between POP protein amounts and enzymatic activity late in development suggests strong regulation of POP activity and a possible non-hydrolytic role at that stage.
Abstract:
We report the observation of the bottom, doubly-strange baryon Omega_b^- through the decay chain Omega_b^- -> J/psi Omega^-, where J/psi -> mu^+ mu^-, Omega^- -> Lambda K^-, and Lambda -> p pi^-, using 4.2 fb^{-1} of data from p-pbar collisions at sqrt{s} = 1.96 TeV, and recorded with the Collider Detector at Fermilab. A signal is observed whose probability of arising from a background fluctuation is 4.0 * 10^{-8}, or 5.5 Gaussian standard deviations. The Omega_b^- mass is measured to be 6054.4 +/- 6.8 (stat.) +/- 0.9 (syst.) MeV/c^2. The lifetime of the Omega_b^- baryon is measured to be 1.13^{+0.53}_{-0.40} (stat.) +/- 0.02 (syst.) ps. In addition, for the Xi_b^- baryon we measure a mass of 5790.9 +/- 2.6 (stat.) +/- 0.8 (syst.) MeV/c^2 and a lifetime of 1.56^{+0.27}_{-0.25} (stat.) +/- 0.02 (syst.) ps. Under the assumption that the Xi_b^- and Omega_b^- are produced with similar kinematic distributions to the Lambda_b^0 baryon, we find sigma(Xi_b^-) B(Xi_b^- -> J/psi Xi^-) / sigma(Lambda_b^0) B(Lambda_b^0 -> J/psi Lambda) = 0.167^{+0.037}_{-0.025} (stat.) +/- 0.012 (syst.) and sigma(Omega_b^-) B(Omega_b^- -> J/psi Omega^-) / sigma(Lambda_b^0) B(Lambda_b^0 -> J/psi Lambda) = 0.045^{+0.017}_{-0.012} (stat.) +/- 0.004 (syst.) for baryons produced with transverse momentum in the range 6-20 GeV/c.
Abstract:
Nucleation is the first step in a phase transition, where small nuclei of the new phase start appearing in the metastable old phase, such as the appearance of small liquid clusters in a supersaturated vapor. Nucleation is important in various industrial and natural processes, including atmospheric new particle formation: between 20% and 80% of the atmospheric particle concentration is due to nucleation. These atmospheric aerosol particles have a significant effect on both climate and human health. Different simulation methods are often applied when studying phenomena that are difficult or even impossible to measure, or when trying to distinguish between the merits of various theoretical approaches. Such simulation methods include, among others, molecular dynamics and Monte Carlo simulations. In this work, molecular dynamics simulations of the homogeneous nucleation of Lennard-Jones argon have been performed; homogeneous means that the nucleation does not occur on a pre-existing surface. The simulations include runs where the starting configuration is a supersaturated vapor and the nucleation event is observed during the simulation (direct simulations), as well as simulations of a cluster in equilibrium with a surrounding vapor (indirect simulations). The latter type is a necessity when the conditions prevent the occurrence of a nucleation event within a reasonable timeframe in the direct simulations. The effect of various temperature control schemes on the nucleation rate (the rate of appearance of clusters that are equally likely to grow to macroscopic sizes or to evaporate) was studied and found to be relatively small. The method used to extract the nucleation rate was also found to be of minor importance. The cluster sizes from direct and indirect simulations were used in conjunction with the nucleation theorem to calculate formation free energies for the clusters in the indirect simulations. The results agreed with density functional theory but were higher than values from Monte Carlo simulations. The formation energies were also used to calculate the surface tension of the clusters. The sizes of the clusters in the direct and indirect simulations were compared, showing that the direct-simulation clusters have more atoms between the liquid-like core of the cluster and the surrounding vapor. Finally, the performance of various nucleation theories in predicting simulated nucleation rates was investigated, and the results, among other things, highlighted once again the inadequacy of the classical nucleation theory that is commonly employed in nucleation studies.
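For orientation, the classical nucleation theory referred to above predicts a steady-state nucleation rate of the standard form (the exact working equations used in the thesis may differ):

    J = J_0 exp(-DeltaG* / k_B T), with DeltaG* = 16 pi sigma^3 v_l^2 / (3 (k_B T ln S)^2),

where sigma is the planar surface tension, v_l the molecular volume of the liquid, and S the supersaturation. The nucleation theorem used to extract formation free energies relates the rate to the critical cluster size n*, roughly (d ln J / d ln S)_T ≈ n* + 1.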
Abstract:
A growing body of empirical research examines the structure and effectiveness of corporate governance systems around the world. An important insight from this literature is that corporate governance mechanisms address the excessive use of managerial discretion to extract private benefits by expropriating shareholder value. One possible way of expropriation is to reduce the quality of disclosed earnings by manipulating the financial statements. This lower quality of earnings should then be reflected in the firm's stock price, according to the value relevance theorem. Hence, instead of testing the direct effect of corporate governance on the firm's market value, it is important to understand the causes of the lower quality of accounting earnings. This thesis contributes to the literature by increasing knowledge about the extent of earnings management (measured as the extent of discretionary accruals in total disclosed earnings) and its determinants across transitional European countries. The thesis comprises three essays of empirical analysis, of which the first two use data on Russian listed firms, whereas the third uses data from 10 European economies. More specifically, the first essay adds to existing research connecting earnings management to corporate governance. It tests the impact of the Russian corporate governance reforms of 2002 on the quality of disclosed earnings in all publicly listed firms. This essay provides empirical evidence that the desired impact of the reforms is not fully realised in Russia without proper enforcement. Instead, firm-level factors such as long-term capital investments and compliance with International Financial Reporting Standards (IFRS) determine the quality of the earnings. The results presented in the essay support the notion proposed by Leuz et al. (2003) that reforms aimed at bringing transparency do not produce the desired results in economies where investor protection is low and legal enforcement is weak. The second essay focuses on the relationship between internal control mechanisms, such as the types and levels of ownership, and the quality of disclosed earnings in Russia. The empirical analysis shows that controlling shareholders in Russia use their power to manipulate reported performance in order to obtain private benefits of control. Comparatively, firms owned by the State have significantly better quality of disclosed earnings than firms with other controllers, such as oligarchs and foreign corporations. Interestingly, the market performance of firms controlled by either the State or oligarchs is better than that of widely held firms. The third essay provides evidence that both ownership structures and economic characteristics are important factors in determining the quality of disclosed earnings in three groups of European countries. The evidence suggests that ownership structure is a more important determinant in developed and transparent countries, while economic characteristics are more important in developing and transitional countries.
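As background, discretionary accruals of the kind used above as an earnings-management proxy are commonly estimated as the residual of a modified Jones (1991) regression; a typical specification (an assumption here, not necessarily the exact model used in the essays) is

    TA_it / A_{i,t-1} = a_1 (1 / A_{i,t-1}) + a_2 (DeltaREV_it - DeltaREC_it) / A_{i,t-1} + a_3 PPE_it / A_{i,t-1} + e_it,

where TA is total accruals, A lagged total assets, DeltaREV and DeltaREC the changes in revenues and receivables, and PPE gross property, plant and equipment; the estimated residual e_it is taken as the discretionary (managed) component of earnings.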
Abstract:
After Gödel's incompleteness theorems and the collapse of Hilbert's programme, Gerhard Gentzen continued the quest for consistency proofs of Peano arithmetic. He considered a finitistic or constructive proof still possible and necessary for the foundations of mathematics. For a proof to be meaningful, the principles relied on should be considered more reliable than the doubtful elements of the theory concerned. He worked out a total of four proofs between 1934 and 1939. This thesis examines Gentzen's consistency proofs for arithmetic from different angles. The consistency of Heyting arithmetic is shown both in a sequent calculus notation and in natural deduction. The former proof includes a cut elimination theorem for the calculus and a syntactical study of the purely arithmetical part of the system. The latter consistency proof, in standard natural deduction, has been an open problem since the publication of Gentzen's proofs. The solution to this problem for an intuitionistic calculus is based on a normalization proof by Howard. The proof is performed in the manner of Gentzen, by giving a reduction procedure for derivations of falsity. In contrast to Gentzen's proof, the procedure contains a vector assignment. The reduction reduces the first component of the vector, and this component can be interpreted as an ordinal less than epsilon_0, thus ordering the derivations by complexity and proving termination of the process.
Abstract:
The incidence of type 2 diabetes has increased rapidly worldwide. Obesity is one of the most important modifiable risk factors of type 2 diabetes: weight gain increases and weight loss decreases the risk. However, the effects of weight fluctuation are unclear. Reactive oxygen species are presumably part of the complicated mechanism behind the development of insulin resistance and beta-cell destruction in the pancreas. The association of antioxidants with the risk of incident type 2 diabetes has been studied in longitudinal prospective human studies, but so far there is no clear conclusion about a protective effect of dietary or supplementary antioxidants on diabetes risk. The present study examined 1) weight change and fluctuation as risk factors for incident type 2 diabetes; 2) the association of baseline serum alpha-tocopherol or beta-carotene concentration and dietary intake of antioxidants with the risk of type 2 diabetes; and 3) the effect of supplementation with alpha-tocopherol or beta-carotene on the risk of incident type 2 diabetes, and on macrovascular complications and mortality among type 2 diabetics. This investigation was part of the Alpha-Tocopherol, Beta-Carotene Cancer Prevention (ATBC) Study, a randomized, double-blind, placebo-controlled prevention trial undertaken to examine the effect of alpha-tocopherol and beta-carotene supplementation on the development of lung cancer, other cancers, and cardiovascular diseases in male smokers aged 50-69 years at baseline. Participants were assigned to receive either 50 mg alpha-tocopherol, 20 mg beta-carotene, both, or placebo daily in a 2 x 2 factorial design during 1985-1993. Cases of incident diabetes were identified through a nationwide register of drug reimbursements of the Social Insurance Institution. At baseline, 1700 men had a history of diabetes. Among those with no diabetes at baseline (n = 27 379), 305 new cases of type 2 diabetes were recognized during the intervention period and 705 during the whole follow-up of up to 12.5 years. Weight gain and weight fluctuation measured over a three-year period were independent risk factors for subsequent incident type 2 diabetes. The relative risk (RR) was 1.77 (95% confidence interval [CI] 1.44-2.17) for a weight gain of at least 4 kg compared to a weight change of less than 4 kg. The RR in the highest weight fluctuation quintile compared to the lowest was 1.64 (95% CI 1.24-2.17). Dietary tocopherols and tocotrienols, as well as dietary carotenoids, flavonols, flavones, and vitamin C, were not associated with the risk of type 2 diabetes. Baseline serum alpha-tocopherol and beta-carotene concentrations were not associated with the risk of incident diabetes. Neither alpha-tocopherol nor beta-carotene supplementation affected the risk of diabetes: the relative risks for participants who received alpha-tocopherol compared with non-recipients and for participants who received beta-carotene compared with non-recipients were 0.92 (95% CI 0.79-1.07) and 0.99 (95% CI 0.85-1.15), respectively. Furthermore, alpha-tocopherol or beta-carotene supplementation did not affect the risk of macrovascular complications or mortality of diabetic subjects during the 19-year follow-up. In conclusion, in this study of older middle-aged male smokers, weight gain and weight fluctuation were independent risk factors for type 2 diabetes. Neither antioxidant intake nor serum alpha-tocopherol or beta-carotene concentrations were associated with the risk of type 2 diabetes. Supplementation with alpha-tocopherol or beta-carotene did not prevent type 2 diabetes, nor did it prevent macrovascular complications or mortality among diabetic subjects.
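The relative risks quoted above presumably come from regression models fitted to the full cohort data; purely as a minimal illustration of how a crude relative risk and a Wald-type 95% confidence interval are computed from group counts (a generic textbook calculation, not the study's actual method), a sketch in Python:

    import math

    def relative_risk_ci(events_exposed, n_exposed, events_unexposed, n_unexposed, z=1.96):
        """Crude relative risk with a Wald 95% CI computed on the log scale."""
        rr = (events_exposed / n_exposed) / (events_unexposed / n_unexposed)
        se_log_rr = math.sqrt(1 / events_exposed - 1 / n_exposed
                              + 1 / events_unexposed - 1 / n_unexposed)
        lower = math.exp(math.log(rr) - z * se_log_rr)
        upper = math.exp(math.log(rr) + z * se_log_rr)
        return rr, lower, upper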
Abstract:
In the thesis I study various quantum coherence phenomena and lay some of the foundations for a systematic coherence theory. So far, the approach to quantum coherence in science has been purely phenomenological. In my thesis I try to answer the question of what quantum coherence is and how it should be approached within the framework of physics, the metatheory of physics, and the terminology related to them. It is worth noticing that quantum coherence is a conserved quantity that can be exactly defined, and I propose a way to define quantum coherence mathematically from the density matrix of the system. Degenerate quantum gases, i.e., Bose condensates and ultracold Fermi systems, form a good laboratory for studying coherence, since their entropy is small and their coherence is large, and thus they display strong coherence phenomena. Concerning coherence phenomena in degenerate quantum gases, I concentrate in my thesis mainly on collective association from atoms to molecules, Rabi oscillations, and decoherence. It appears that collective association and oscillations do not depend on the spin statistics of the particles. Moreover, I study the logical features of decoherence in closed systems via a simple spin model. I argue that decoherence is a valid concept also in systems that can experience recoherence, i.e., Poincaré recurrences. Metatheoretically this is a remarkable result, since it justifies quantum cosmology: studying the whole universe (i.e., physical reality) purely quantum physically is meaningful and valid science, in which decoherence explains why the quantum physical universe appears very classical to cosmologists and other scientists. The study of the logical structure of closed systems also reveals that sufficiently complex closed (physical) systems obey a principle similar to Gödel's incompleteness theorem in logic. According to this principle, it is impossible to describe a closed system completely from within the system, and the inside and outside descriptions of the system can be remarkably different. Understanding this feature may make it possible to comprehend coarse-graining better and to define uniquely the mutual entanglement of quantum systems.
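The density-matrix-based definition of coherence proposed in the thesis is not spelled out in the abstract; as a minimal sketch of the general idea, a widely used measure (which may well differ from the author's definition) is the l1-norm of coherence, the sum of the absolute values of the off-diagonal elements of the density matrix:

    import numpy as np

    def l1_coherence(rho):
        """l1-norm of coherence: sum of |rho_ij| over i != j (basis dependent)."""
        rho = np.asarray(rho, dtype=complex)
        return float(np.abs(rho).sum() - np.abs(np.diag(rho)).sum())

    # The equal superposition |+><+| of a qubit has maximal coherence 1,
    # while the maximally mixed state has coherence 0.
    plus = np.array([[0.5, 0.5], [0.5, 0.5]])
    mixed = np.array([[0.5, 0.0], [0.0, 0.5]])
    print(l1_coherence(plus), l1_coherence(mixed))  # 1.0 0.0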
Abstract:
The relationship between site characteristics and understorey vegetation composition was analysed with quantitative methods, especially from the viewpoint of site quality estimation. Theoretical models were applied to an empirical data set collected from the upland forests of southern Finland comprising 104 sites dominated by Scots pine (Pinus sylvestris L.), and 165 sites dominated by Norway spruce (Picea abies (L.) Karsten). Site index H100 was used as an independent measure of site quality. A new model for the estimation of site quality at sites with a known understorey vegetation composition was introduced. It is based on the application of Bayes' theorem to the density function of site quality within the study area combined with the species-specific presence-absence response curves. The resulting posterior probability density function may be used for calculating an estimate for the site variable. Using this method, a jackknife estimate of site index H100 was calculated separately for pine- and spruce-dominated sites. The results indicated that the cross-validation root mean squared error (RMSEcv) of the estimates improved from 2.98 m down to 2.34 m relative to the "null" model (standard deviation of the sample distribution) in pine-dominated forests. In spruce-dominated forests RMSEcv decreased from 3.94 m down to 3.16 m. In order to assess these results, four other estimation methods based on understorey vegetation composition were applied to the same data set. The results showed that none of the methods was clearly superior to the others. In pine-dominated forests, RMSEcv varied between 2.34 and 2.47 m, and the corresponding range for spruce-dominated forests was from 3.13 to 3.57 m.
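A minimal sketch of the Bayesian estimation idea described above, in Python, with hypothetical logistic presence-absence response curves and a hypothetical prior standing in for the species-specific curves and the site-quality density actually fitted in the study:

    import numpy as np

    def site_index_posterior(presence, response_curves, prior, h_grid):
        """Posterior density of site index H100 on a grid (Bayes' theorem):
        posterior(H) ~ prior(H) * prod_i P_i(H)^x_i * (1 - P_i(H))^(1 - x_i)."""
        post = prior(h_grid).astype(float)
        for observed, curve in zip(presence, response_curves):
            p = np.clip(curve(h_grid), 1e-9, 1 - 1e-9)
            post *= p if observed else (1.0 - p)
        post /= post.sum() * (h_grid[1] - h_grid[0])   # normalise to a density
        return post

    # Hypothetical example: two species and a normal prior for H100 (in metres)
    h = np.linspace(10.0, 35.0, 500)
    prior = lambda x: np.exp(-0.5 * ((x - 24.0) / 4.0) ** 2)
    curves = [lambda x: 1.0 / (1.0 + np.exp(-(x - 22.0))),   # present on fertile sites
              lambda x: 1.0 / (1.0 + np.exp(x - 28.0))]      # present on poorer sites
    post = site_index_posterior([1, 0], curves, prior, h)
    estimate = (h * post).sum() * (h[1] - h[0])              # posterior mean of H100
    print(round(estimate, 2))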
Abstract:
Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census had developed a sampling design for the Current Population Survey (CPS) in the 1940s. Another significant factor was that digital computers became available to statisticians. In the beginning of the 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem, published in a memoir in 1774 which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which was expressed by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. At the International Statistical Institute meeting in 1894, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples. Its idea, still prevailing, was that the sample should be a miniature of the population. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations; it was based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics. In addition, he introduced a new statistical inference model which is still the prevailing paradigm. The essential ideas are to draw samples repeatedly from the same population and to assume that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave statisticians at the U.S. Census Bureau the central idea for developing the complex survey design of the CPS. An important criterion was to have a method for which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, in addition to sufficient accuracy in estimation.
Abstract:
The purpose of the thesis was to apply the theoretical and empirical foundations of repeated games to Finnish research data. The operating dynamics of a cartel are modelled as a repeated game, a subfield of game theory. In a repeated game, the same one-shot game is played over several rounds. The infinitely repeated game gives rise to the general theory of repeated games (the Folk Theorem), in which each player has an individually rational cycle of behaviour, and cooperation with the other player increases the total payoff accumulated over that cycle. Cartel research cannot ignore the legal perspective, so it is also included in the thesis in condensed form. In a silent or implicit cartel (tacit collusion) there is, unlike in an open cartel, no communication between the parties, but the outcome is the same. For this reason tacit collusion is prohibited as a concerted practice. Because its distinguishing marks are also partly the same, cartel research has obtained valuable measurement data from the behaviour of detected cartels. Research based solely on price data also has a solid theoretical and empirical foundation. In legal literature and practice, price parallelism, together with other indicators, has been regarded as circumstantial evidence of a cartel. The gasoline retail market is structurally fertile ground for a repeated game. The empirical part of the thesis examined the gasoline retail market of the Helsinki metropolitan area; the data set contained samples of price time series from 1 August 2004 to 30 June 2005 from a total of 116 filling stations in Espoo, Helsinki, and Vantaa. The research method was repeated-measures analysis of variance with post hoc comparisons. Statistically significant pricing parallelism among nearby stations was found at 47 stations, and these stations thus display one of the hallmarks of a cartel. The stations showing pricing parallelism formed clusters within their competition areas, delineated by traffic connections; in total there were 21 such uniformly pricing clusters. Of these, 9 were so-called mixed pairs, i.e., one party was an unmanned station and the other a staffed service station. In most cases the unmanned station was the one with the highest prices in its area. The most important sources of the thesis: Abrantes-Metz, Rosa M. – Froeb, Luke M. – Geweke, John F. – Taylor, Christopher T. (2005): A Variance Screen for Collusion. Working paper no. 275, Bureau of Economics, Federal Trade Commission, Washington DC 20580. Dutta, Prajit K. (1999): Strategies and Games, Theory and Practice. The MIT Press, Cambridge, Massachusetts, London, England. Harrington, Joseph E. (2004): Detecting Cartels. Working paper, Johns Hopkins University. Ivaldi, Marc – Jullien, Bruno – Rey, Patrick – Seabright, Paul – Tirole, Jean (2003): The Economics of Tacit Collusion. Publication of the European Commission's Directorate-General for Competition. Phlips, Louis (1996): On the detection of collusion and predation. European Economic Review 40 (1996), 495–510.
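As a standard illustration of the repeated-game logic behind the Folk Theorem (not a formula taken from the thesis itself): with per-period collusive profit pi_c, one-shot deviation profit pi_d > pi_c, punishment-phase profit pi_p < pi_c, and discount factor delta, tacit collusion is sustainable under grim-trigger strategies when

    pi_c / (1 - delta) >= pi_d + delta * pi_p / (1 - delta), i.e. delta >= (pi_d - pi_c) / (pi_d - pi_p),

so the more patient the players (the higher delta), the easier parallel pricing is to sustain without any explicit communication.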
Abstract:
Various Tb theorems play a key role in modern harmonic analysis. They provide characterizations of the boundedness of Calderón-Zygmund type singular integral operators. The general philosophy is that to conclude the boundedness of an operator T on some function space, one needs only to test it on some suitable function b. The main objective of this dissertation is to prove very general Tb theorems. The dissertation consists of four research articles and an introductory part. The framework is general with respect to the domain (a metric space), the measure (an upper doubling measure), and the range (a UMD Banach space). Moreover, the testing conditions used are weak. In the first article a (global) Tb theorem on non-homogeneous metric spaces is proved. One of the main technical components is the construction of a randomization procedure for the metric dyadic cubes. The difficulty lies in the fact that metric spaces do not, in general, have a translation group. Also, the measures considered are more general than in the existing literature. This generality is genuinely important for some applications, including the result of Volberg and Wick concerning the characterization of measures for which the analytic Besov-Sobolev space embeds continuously into the space of square integrable functions. In the second article a vector-valued extension of the main result of the first article is considered. This theorem is a new contribution to the vector-valued literature, since previously such general domains and measures were not allowed. The third article deals with local Tb theorems in both the homogeneous and non-homogeneous situations. A modified version of the general non-homogeneous proof technique of Nazarov, Treil and Volberg is extended to cover the case of upper doubling measures. This technique is also used in the homogeneous setting to prove local Tb theorems with the weak testing conditions introduced by Auscher, Hofmann, Muscalu, Tao and Thiele. This gives a completely new and direct proof of such results utilizing the full force of non-homogeneous analysis. The final article concerns sharp weighted theory for maximal truncations of Calderón-Zygmund operators. This includes a reduction to certain Sawyer-type testing conditions, which are in the spirit of Tb theorems and thus of the dissertation. The article extends the sharp bounds previously known only for untruncated operators, and also proves sharp weak-type results, which are new even for untruncated operators. New techniques are introduced to overcome the difficulties caused by the non-linearity of maximal truncations.
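For orientation, in the non-homogeneous setting with an upper doubling measure mu dominated by a function lambda (i.e. mu(B(x,r)) <= lambda(x,r)), the Calderón-Zygmund kernels in question are typically assumed to satisfy size and regularity estimates of the form (the precise conditions in the articles may differ):

    |K(x,y)| <= C min( 1 / lambda(x, d(x,y)), 1 / lambda(y, d(x,y)) ),
    |K(x,y) - K(x',y)| <= C (d(x,x') / d(x,y))^alpha / lambda(x, d(x,y)) whenever d(x,x') <= d(x,y)/2,

and a Tb theorem then asserts that such an operator is bounded on L^2(mu) provided it behaves well on a single suitable test function b.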