28 results for Distorted probabilities
Abstract:
We report the first measurement of the cross section for Z boson pair production at a hadron collider. This result is based on a data sample corresponding to 1.9 fb^-1 of integrated luminosity from ppbar collisions at sqrt(s) = 1.96 TeV collected with the CDF II detector at the Fermilab Tevatron. In the llll channel, we observe three ZZ candidates with an expected background of 0.096^{+0.092}_{-0.063} events. In the llnunu channel, we use a leading-order calculation of the relative ZZ and WW event probabilities to discriminate between signal and background. In the combination of the llll and llnunu channels, we observe an excess of events with a probability of 5.1 × 10^-6 to be due to the expected background. This corresponds to a significance of 4.4 standard deviations. The measured cross section is sigma(ppbar -> ZZ) = 1.4^{+0.7}_{-0.6} (stat.+syst.) pb, consistent with the standard model expectation.
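As a quick arithmetic check on the quoted numbers, a background-only probability of 5.1 × 10^-6 does correspond to roughly 4.4 one-sided Gaussian standard deviations. A minimal sketch (the bisection inversion here is an illustration, not the collaboration's statistical machinery):

```python
import math

def p_to_sigma(p):
    """Invert the one-sided Gaussian tail probability 0.5*erfc(z/sqrt(2))
    to a z-score by bisection."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        tail = 0.5 * math.erfc(mid / math.sqrt(2.0))
        if tail > p:
            lo = mid  # tail still too large: need a larger z
        else:
            hi = mid
    return 0.5 * (lo + hi)

z = p_to_sigma(5.1e-6)
print(round(z, 1))  # 4.4
```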
Abstract:
There is much literature developing theories of when and where earnings management occurs. Among the several possible motives driving earnings management behaviour in firms, this thesis focuses on motives that aim to influence the valuation of the firm. Earnings management that makes the firm look better than it really is may result in disappointment for the individual investor and potentially leads to a welfare loss in society when the resource allocation is distorted. More specific knowledge of the occurrence of earnings management should increase the awareness of the investor and thus lead to better investments and increased welfare. This thesis contributes to the literature by increasing the knowledge of where and when earnings management is likely to occur. More specifically, essay 1 adds to existing research connecting earnings management to IPOs by arguing that the tendency to manage earnings differs between IPOs. Evidence is found that entrepreneur-owned IPOs are more likely to be earnings managers than institutionally owned ones. Essay 2 considers the reliability of quarterly earnings reports that precede insider selling binges. The essay contributes by suggesting that earnings management is likely to occur before high insider selling. Essay 3 examines the widely studied phenomenon of income smoothing and investigates whether income smoothing can be explained with proxies for information asymmetry. The essay argues that smoothing is more pervasive in private and smaller firms.
Abstract:
In this paper, we re-examine the relationship between overweight and labour market success, using indicators of individual body composition along with BMI (Body Mass Index). We use a Finnish dataset in which weight, height, fat mass and waist circumference are not self-reported but obtained as part of an overall health examination. We find that waist circumference, but not weight or fat mass, has a negative effect on wages for women, whereas all measures of obesity have negative effects on women’s employment probabilities. For men, the only obesity measure significantly associated with employment probabilities is fat mass. One interpretation of our findings is that the negative effect of overweight on wages runs through the discrimination channel, whereas the negative effect of overweight on employment has more to do with ill health. All in all, measures of body composition provide a more refined picture of the effects of obesity on wages and employment.
Abstract:
Bootstrap likelihood ratio tests of cointegration rank are commonly used because they tend to have rejection probabilities closer to the nominal level than the rejection probabilities of the corresponding asymptotic tests. The effect of bootstrapping the test on its power is largely unknown. We show that a new computationally inexpensive procedure can be applied to the estimation of the power function of the bootstrap test of cointegration rank. The bootstrap test is found to have a power function close to that of the level-adjusted asymptotic test, and it estimates the level-adjusted power of the asymptotic test highly accurately. The bootstrap test may have low power to reject the null hypothesis of cointegration rank zero, or underestimate the cointegration rank. An empirical application to Euribor interest rates is provided as an illustration of the findings.
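The bootstrap principle this abstract relies on can be illustrated generically: simulate the test statistic under the null hypothesis and compare it with the observed value. This sketch is not the cointegration-rank procedure itself, which bootstraps a likelihood ratio statistic under an estimated VAR null; the toy statistic, null simulator, and sample values below are hypothetical:

```python
import random
import statistics

def bootstrap_pvalue(data, stat, simulate_null, n_boot=999, seed=1):
    """Generic bootstrap p-value: fraction of null-simulated statistics
    that are at least as extreme as the observed one."""
    rng = random.Random(seed)
    t_obs = stat(data)
    exceed = sum(
        stat(simulate_null(rng, len(data))) >= t_obs for _ in range(n_boot)
    )
    return (1 + exceed) / (1 + n_boot)

# Toy illustration (hypothetical data): test mean = 0 against mean > 0
data = [0.4, 1.1, -0.2, 0.9, 0.7, 1.3, 0.2, 0.8]
p = bootstrap_pvalue(
    data,
    stat=statistics.mean,
    simulate_null=lambda rng, n: [rng.gauss(0.0, 1.0) for _ in range(n)],
)
print(0.0 < p < 1.0)  # True
```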
Abstract:
The Thesis presents a state-space model for a basketball league and a Kalman filter algorithm for the estimation of the state of the league. In the state-space model, each basketball team is associated with a rating that represents its strength relative to the other teams. The ratings are assumed to evolve in time following a stochastic process with independent Gaussian increments. The estimation of the team ratings is based on the observed game scores, which are assumed to depend linearly on the true strengths of the teams plus independent Gaussian noise. The team ratings are estimated using a recursive Kalman filter algorithm that produces least-squares optimal estimates of the team strengths and predictions for the scores of future games. Additionally, if the Gaussianity assumption holds, the predictions given by the Kalman filter maximize the likelihood of the observed scores. The team ratings allow probabilistic inference about the ranking of the teams and their relative strengths, as well as about the teams’ winning probabilities in future games. The predictions about the winners of the games are correct 65-70% of the time. The team ratings explain 16% of the random variation observed in the game scores. Furthermore, the winning probabilities given by the model are consistent with the observed scores. The state-space model includes four independent parameters that involve the variances of the noise terms and the home court advantage observed in the scores. The Thesis presents the estimation of these parameters using the maximum likelihood method as well as other techniques. The Thesis also gives various example analyses related to the American professional basketball league, i.e., the National Basketball Association (NBA), and the regular seasons played in the years 2005 through 2010. Additionally, the 2009-2010 season is discussed in full detail, including the playoffs.
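The per-game update in such a model can be sketched with a scalar-measurement Kalman filter: the observed home margin depends linearly on the two teams' ratings plus a home advantage and Gaussian noise. The numbers below (home advantage, noise variances, prior) are made up for illustration, not the thesis's fitted parameters:

```python
def kalman_update(x, P, i, j, margin, home_adv=3.0, R=140.0, Q=0.05):
    """One Kalman step for a rating model: observed home margin =
    x[i] - x[j] + home_adv + Gaussian noise (variance R).
    Ratings random-walk between games (variance Q added to the diagonal)."""
    n = len(x)
    # time update: random-walk growth of rating uncertainty
    for k in range(n):
        P[k][k] += Q
    # scalar measurement update for the pair (i, j), H = e_i - e_j
    v = margin - (x[i] - x[j] + home_adv)            # innovation
    S = P[i][i] - 2.0 * P[i][j] + P[j][j] + R        # innovation variance
    K = [(P[k][i] - P[k][j]) / S for k in range(n)]  # Kalman gain
    for k in range(n):
        x[k] += K[k] * v
    HP = [P[i][l] - P[j][l] for l in range(n)]       # row vector H P
    for k in range(n):
        for l in range(n):
            P[k][l] -= K[k] * HP[l]                  # P <- P - K (H P)
    return x, P

# two teams with a flat prior: the home team wins by 10, so its rating rises
x = [0.0, 0.0]
P = [[25.0, 0.0], [0.0, 25.0]]
x, P = kalman_update(x, P, 0, 1, margin=10.0)
print(x[0] > 0.0 > x[1])  # True
```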
Abstract:
Vegetation maps and bioclimatic zone classifications communicate the vegetation of an area and are used to explain how the environment regulates the occurrence of plants on large scales. Many practices and methods for dividing the world’s vegetation into smaller entities have been presented. Climatic parameters, floristic characteristics, or edaphic features have been relied upon as decisive factors, and plant species have been used as indicators for vegetation types or zones. Systems depicting vegetation patterns that mainly reflect climatic variation are termed ‘bioclimatic’ vegetation maps. Based on these, it has been judged logical to deduce that plants moved between corresponding bioclimatic areas should thrive in the target location, whereas plants moved from a different zone should languish. This principle is routinely applied in forestry and horticulture, but actual tests of the validity of bioclimatic maps in this sense seem scant. In this study I tested the Finnish bioclimatic vegetation zone system (BZS). Relying on the plant collection of the Helsinki University Botanic Garden’s Kumpula collection, which according to the BZS is situated at the northern limit of the hemiboreal zone, I aimed to test how the plants’ survival depends on their provenance. My expectation was that plants from the hemiboreal or southern boreal zones should do best in Kumpula, whereas plants from more southern and more northern zones should show progressively lower survival probabilities. I estimated the probability of survival using collection database information on plant accessions of known wild origin grown in Kumpula since the mid-1990s, and logistic regression models. The total number of accessions included in the analyses was 494. Because of problems with some accessions, I chose to separately analyse a subset of the complete data, which included 379 accessions.
I also analysed different growth forms separately in order to identify differences in probability of survival due to different life strategies. In most analyses, accessions of temperate and hemiarctic origin showed lower survival probability than those originating from any of the boreal subzones, which among them exhibited rather evenly high probabilities. Exceptionally mild and wet winters during the study period may have killed off hemiarctic plants, while some winters may have been too harsh for temperate accessions. Trees behaved differently: they showed an almost steadily increasing survival probability from temperate to northern boreal origins. Various factors that could not be controlled for may have affected the results, some of which were difficult to interpret. This was the case in particular with herbs, for which the reliability of the analysis suffered because of difficulties in managing their curatorial data. In all, the results gave some support to the BZS, and especially to its hierarchical zonation. However, I question the validity of the formulation of the hypothesis I tested, since it may not be entirely justified by the BZS, which was designed for intercontinental comparison of vegetation zones, not specifically for transcontinental provenance trials. I conclude that botanic gardens should pay due attention to information management and curatorial practices to ensure the widest possible applicability of their plant collections.
Abstract:
Questions of the small size of non-industrial private forest (NIPF) holdings in Finland are considered, and factors affecting their partitioning are analyzed. This work arises out of Finnish forest policy statements in which the small average size of holdings has been seen to have a negative influence on the economics of forestry. A survey of the literature indicates that the size of holdings is an important factor determining the costs of logging and silvicultural operations, while its influence on the timber supply is slight. The empirical data are based on a sample of 314 holdings collected by interviewing forest owners in the years 1980-86. In 1990-91 the same holdings were resurveyed by means of a postal inquiry and partly by interviewing forest owners. The principal objective in compiling the data is to assist in quantifying ownership factors that influence partitioning among different kinds of NIPF holdings. Thus the mechanisms of partitioning were described, and a maximum likelihood logistic regression model was constructed using seven independent holding and ownership variables. One out of four holdings had undergone partitioning in conjunction with a change in ownership: one fifth among family-owned holdings and nearly half among jointly owned holdings. The results of the logistic regression model indicate, for instance, that the odds of partitioning are about three times greater for jointly owned holdings than for family-owned ones. The probabilities of partitioning were also estimated, and the impact of the independent dichotomous variables on the probability of partitioning ranged between 0.02 and 0.10. The low value of the Hosmer-Lemeshow test statistic indicates a good fit of the model, and the rate of correct classification was estimated to be 88 per cent with a cutoff point of 0.5. The average size of holdings undergoing ownership changes decreased from 29.9 ha to 28.7 ha over the approximate interval 1983-90.
In addition, the transition probability matrix showed that the shift towards smaller size categories mostly took place within the small size categories of less than 20 ha. The results of the study can be used in assessing the effects of the small size of holdings on forestry, and when the purpose is to influence partitioning through forest or rural policy.
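The reported odds ratio of about three can be turned into probabilities with the logistic function. In the sketch below the intercept is a hypothetical value chosen only to show the mechanics, not a coefficient from the fitted model; the joint-ownership coefficient is set so that exp(beta) equals the odds ratio of 3 quoted in the abstract:

```python
import math

def prob(logit):
    """Logistic function: probability from log-odds."""
    return 1.0 / (1.0 + math.exp(-logit))

b0 = -1.5                    # hypothetical intercept (family-owned baseline)
beta_joint = math.log(3.0)   # odds ratio of 3 for jointly owned holdings

p_family = prob(b0)
p_joint = prob(b0 + beta_joint)
odds_ratio = (p_joint / (1 - p_joint)) / (p_family / (1 - p_family))
print(round(odds_ratio, 2))  # 3.0
```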
Abstract:
Finland witnessed a surge in crime news reporting during the 1990s. At the same time, there was a significant rise in the levels of fear of crime reported by surveys. This research examines whether and how these two phenomena, crime news reporting and fear of violence, were associated with each other. The dissertation consists of five sub-studies and a summary article. The first sub-study is a review of crime reporting trends in Finland, in which I have reviewed prior research and used existing Finnish datasets on media contents and crime news media exposure. The second study examines the association between crime media consumption and fear of crime when personal and vicarious victimization experiences are held constant. Apart from analyzing the impact of crime news consumption on fear, media effects on general social trust are analyzed in the third sub-study. In the fourth sub-study I have analyzed the contents of the Finnish Poliisi-TV programme and compared the consistency of the picture of violent crime between official data sources and the programme. In the fifth and final sub-study, the victim narratives of Poliisi-TV's violence news contents are analyzed. The research provides a series of results which are unprecedented in Finland. First, it observes that, as in many other countries, the quantity of crime news supply has increased quite markedly in Finland. Second, it verifies that exposure to crime news is related to worry about violent victimization and to avoidance behaviour. Third, it documents that exposure to TV crime reality programming is associated with reduced social trust among Finnish adolescents. Fourth, the analysis of Poliisi-TV shows that it transmits a distorted view of crime when contrasted with primary data sources on crime, but that this distortion is not as large as could be expected from international research findings and epochal theories of sociology.
Fifth, the portrayals of violence victims in Poliisi-TV do not fit the traditional ideal types of victims that are usually seen to dominate crime media. The fact that the victims of violence in Poliisi-TV are ordinary people reflects a wider development in the changing significance of the crime victim in Finland. The research concludes that although the media most likely did have an effect on the rising public fears in the 1990s, the mechanism was not as straightforward as has often been claimed. It is likely that there are other factors in the fear-media equation that affect both fear levels and crime reporting, and that these factors are interactive in nature. Finally, the research calls for a re-orientation of media criminology and suggests more emphasis on the positive implications of crime in the media. Keywords: crime, media, fear of crime, violence, victimization, news
Abstract:
Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census in the 1940s had developed a sampling design for the Current Population Survey (CPS). A significant factor was also that digital computers became available to statisticians. In the early 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem, both published in a memoir in 1774 which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which was depicted by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. At the 1894 meeting of the International Statistical Institute, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples: the sample should be a miniature of the population. This idea still prevails. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations, based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics.
In addition, he introduced a new statistical inference model which is still the prevailing paradigm. Its essential ideas are to draw repeated samples from the same population and to assume that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave statisticians at the U.S. Census Bureau the central idea for developing the complex survey design of the CPS. An important criterion was to have a method in which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, besides sufficient accuracy in estimation.
Abstract:
The work presented here has focused on the role of cation-chloride cotransporters (CCCs) in (1) the regulation of the intracellular chloride concentration within postsynaptic neurons and (2) the consequent effects on the actions of the neurotransmitter gamma-aminobutyric acid (GABA) mediated by GABAA receptors (GABAARs) during development and in pathophysiological conditions such as epilepsy. In addition, (3) we found that a member of the CCC family, the K-Cl cotransporter isoform 2 (KCC2), has a structural role in the development of dendritic spines during the differentiation of pyramidal neurons. Despite the large number of publications dedicated to the regulation of intracellular Cl-, our understanding of the underlying mechanisms is not complete. Experiments on GABA actions under resting steady-state conditions have shown that the effect of GABA shifts from depolarizing to hyperpolarizing during the maturation of cortical neurons. However, it remains unclear whether conclusions from these steady-state measurements can be extrapolated to the highly dynamic situation within an intact and active neuronal network. Indeed, GABAergic signaling in active neuronal networks results in a continuous Cl- load, which must be constantly removed by efficient Cl- extrusion mechanisms. Therefore, it seems plausible that the key parameters are the efficacy and subcellular distribution of Cl- transporters rather than the polarity of steady-state GABA actions. A further related question is: what are the mechanisms of Cl- regulation and homeostasis during pathophysiological conditions such as epilepsy in adults and neonates? Here I present results obtained by means of a newly developed method for measuring the efficacy of K-Cl cotransport. In Study I, the developmental profile of KCC2 functionality was analyzed both in dissociated neuronal cultures and in acute hippocampal slices.
A novel method of photolysis of caged GABA in combination with Cl- loading of the somata was used in this study to assess the extrusion efficacy of KCC2. We demonstrated that these two preparations exhibit different temporal profiles of functional KCC2 upregulation. In Study II, we reported an observation of highly distorted dendritic spines in neurons cultured from KCC2-/- embryos. During their development in the culture dish, KCC2-lacking neurons failed to develop mature, mushroom-shaped dendritic spines and instead maintained an immature phenotype of long, branching and extremely motile protrusions. It was shown that the role of KCC2 in spine maturation is not based on its transport activity but is mediated by interactions with cytoskeletal proteins. Another important player in Cl- regulation, NKCC1, and its role in the induction and maintenance of native Cl- gradients between the axon initial segment (AIS) and the soma, was the subject of Study III. There we demonstrated that this transporter mediates the accumulation of Cl- in the axon initial segment of neocortical and hippocampal principal neurons. The results suggest that the reversal potential of the GABAA response triggered by distinct populations of interneurons shows large subcellular variation. Finally, a novel mechanism of fast post-translational upregulation of the membrane-inserted, functionally active KCC2 pool during in vivo neonatal seizures and epileptiform-like activity in vitro was identified and characterized in Study IV. The seizure-induced KCC2 upregulation may act as an intrinsic antiepileptogenic mechanism.
Abstract:
The understorey that forms beneath even-aged forest stands matters for timber harvesting, forest regeneration, visibility and landscape analyses, and the assessment of biodiversity and carbon balance. Airborne laser scanning has proven an effective remote sensing method for measuring mature tree stands. The adoption of laser scanning in operational forest planning makes it possible to produce more accurate information on the understorey, provided that understorey properties can be interpreted from the laser data. This work used precisely measured field plots and discrete-return LiDAR data from several years (flight altitude 1-2 km, 0.9-9.7 pulses m-2). The laser scanning data were acquired with Optech ALTM3100 and Leica ALS50-II sensors. The plots represent Finnish even-aged Scots pine stands at different stages of development. The research questions were: 1) What is the laser signal obtained from the understorey like at the level of an individual pulse, and which factors affect the signal? 2) What is the explanatory power of the area-based laser features used in practical applications for predicting understorey properties? A particular aim was to determine how the energy losses of a laser pulse to the upper canopy layers affect the received signal, and whether the intensity of laser echoes can be corrected for these energy losses. Differences in echo intensity between tree species were small and varied from one scan to another; the usefulness of intensity for interpreting understorey tree species is therefore very limited. Energy losses to the upper canopy layers introduced noise into the laser signal obtained from the understorey. An energy-loss correction was applied to the second and third echoes of pulses returned from the understorey. The correction reduced the within-target variation in intensity and improved the classification accuracy of targets in the understorey layer. Using second echoes, the overall accuracy of classification between ground and the most common tree species was 49.2-54.9% before the correction and 57.3-62.0% after it; the corresponding kappa values were 0.03-0.13 and 0.10-0.22. The most important factor explaining the energy losses was the intensity of the earlier echoes of the same pulse, but the intersection geometry of the pulse with the trees of the upper canopy layer also had some effect. Classification accuracy also improved for third echoes. Tree species differed in how readily they produce an echo when hit by a laser pulse: Norway spruce produced an echo with a higher probability than broadleaved trees, and this difference was especially clear for pulses that had suffered energy losses. Height-distribution features of laser echoes may therefore depend on tree species. Clear differences in intensity distributions were observed between the sensors, which complicates combining datasets acquired with different sensors. Echo probabilities also differed somewhat between sensors, causing small differences in the height distributions of the echoes. Among the area-based laser features, features were found that explained understorey stem number and mean height well when the analysis was restricted to trees taller than 1 m. The explanatory power of the features was better for stem number than for mean height. The explanatory power did not decline markedly as pulse density decreased, which is good news for practical applications. The proportion of broadleaved trees could not be explained. Based on the results, discrete-return laser scanning may be useful, for example, in assessing the need for pre-harvest clearing. More detailed classification of the understorey (e.g. tree species interpretation), by contrast, may be difficult. The smallest understorey trees cannot be detected. Further studies are needed to generalise the results to different kinds of stands.
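The kappa values quoted in this abstract measure classification agreement beyond chance. As an illustration of the computation (the confusion matrix below is invented, sized to give roughly the post-correction overall accuracy and kappa reported):

```python
def cohens_kappa(cm):
    """Cohen's kappa for a square confusion matrix (rows = true, cols = predicted)."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(k)) / n                 # observed agreement
    rows = [sum(r) for r in cm]
    cols = [sum(r[j] for r in cm) for j in range(k)]
    pe = sum(rows[i] * cols[i] for i in range(k)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# hypothetical ground-vs-pine matrix: 60 % overall accuracy
cm = [[62, 38],
      [42, 58]]
kappa = cohens_kappa(cm)
print(round(kappa, 2))  # 0.2
```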
Abstract:
Competition is an immensely important area of study in economic theory, business and strategy. It is known to be vital in meeting consumers’ growing expectations, stimulating growth in the size of the market, pushing innovation, reducing cost and consequently generating better value for end users, among other things. That said, it is important to recognize that supply chains, as we know them, have changed the way companies deal with each other, in both confrontational and conciliatory terms. With the rise of global markets and outsourcing destinations, increased technological development in transportation, communication and telecommunications has meant that geographical barriers of distance with regard to competition are a thing of the past in an increasingly flat world. Even though the dominant articulation of competition within the management and business literature rests mostly within economic competition theory, this thesis draws attention to the implicit shift in the recognition of other forms of competition in today’s business environment, especially those involving supply chain structures. There is broad agreement within the business arena that competition between companies is set to take place along their supply chains. Hence, management’s attention has been focused on how supply chains could become more aggressive, making each firm in the chain more efficient. However, there is much disagreement on the mechanism through which such competition, pitting supply chain against supply chain, will take place. The purpose of this thesis, therefore, is to develop and conceptualize the notion of supply chain vs. supply chain competition within the discipline of supply chain management.
The thesis proposes that competition between supply chains may be carried forward via competition theories that emphasize interaction and dimensionality, with supply chains encountering friction from a number of sources in their search for critical resources and services. The thesis demonstrates how supply chain vs. supply chain competition may be carried out theoretically, using generated data for illustration, and practically, using logistics centers as a link between theory and the corresponding practice of this evolving competition mode. The thesis concludes that supply chain vs. supply chain competition, whatever conceptualization is taken, is complex, novel, and can easily be distorted and abused. It therefore calls for the joint development of regulatory measures by practitioners and policymakers alike to guide this developing mode of competition.
Abstract:
Bayesian networks are compact, flexible, and interpretable representations of a joint distribution. When the network structure is unknown but observational data are at hand, one can try to learn the network structure; this is called structure discovery. This thesis contributes to two areas of structure discovery in Bayesian networks: space-time tradeoffs and learning ancestor relations. The fastest exact algorithms for structure discovery in Bayesian networks are based on dynamic programming and use excessive amounts of space. Motivated by the space usage, several schemes for trading space against time are presented. These schemes are presented in a general setting for a class of computational problems called permutation problems; structure discovery in Bayesian networks is seen as a challenging variant of the permutation problems. The main contribution in the area of space-time tradeoffs is the partial order approach, in which the standard dynamic programming algorithm is extended to run over partial orders. In particular, a certain family of partial orders called parallel bucket orders is considered. A partial order scheme that provably yields an optimal space-time tradeoff within parallel bucket orders is presented, and practical issues concerning parallel bucket orders are discussed. Learning ancestor relations, that is, directed paths between nodes, is motivated by the need for robust summaries of the network structures when there are unobserved nodes at work. Ancestor relations are nonmodular features, and hence learning them is more difficult than learning modular features. A dynamic programming algorithm is presented for computing posterior probabilities of ancestor relations exactly. Empirical tests suggest that ancestor relations can be learned from observational data almost as accurately as arcs, even in the presence of unobserved nodes.
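The subset dynamic programming underlying such exact algorithms can be sketched on a generic permutation problem: an optimal order is built by minimising, over each subset S and each last element v, the best value for S without v plus the cost of placing v after S without v. Real structure-discovery implementations plug in Bayesian family scores over candidate parent sets; the toy cost function below is invented purely to show the 2^n recurrence:

```python
from itertools import combinations

def best_order(n, cost):
    """Dynamic programming over subsets: find an ordering v1..vn minimising
    the sum of cost(v, predecessors). This is the 2^n-subset recurrence that
    exact structure-discovery algorithms build on."""
    best = {frozenset(): (0.0, [])}  # subset -> (best value, best order)
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            S = frozenset(subset)
            # try every element of S as the last one placed
            val, order = min(
                (best[S - {v}][0] + cost(v, S - {v}), best[S - {v}][1] + [v])
                for v in S
            )
            best[S] = (val, order)
    return best[frozenset(range(n))]

# toy cost: placing v after predecessor set P costs |v - len(P)|,
# so the unique zero-cost order places each v at position v
value, order = best_order(4, lambda v, P: abs(v - len(P)))
print(value, order)  # 0.0 [0, 1, 2, 3]
```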