23 results for Probabilities.

in Helda - Digital Repository of the University of Helsinki


Relevance:

20.00%

Publisher:

Abstract:

This monograph describes the emergence of independent research on logic in Finland. The emphasis is placed on three well-known students of Eino Kaila: Georg Henrik von Wright (1916-2003), Erik Stenius (1911-1990), and Oiva Ketonen (1913-2000), and their research between the early 1930s and the early 1950s. The early academic work of these scholars laid the foundations for today's strong tradition in logic in Finland and also became internationally recognized. These works, however, have not subsequently received due attention, nor have they been comprehensively presented together. Each chapter of the book focuses on the life and work of one of Kaila's aforementioned students, with a fourth chapter discussing works on logic by authors who would later become known within other disciplines. Through extensive use of correspondence and other archived material, some insight has been gained into the persons behind the academic personae. Unique and unpublished biographical material has been available for this task. The chapter on Oiva Ketonen focuses primarily on his work on what is today known as proof theory, especially on his proof-theoretical system with invertible rules that permits a terminating root-first proof search. The independence of the parallel postulate is proved as an example of the strength of root-first proof search. Ketonen was, to our knowledge, the only student of Gerhard Gentzen, the 'father' of proof theory. Correspondence and a hitherto unavailable autobiographical manuscript, together with an unpublished article on the relationship between logic and epistemology, are presented. The chapter on Erik Stenius discusses his work on paradoxes and set theory, more specifically on how a rigid theory of definitions is employed to avoid these paradoxes. A presentation by Paul Bernays on Stenius' attempt at a proof of the consistency of arithmetic is reconstructed from Bernays' lecture notes. Stenius' correspondence with Paul Bernays, Evert Beth, and Georg Kreisel is discussed. The chapter on Georg Henrik von Wright presents his early work on probability and epistemology, along with his later work on modal logic that made him internationally famous. Correspondence from various archives (especially with Kaila and Charlie Dunbar Broad) sheds further light on his academic achievements and his experiences during the challenging circumstances of the 1940s.
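Ketonen's calculus is the ancestor of the modern G3-style sequent systems in which all logical rules are invertible, so a root-first proof search never has to backtrack on the choice of principal formula and, in the propositional case, always terminates. The following minimal Python sketch illustrates the idea for classical propositional logic; the formula encoding and function names are illustrative and not drawn from the monograph.

```python
# Root-first proof search with invertible rules, G3-style. A sequent
# Gamma => Delta is provable iff every branch of the decomposition closes
# on an atom shared by both sides. Formulas: ("atom", name),
# ("and" | "or" | "imp", left, right), ("not", sub).

def provable(gamma, delta):
    # Axiom: close the branch if some atom occurs on both sides.
    if {f[1] for f in gamma if f[0] == "atom"} & \
       {f[1] for f in delta if f[0] == "atom"}:
        return True
    # Decompose any non-atomic formula; invertibility makes the choice harmless.
    for i, f in enumerate(gamma):
        if f[0] == "atom":
            continue
        rest = gamma[:i] + gamma[i + 1:]
        if f[0] == "and":
            return provable(rest + [f[1], f[2]], delta)
        if f[0] == "or":
            return (provable(rest + [f[1]], delta)
                    and provable(rest + [f[2]], delta))
        if f[0] == "imp":
            return (provable(rest, delta + [f[1]])
                    and provable(rest + [f[2]], delta))
        if f[0] == "not":
            return provable(rest, delta + [f[1]])
    for i, f in enumerate(delta):
        if f[0] == "atom":
            continue
        rest = delta[:i] + delta[i + 1:]
        if f[0] == "and":
            return (provable(gamma, rest + [f[1]])
                    and provable(gamma, rest + [f[2]]))
        if f[0] == "or":
            return provable(gamma, rest + [f[1], f[2]])
        if f[0] == "imp":
            return provable(gamma + [f[1]], rest + [f[2]])
        if f[0] == "not":
            return provable(gamma + [f[1]], rest)
    return False  # only atoms remain and none is shared: a countermodel exists

# Peirce's law ((p -> q) -> p) -> p is classically provable:
p, q = ("atom", "p"), ("atom", "q")
print(provable([], [("imp", ("imp", ("imp", p, q), p), p)]))  # True
```

Every rule application strictly reduces the total size of the sequent, which is what makes the root-first search terminate.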

Relevance:

10.00%

Publisher:

Abstract:

Constructive (intuitionist, anti-realist) semantics has thus far lacked an adequate concept of truth in infinity for factual (i.e., empirical, non-mathematical) sentences. One consequence of this problem is the difficulty of incorporating inductive reasoning in constructive semantics: it is not possible to formulate a notion of probable truth in infinity without an adequate notion of what truth in infinity is. One needs a notion of a constructive possible world based on sensory experience, and a constructive probability measure must be defined over these constructively possible empirical worlds. This study defines a particular approach to the concept of truth in infinity for Rudolf Carnap's inductive logic. The new approach is based on truth in consecutive finite domains of individuals, and this concept is given a constructive interpretation. What can be verifiably said about an empirical statement with respect to this concept of truth is explained, for which purpose a constructive notion of epistemic probability is introduced. The study also aims to improve Carnap's inductive logic. It addresses the problem of justifying the use of an 'inductivist' method in Carnap's lambda-continuum, and introduces a correction rule for adjusting the inductive method itself in the course of obtaining evidence. Together with the constructive interpretation of probability, the correction rule yields positive prior probabilities for universal generalizations in infinite domains.
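For orientation, Carnap's lambda-continuum gives the predictive probability that the next individual is of type j, given n observed individuals of which n_j were of type j and k possible types, as (n_j + λ/k)/(n + λ). The sketch below shows only this classical formula; the correction rule proposed in the study, which adjusts the method as evidence accumulates, is not reproduced here.

```python
def predictive_prob(n_j, n, k, lam):
    """Carnap's lambda-continuum: P(next individual is of type j | evidence)."""
    return (n_j + lam / k) / (n + lam)

# lam -> 0 approaches the "straight rule" n_j / n, which follows the observed
# frequency; very large lam ignores the evidence and returns the logical
# width 1 / k. A correction rule would move lam as evidence comes in.
print(predictive_prob(n_j=8, n=10, k=2, lam=2))     # 0.75
print(predictive_prob(n_j=8, n=10, k=2, lam=1e9))   # ~0.5
```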

Relevance:

10.00%

Publisher:

Abstract:

In this study I discuss G. W. Leibniz's (1646-1716) views on rational decision-making from the standpoint of both God and man. The divine decision takes place within creation, as God freely chooses the best from an infinite number of possible worlds. While God's choice is based on absolutely certain knowledge, human decisions on practical matters are mostly based on uncertain knowledge; in many respects, however, the two can be regarded as analogous in more complicated situations. In addition to giving an overview of divine decision-making and critically discussing the criteria God favours in his choice, I provide an account of Leibniz's views on human deliberation, which includes some new ideas. One of these concerns the importance of estimating probabilities in making decisions: one estimates both the goodness of the act itself and its consequences as far as the desired good is concerned. Another idea is related to the plurality of goods in complicated decisions and the competition this may provoke. Thirdly, heuristic models are used to sketch situations under deliberation in order to help in making the decision. Combining the views of Marcelo Dascal, Jaakko Hintikka and Simo Knuuttila, I argue that Leibniz applied two kinds of models of rational decision-making to practical controversies, often without explicating the details. The simpler, traditional pair-of-scales model is best suited to cases in which one has to decide for or against some option, or to distribute goods among parties and strive for a compromise. What may be of more help in more complicated deliberations is the novel vectorial model, an instance of the general mathematical doctrine of the calculus of variations. To illustrate this distinction, I discuss some cases in which Leibniz apparently applied these models in different kinds of situations. These examples support the view that the models had a systematic value in his theory of practical rationality.
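One way to make the contrast between the two models concrete (purely as an illustration; this is not Leibniz's own formalism) is to let the pair of scales compare probability-weighted estimates of good for two options, while the vectorial model composes several competing goods into a resultant, compromise direction:

```python
import math

def scales(p_act, good_act, p_alt, good_alt):
    """Pair of scales: pick the option with the greater probability-weighted good."""
    return "act" if p_act * good_act > p_alt * good_alt else "alternative"

def resultant(goods):
    """Vectorial model: goods as (magnitude, direction) vectors; return their resultant."""
    x = sum(m * math.cos(a) for m, a in goods)
    y = sum(m * math.sin(a) for m, a in goods)
    return math.hypot(x, y), math.atan2(y, x)

print(scales(0.6, 10.0, 0.9, 5.0))          # 'act': 6.0 outweighs 4.5
print(resultant([(3.0, 0.0), (4.0, 1.0)]))  # compromise between two pulls
```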

Relevance:

10.00%

Publisher:

Abstract:

Objective: Attention deficit hyperactivity disorder (ADHD) is a life-long condition, but because of its historical status as a self-remitting disorder of childhood, empirically validated and reliable methods for the assessment of adults are scarce. In this study, the validity and reliability of the Wender Utah Rating Scale (WURS) and the Adult Problem Questionnaire (APQ), which survey childhood and current symptoms of ADHD, respectively, were studied in a Finnish sample. Methods: The self-rating scales were administered to adults with an ADHD diagnosis (n = 38), healthy control participants (n = 41), and adults diagnosed with dyslexia (n = 37). Items of the self-rating scales were subjected to factor analyses, after which the reliability and discriminatory power of the subscales derived from the factors were examined. The effects of group and gender on the subscales of both rating scales were studied, as was the effect of age on the subscales of the WURS. Finally, the diagnostic accuracy of the total scores was studied. Results: On the basis of the factor analyses, a four-factor structure for the WURS and a five-factor structure for the APQ had the best fit to the data. All of the subscales of the APQ and three of the WURS achieved sufficient reliability. The ADHD group had the highest scores on all of the subscales of the APQ, whereas two of the subscales of the WURS did not differ statistically between the ADHD and dyslexia groups. None of the subscales of the WURS or the APQ was associated with the participant's gender, but one subscale of the WURS describing dysthymia was positively correlated with the participant's age. With the WURS, the probability of a correct positive classification was .59 in the current sample and .21 when the relatively low prevalence of adult ADHD was taken into account. The corresponding probabilities for the APQ were .71 and .23. Conclusions: The WURS and the APQ can provide accurate and reliable information on childhood and adult ADHD symptoms, given some important constraints. Classifications made on the basis of the total scores are reliable predictors of an ADHD diagnosis only in populations with a high proportion of ADHD and a low proportion of other, similar disorders. The subscale scores can provide detailed information about an individual's symptoms if the characteristics and limitations of each domain are taken into account. Improvements are suggested for two subscales of the WURS.
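The drop from .59 to .21 (and from .71 to .23) is the familiar base-rate effect: the positive predictive value of a screening instrument depends on prevalence through Bayes' theorem. A small sketch with purely illustrative sensitivity and specificity values (the study's exact operating points are not given in the abstract):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same hypothetical test looks far better in a sample enriched with ADHD
# cases than at a realistic adult population prevalence.
print(ppv(0.85, 0.80, prevalence=0.33))  # enriched clinical sample, ~0.68
print(ppv(0.85, 0.80, prevalence=0.04))  # population screening, ~0.15
```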

Relevance:

10.00%

Publisher:

Abstract:

The objectives of this study were to determine secular trends in diabetes prevalence in China and to develop simple risk assessment algorithms for screening individuals at high risk for diabetes, or with undiagnosed diabetes, in Chinese and Indian adults. The study involved two consecutive population-based surveys in Chinese adults and a prospective study in Mauritian Indians. The Chinese surveys were conducted in randomly selected populations aged 20-74 years in 2001-2002 (n = 14 592) and 35-74 years in 2006 (n = 4416). A two-step screening strategy using fasting capillary plasma glucose (FCG) as the first-line screening test, followed by a standard 2-hour 75 g oral glucose tolerance test (OGTT), was applied to 12 436 individuals in 2001, while OGTTs were administered to all participants together with FCG in 2006 and to 2156 subjects in 2002. In Mauritius, two consecutive population-based surveys were conducted in Mauritian Indians aged 20-65 years in 1987 and 1992; 3094 Indians (1141 men) who were not diagnosed with diabetes at baseline were re-examined with OGTTs in 1992 and/or 1998. Diabetes and pre-diabetes were defined following the 2006 World Health Organization/International Diabetes Federation criteria. The age-standardized, as well as age- and sex-specific, prevalence of diabetes and pre-diabetes in adult Chinese increased significantly from 12.2% and 15.4% in 2001 to 16.0% and 21.2% in 2006, respectively. A simple Chinese diabetes risk score was developed based on the data of the 2001-2002 survey and validated in the population of the 2006 survey. The risk score, based on β coefficients derived from the final logistic regression model, ranged from 3 to 32. When the score was applied to the population of the 2006 survey, the area under the receiver operating characteristic curve (AUC) of the score for screening undiagnosed diabetes was 0.67 (95% CI, 0.65-0.70), which was lower than the AUC of FCG (0.76 [0.74-0.79]) but similar to that of HbA1c (0.68 [0.65-0.71]). At a cut-off point of 14, the sensitivity and specificity of the risk score in screening undiagnosed diabetes were 0.84 (0.81-0.88) and 0.40 (0.38-0.41), respectively. In Mauritian Indians, body mass index (BMI), waist girth, family history of diabetes (FH), and glucose were confirmed to be independent risk predictors for developing diabetes. Predicted probabilities for developing diabetes, derived from a simple Cox regression model fitted with sex, FH, BMI and waist girth, ranged from 0.05 to 0.64 in men and from 0.03 to 0.49 in women. For predicting the onset of diabetes, the AUC of the predicted probabilities was 0.62 (95% CI, 0.56-0.68) in men and 0.64 (0.59-0.69) in women. At a cut-off point of 0.12, the sensitivity and specificity were 0.72 (0.71-0.74) and 0.47 (0.45-0.49) in men, and 0.77 (0.75-0.78) and 0.50 (0.48-0.52) in women, respectively. In conclusion, there was a rapid increase in the prevalence of diabetes in Chinese adults from 2001 to 2006. The simple risk assessment algorithms based on age, obesity and family history of diabetes showed moderate discrimination of diabetes from non-diabetes, and may be used as a first-line screening tool for diabetes and pre-diabetes, and for health promotion purposes, in Chinese and Indians.
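Risk scores of this kind are typically constructed by rescaling and rounding the β coefficients of the final logistic model into integer points, which are then summed for each person. A generic sketch of that construction (the variables and weights below are hypothetical, not the published Chinese score):

```python
import math

# Hypothetical beta coefficients from a fitted logistic regression model.
betas = {"age_45_54": 0.69, "age_55_64": 1.10, "bmi_25_30": 0.41,
         "bmi_over_30": 0.92, "family_history": 0.55, "central_obesity": 0.62}

def score(person, scale=10):
    """Integer risk score: sum of rescaled, rounded betas for present factors."""
    return sum(round(scale * b) for key, b in betas.items() if person.get(key))

def model_risk(intercept, person):
    """The underlying model probability, for comparison with the point score."""
    logit = intercept + sum(b for key, b in betas.items() if person.get(key))
    return 1 / (1 + math.exp(-logit))

subject = {"age_55_64": True, "bmi_25_30": True, "family_history": True}
print(score(subject))              # integer point total for this person
print(model_risk(-4.0, subject))   # corresponding model probability
```

Choosing a cut-off point on such a score then trades sensitivity against specificity, exactly as reported above for the cut-off of 14.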

Relevance:

10.00%

Publisher:

Abstract:

Phosphorus is a nutrient needed in crop production. While boosting crop yields, it may also accelerate eutrophication in the surface waters receiving the phosphorus runoff. The privately optimal level of phosphorus use is determined by input and output prices and by the crop response to phosphorus; the socially optimal use also takes into account the impact of phosphorus runoff on water quality. Increased eutrophication decreases the economic value of surface waters by deteriorating fish stocks, curtailing the potential for recreational activities, and increasing the probability of mass algae blooms. In this dissertation, the optimal use of phosphorus is modelled as a dynamic optimization problem. The potentially plant-available phosphorus accumulated in the soil is treated as a dynamic state variable, the control variable being annual phosphorus fertilization. For the crop response to phosphorus, the state variable is more important than the annual fertilization, and its level is also a key determinant of the runoff of dissolved, reactive phosphorus. The loss of particulate phosphorus due to erosion is also considered in the thesis, as well as its mitigation by constructing vegetative buffers. The dynamic model is applied to crop production on clay soils. At the steady state, the analysis focuses on the effects of prices, damage parameterization, the discount rate and the soil phosphorus carryover capacity on optimal steady-state phosphorus use. The economic instruments needed to sustain the social optimum are also analyzed. According to the results, the economic incentives should be conditioned directly on soil phosphorus values, rather than on annual phosphorus applications. The results also emphasize the substantial effects that differences between the farmer's and the social planner's discount rates have on the optimal instruments. The thesis analyzes the optimal soil phosphorus paths from alternative initial levels and examines how the erosion susceptibility of a parcel affects these optimal paths. The results underline the significance of the prevailing soil phosphorus status for optimal fertilization levels. With very high initial soil phosphorus levels, both the privately and the socially optimal phosphorus application levels are close to zero as the state variable is driven towards its steady state. The soil phosphorus processes are slow; depleting high-phosphorus soils may therefore take decades. The thesis also presents a methodologically interesting phenomenon in problems of maximizing the flow of discounted payoffs. When both the benefits and the damages are related to the same state variable, the steady-state solution may have an interesting property under very general conditions: the tail of the payoffs of the privately optimal path, as well as of the steady state, may provide higher social welfare than the respective tail of the socially optimal path. The result is formalized and applied to the framework of optimal phosphorus use created in the thesis.
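The structure of the dynamic problem can be sketched in a few lines: soil phosphorus is the state variable, annual fertilization the control, and the payoff is a discounted stream of crop revenue minus fertilizer cost and runoff damage. All functional forms and parameters below are illustrative assumptions, not the thesis's calibrated model for clay soils.

```python
def discounted_payoff(P0, policy, years=60, carryover=0.9, uptake=0.3,
                      delta=0.95, p_crop=1.0, p_fert=0.5, damage=0.1):
    """Discounted payoff of a fertilization policy u = policy(P)."""
    P, total = P0, 0.0
    for t in range(years):
        u = policy(P)
        crop = 40.0 * P / (30.0 + P)     # Mitscherlich-type response to soil P
        runoff = 0.05 * P                # dissolved-P runoff rises with soil P
        profit = p_crop * crop - p_fert * u - damage * runoff
        total += delta ** t * profit     # discounting
        P = carryover * P + uptake * u   # soil-P carryover dynamics
    return total

# Compare simple constant policies from alternative initial soil-P levels;
# a social planner would use a larger damage weight than a private farmer.
for P0 in (10.0, 60.0):
    for u in (0.0, 10.0, 20.0):
        print(P0, u, round(discounted_payoff(P0, lambda P, u=u: u), 1))
```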

Relevance:

10.00%

Publisher:

Abstract:

This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) could be thought of as the algebraic classification of some basic objects in these models; it has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods for studying random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what a plausible relation between SLEs and conformal field theory would be. The first article studies multiple SLEs, that is, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article, the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas. The fourth article states results of applications of the Virasoro structure to the open questions of SLE reversibility and duality; proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
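Chordal SLE(kappa) is generated by the Loewner equation dg_t(z)/dt = 2/(g_t(z) - W_t) with driving process W_t = sqrt(kappa) B_t, a Brownian motion. A standard way to approximate the trace numerically is to compose the inverse slit maps of a piecewise-constant driving function; the following numpy sketch is that textbook discretization, not code from the thesis.

```python
import numpy as np

def sle_trace(kappa, n_steps=800, dt=1e-3, seed=1):
    """Approximate a chordal SLE(kappa) trace via piecewise-constant driving.

    Over one step the incremental map is g(z) = w + sqrt((z - w)^2 + 4*dt),
    so the tip at time t_n is f_1 o ... o f_n (W_n), where f_k is the inverse
    slit map with driving value w = W_k.
    """
    rng = np.random.default_rng(seed)
    W = np.concatenate(([0.0], np.cumsum(np.sqrt(kappa * dt)
                                         * rng.standard_normal(n_steps))))
    a = 2.0 * np.sqrt(dt)
    trace = np.empty(n_steps + 1, dtype=complex)
    for n in range(n_steps + 1):
        z = complex(W[n])
        for k in range(n, 0, -1):   # compose inverse maps, newest first
            w = W[k]
            # sqrt((z - w)^2 - 4*dt) on the branch staying in the upper half
            # plane: split into two principal square roots.
            z = w + np.sqrt(z - w - a) * np.sqrt(z - w + a)
        trace[n] = z
    return trace

curve = sle_trace(kappa=8.0 / 3.0)   # kappa = 8/3: the self-avoiding regime
print(curve[:3])                     # first few points of the trace in H
```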

Relevance:

10.00%

Publisher:

Abstract:

Many species inhabit fragmented landscapes, resulting either from anthropogenic or from natural processes. The ecological and evolutionary dynamics of spatially structured populations are affected by a complex interplay between endogenous and exogenous factors. The metapopulation approach, which simplifies the landscape to a discrete set of patches of breeding habitat surrounded by unsuitable matrix, has become a widely applied paradigm for the study of species inhabiting highly fragmented landscapes. In this thesis, I focus on the construction of biologically realistic models and their parameterization with empirical data, with the general objective of understanding how the interactions between individuals and their spatially structured environment affect ecological and evolutionary processes in fragmented landscapes. I study two hierarchically structured model systems: the Glanville fritillary butterfly in the Åland Islands, and a system of two interacting aphid species in the Tvärminne archipelago, both located in south-western Finland. The interesting and challenging feature of both study systems is that the population dynamics occur over multiple spatial scales that are linked by various processes. My main emphasis is on the development of mathematical and statistical methodologies. For the Glanville fritillary case study, I first build a Bayesian framework for the estimation of death rates and capture probabilities from mark-recapture data, with the novelty of accounting for variation among individuals in capture probability and survival. I then characterize the dispersal phase of the butterflies by deriving a mathematical approximation of a diffusion-based movement model applied to a network of patches. I use the movement model as a building block to construct an individual-based evolutionary model of the Glanville fritillary metapopulation, which I parameterize using a pattern-oriented approach and use to study how landscape structure affects the evolution of dispersal. For the aphid case study, I develop a Bayesian model of hierarchical multi-scale metapopulation dynamics, in which the observed extinction and colonization rates are decomposed into intrinsic rates operating specifically at each spatial scale. In summary, I show how analytical approaches, hierarchical Bayesian methods and individual-based simulations can be used individually or in combination to tackle complex problems from many different viewpoints. In particular, hierarchical Bayesian methods provide a useful tool for decomposing ecological complexity into more tractable components.
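As a toy version of the Bayesian mark-recapture machinery, consider estimating a single capture probability p: with a Beta(a, b) prior and y detections over n capture occasions of an animal known to be alive throughout, the posterior is Beta(a + y, b + n - y). A minimal sketch with hypothetical counts (the thesis's model is far richer, adding individual heterogeneity in capture probability and survival):

```python
from scipy import stats

a, b = 1.0, 1.0                 # uniform Beta(1, 1) prior on p
detections, occasions = 7, 12   # hypothetical capture-history totals

posterior = stats.beta(a + detections, b + occasions - detections)
print(posterior.mean())          # posterior mean of the capture probability
print(posterior.interval(0.95))  # central 95% credible interval
```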

Relevance:

10.00%

Publisher:

Abstract:

Ongoing habitat loss and fragmentation threaten much of the biodiversity that we know today, and conservation efforts are required if we want to protect it. Conservation budgets are typically tight, making the cost-effective selection of protected areas difficult. Reserve design methods have therefore been developed to identify sets of sites that together represent the species of conservation interest in a cost-effective manner. To select reserve networks, data on species distributions are needed. Such data are often incomplete, but species habitat distribution models (SHDMs) can be used to link the occurrence of a species at surveyed sites to the environmental conditions at those locations (e.g. climatic, vegetation and soil conditions); the probability of the species occurring at unvisited locations is then predicted by the model, based on the environmental conditions of those sites. The spatial configuration of reserve networks is important, because habitat loss around reserves can influence the persistence of species inside the network. Since species differ in their requirements for network configuration, the spatial cohesion of networks needs to be species-specific. A way to account for species-specific requirements is to use spatial variables in SHDMs. Spatial SHDMs allow the evaluation of the effect of reserve network configuration on the probability of occurrence of the species inside the network. Even though reserves are important for conservation, they are not the only option available to conservation planners. To enhance or maintain habitat quality, restoration or maintenance measures are sometimes required, which increases the number of conservation options per site. Currently available reserve selection tools do not, however, offer the ability to handle multiple, alternative options per site. This thesis extends the existing methodology for reserve design by offering methods to identify cost-effective conservation planning solutions when multiple, alternative conservation options are available per site. Although restoration and maintenance measures are beneficial to certain species, they can be harmful to other species with different requirements. This introduces trade-offs between species when identifying which conservation action is best applied to which site. The thesis describes how the strength of such trade-offs can be identified, which is useful for assessing the consequences of conservation decisions regarding species priorities and budget. Furthermore, the results of the thesis indicate that spatial SHDMs can be successfully used to account for species-specific requirements for spatial cohesion, in the single-option reserve selection context as well as in the multi-option context. Accounting for the spatial requirements of multiple species while allowing for several conservation options is, however, complicated, due to trade-offs in species requirements. It is also shown that spatial SHDMs can be successfully used for gaining information on the factors that drive a species' spatial distribution. Such information is valuable to conservation planning, as better knowledge of species requirements facilitates the design of networks for species persistence. The methods and results described in this thesis aim to improve species' probabilities of persistence by taking better account of species' habitat and spatial requirements.
Many real-world conservation planning problems are characterised by a variety of conservation options related to the protection, restoration and maintenance of habitat. Planning tools therefore need to be able to incorporate multiple conservation options per site in order to continue the search for cost-effective conservation planning solutions, while simultaneously considering the spatial requirements of species. The methods described in this thesis offer a starting point for combining these two relevant aspects of conservation planning.
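The basic single-option problem underlying these methods is a weighted set-cover: choose a minimum-cost set of sites whose pooled occurrences cover all target species. A greedy cost-effectiveness heuristic is the standard baseline; a minimal sketch with hypothetical data (the multi-option, spatially explicit methods of the thesis generalize well beyond this):

```python
# Greedy reserve selection: repeatedly pick the site that covers the most
# still-unrepresented species per unit cost. Sites map to (species, cost).
sites = {
    "A": ({"sp1", "sp2"}, 4.0),
    "B": ({"sp2", "sp3", "sp4"}, 5.0),
    "C": ({"sp1", "sp4"}, 2.0),
    "D": ({"sp5"}, 1.0),
}
targets = {"sp1", "sp2", "sp3", "sp4", "sp5"}

chosen, covered = [], set()
while covered < targets:
    name, gain, cost = max(
        ((s, len(spp - covered), c) for s, (spp, c) in sites.items()
         if s not in chosen),
        key=lambda t: t[1] / t[2])
    if gain == 0:
        break                      # remaining targets cannot be covered
    chosen.append(name)
    covered |= sites[name][0]

print(chosen, sum(sites[s][1] for s in chosen))  # selected sites, total cost
```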

Relevance:

10.00%

Publisher:

Abstract:

Wild salmon stocks in the northern Baltic rivers became endangered in the second half of the 20th century, mainly due to recruitment overfishing. As a result, supplementary stocking was widely practised, and supplementation of the Tornionjoki salmon stock took place over a 25-year period until 2002. The stock has been closely monitored by electrofishing, smolt trapping, mark-recapture studies, catch samples and catch surveys. Background information on hatchery-reared stocked juveniles was also collected for this study. Bayesian statistics was applied to the data, as this method offers the possibility of bringing prior information into the analysis, an advanced ability to incorporate uncertainty, and probabilities for a multitude of hypotheses. Substantial divergences between reared and wild Tornionjoki salmon were identified in both demographic and phenological characteristics. The divergences tended to be larger the longer the duration spent in the hatchery and the more favourable the hatchery conditions were for fast growth. Differences in environment likely induced most of the divergences, but the selection of brood fish might have resulted in genotypic divergence in the maturation age of reared salmon. Survival of stocked one-year-old juveniles to the smolt stage varied from about 10% to about 25%. Stocking on the lower reach of the river seemed to decrease survival, and the negative effect of stocking volume on survival raises concern about possible similar effects on the extant wild population. Post-smolt survival of wild Tornionjoki smolts was on average two times higher than that of smolts stocked as parr and 2.5 times higher than that of stocked smolts. Smolts of the different groups showed synchronous variation and similar long-term survival trends. Both groups of reared salmon were more vulnerable to offshore driftnet and coastal trapnet fishing than wild salmon. Average survival from smolt to spawner of wild salmon was 2.8 times higher than that of salmon stocked as parr and 3.3 times higher than that of salmon stocked as smolts. Wild salmon and salmon stocked as parr were found to have similar lifetime survival rates, while stocked smolts had a lifetime survival rate over four times higher than the two other groups. If eggs are collected from wild brood fish, stocking parr would therefore not be a sensible option. Stocking smolts instead would create a net benefit in terms of the number of spawners, but this strategy has serious drawbacks and risks associated with the larger phenotypic and demographic divergences from wild salmon. Supplementation was shown not to be the key factor behind the recovery of the Tornionjoki and other northern Baltic salmon stocks. Instead, a combination of restrictions in the sea fishery and the simultaneous occurrence of favourable natural conditions for survival was the main reason for the revival in the 1990s. This study questions the effectiveness of supplementation as a conservation management tool. The benefits of supplementation seem at best limited, and relatively high occurrences of reared fish in catches may generate false optimism concerning its effects. Supplementation may also entail genetic risks due to problems in brood fish collection and to artificial rearing with relaxed natural selection and domestication. Appropriate management of fisheries is the main alternative to supplementation, without which all other efforts for the long-term maintenance of a healthy fish resource fail.

Relevance:

10.00%

Publisher:

Abstract:

The technological development of fast multisection helical computed tomography (CT) scanners has made computed tomography perfusion (CTp) and angiography (CTA) feasible in the evaluation of acute ischemic stroke. This study focuses on new multidetector CT techniques, namely whole-brain and first-pass CT perfusion, plus CTA of the carotid arteries. Whole-brain CTp data are acquired during slow infusion of contrast material to achieve a constant contrast concentration in the cerebral vasculature. From these data, quantitative maps of perfused cerebral blood volume (pCBV) are constructed. The probability curve of cerebral infarction as a function of normalized pCBV was determined in patients with acute ischemic stroke. Normalized pCBV, expressed as a percentage of contralateral normal brain pCBV, was determined in the infarction core and in regions just inside and outside the boundary between infarcted and noninfarcted brain. The corresponding probabilities of infarction were 0.99, 0.96, and 0.11, R² was 0.73, and the differences in perfusion between the core and the inner and outer bands were highly significant. Thus a probability-of-infarction curve can help predict the likelihood of infarction as a function of percentage normalized pCBV. First-pass CT perfusion is based on continuous cine imaging over a selected brain area during a bolus injection of contrast. During its first passage, contrast material compartmentalizes in the intravascular space, resulting in transient tissue enhancement. Functional maps of cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) are then constructed. We compared the effects of three different iodine concentrations (300, 350, or 400 mg/mL) on peak enhancement of normal brain tissue, artery, and vein, stratified by region-of-interest (ROI) location, in 102 patients imaged within 3 hours of stroke onset. Monotonically increasing peak opacification was evident at all ROI locations, suggesting that CTp evaluation of patients with acute stroke is best performed with the highest available concentration of contrast agent. In another study we investigated whether lesion volumes on CBV, CBF, and MTT maps within 3 hours of stroke onset predict final infarct volume, and whether all these parameters are needed for triage to intravenous recombinant tissue plasminogen activator (IV-rtPA). The effect of IV-rtPA on the affected brain was also investigated by measuring the volume of salvaged tissue in patients receiving IV-rtPA and in controls. CBV lesion volume did not necessarily represent dead tissue. MTT lesion volume alone can serve to identify the upper size limit of the abnormally perfused brain, and patients receiving IV-rtPA salvaged more brain tissue than did controls. Carotid CTA was compared with carotid digital subtraction angiography (DSA) in the grading of stenosis in patients with stroke symptoms. In CTA, the grade of stenosis was determined by means of axial source and maximum intensity projection (MIP) images as well as a semiautomatic vessel analysis. CTA provides an adequate, less invasive alternative to conventional DSA, although it tends to underestimate clinically relevant grades of stenosis.
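The probability-of-infarction curve is, in effect, a logistic-type regression of infarct outcome on normalized pCBV. A sketch of how such a curve could be fit, using hypothetical region-level data rather than the study's measurements:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: normalized pCBV (% of contralateral normal brain) per
# region, and whether the region infarcted on follow-up imaging.
pcbv = np.array([[35], [42], [50], [58], [66], [74], [82], [90], [98], [105]])
infarcted = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

model = LogisticRegression().fit(pcbv, infarcted)
for x in (40, 70, 100):
    p = model.predict_proba([[x]])[0, 1]
    print(f"normalized pCBV {x}%: P(infarction) = {p:.2f}")
```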


Relevance:

10.00%

Publisher:

Abstract:

We present a search for associated production of the standard model (SM) Higgs boson and a $Z$ boson where the $Z$ boson decays to two leptons and the Higgs decays to a pair of $b$ quarks in $p\bar{p}$ collisions at the Fermilab Tevatron. We use event probabilities based on SM matrix elements to construct a likelihood function of the Higgs content of the data sample. In a CDF data sample corresponding to an integrated luminosity of 2.7 fb$^{-1}$ we see no evidence of a Higgs boson with a mass between 100 GeV$/c^2$ and 150 GeV$/c^2$. We set 95% confidence level (C.L.) upper limits on the cross-section for $ZH$ production as a function of the Higgs boson mass $m_H$; the limit is 8.2 times the SM prediction at $m_H = 115$ GeV$/c^2$.
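The likelihood construction can be sketched generically: with per-event signal and background probabilities P_s(x_i) and P_b(x_i) computed from matrix elements, the likelihood in the signal fraction f is L(f) = prod_i [f P_s(x_i) + (1 - f) P_b(x_i)]. The toy scan below uses random placeholders for the per-event probabilities and a flat-prior integration for the limit; it illustrates only the shape of the method, not CDF's actual statistical procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
p_sig = rng.uniform(0.1, 1.0, size=200)  # stand-ins for per-event ME probabilities
p_bkg = rng.uniform(0.1, 1.0, size=200)

def nll(f):
    """Negative log-likelihood of signal fraction f."""
    return -np.sum(np.log(f * p_sig + (1 - f) * p_bkg))

fs = np.linspace(0.0, 1.0, 201)
nlls = np.array([nll(f) for f in fs])
print("best-fit signal fraction:", fs[np.argmin(nlls)])

# Flat-prior 95% upper limit from the normalized likelihood curve.
like = np.exp(-(nlls - nlls.min()))
cdf = np.cumsum(like) / like.sum()
print("95% upper limit on f:", fs[np.searchsorted(cdf, 0.95)])
```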

Relevance:

10.00%

Publisher:

Abstract:

We report the first measurement of the cross section for $Z$ boson pair production at a hadron collider. This result is based on a data sample corresponding to 1.9 fb$^{-1}$ of integrated luminosity from $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV collected with the CDF II detector at the Fermilab Tevatron. In the $\ell\ell\ell\ell$ channel, we observe three $ZZ$ candidates with an expected background of $0.096^{+0.092}_{-0.063}$ events. In the $\ell\ell\nu\nu$ channel, we use a leading-order calculation of the relative $ZZ$ and $WW$ event probabilities to discriminate between signal and background. In the combination of the $\ell\ell\ell\ell$ and $\ell\ell\nu\nu$ channels, we observe an excess of events with a probability of $5.1\times 10^{-6}$ of being due to the expected background, corresponding to a significance of 4.4 standard deviations. The measured cross section is $\sigma(p\bar{p} \to ZZ) = 1.4^{+0.7}_{-0.6}$ (stat.+syst.) pb, consistent with the standard model expectation.
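Two of the quoted numbers can be checked directly: the conversion of the combined background-only p-value into a Gaussian significance, and the Poisson tail probability of seeing three or more $\ell\ell\ell\ell$ events on the expected background (central value only, ignoring its uncertainty):

```python
from scipy import stats

p_combined = 5.1e-6
print(f"significance: {stats.norm.isf(p_combined):.1f} sigma")  # ~4.4 sigma

b = 0.096                        # expected llll background
p_llll = stats.poisson.sf(2, b)  # P(N >= 3 | b), one channel only
print(f"llll-channel Poisson p-value: {p_llll:.1e}")
```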