20 results for two-mass model

in Helda - Digital Repository of University of Helsinki


Relevance: 80.00%

Abstract:

The educational reform launched in Finland in 2008 concerns the implementation of the Special Education Strategy (Opetusministeriö 2007) under an improvement initiative called Kelpo. One of the main alterations proposed in the Strategy relates to the support system for comprehensive school pupils: the existing two-level model (general and special support) is to be replaced by a new three-level model (general, intensified and special support). There are 233 municipalities involved nationwide in the Kelpo initiative, each of which has a municipal coordinator acting as a national delegate. The Centre for Educational Assessment [the Centre] at the University of Helsinki, led by Professor Jarkko Hautamäki, carries out the developmental assessment of the initiative's development process. As part of that assessment the Centre interviewed 151 municipal coordinators in November 2008. This thesis considers the Kelpo initiative from the perspective of Michael Fullan's change theory. The aim is to identify change-theoretical factors in the speech of the municipal coordinators interviewed by the Centre and to form a view of the crucial factors in the reform implementation process. The appearance of change-theoretical factors in the coordinators' speech, and the meaning of these appearances, are considered from the point of view of the change process. The Centre collected the data by interviewing the municipal coordinators (n=151) in small groups of 4-11 people. The interview method was based on Vesala and Rantanen's (2007) qualitative attitude survey method, which was adapted and further developed for the Centre's developmental assessment by Hilasvuori. The method of analysis was qualitative theory-based content analysis, carried out using the Atlas.ti software. The theoretical frame of reference was grounded in Fullan's change theory, and the analysis was based on three change-theoretical categories: implementation, cooperation and perspectives in the change process. The analysis of the interview data revealed spoken expressions in the coordinators' speech that were either positively or negatively related to the theoretical categories. On the grounds of these change-theoretical relations, the existence of the change process was observed. The crucial factors of reform implementation were identified, and the conclusion is that the encounter between the new reform-based strategies and the strategies already existing in schools produces interface challenges. These challenges are confronted particularly in the context of the implementation of the new three-level support model. The interface challenges are classified as conceptual, method-based, action-based and belief-based challenges.
Keywords: reform, implementation, change process, Michael Fullan, Kelpo, intensified support, special support

Relevance: 80.00%

Abstract:

Colorectal cancer (CRC) is one of the most frequent malignancies in Western countries. Inherited factors have been suggested to be involved in 35% of CRCs. The hereditary CRC syndromes explain only ~6% of all CRCs, indicating that a large proportion of the inherited susceptibility is still unexplained. Much of the remaining genetic predisposition for CRC is probably due to undiscovered low-penetrance variants. This study was conducted to identify germline and somatic changes that contribute to CRC predisposition and tumorigenesis. MLH1 and MSH2, which underlie hereditary non-polyposis colorectal cancer (HNPCC), are considered to be tumor suppressor genes; the first hit is inherited in the germline, and somatic inactivation of the wild-type allele is required for tumor initiation. In a recent study, frequent loss of the mutant allele was detected in HNPCC tumors, and a new model, arguing against the two-hit hypothesis, was proposed for somatic HNPCC tumorigenesis. We tested this hypothesis by conducting LOH analysis on 25 colorectal HNPCC tumors with a known germline mutation in the MLH1 or MSH2 genes. LOH was detected in 56% of the tumors. All the losses targeted the wild-type allele, supporting the classical two-hit model for HNPCC tumorigenesis. The variants 3020insC, R702W and G908R in NOD2 predispose to Crohn's disease. The contribution of NOD2 to CRC predisposition has been examined in several case-control series, with conflicting results. We have previously shown that 3020insC does not predispose to CRC in Finnish CRC patients. To expand on our previous study, the variants R702W and G908R were genotyped in a population-based series of 1042 Finnish CRC patients and 508 healthy controls. Association analyses did not show significant evidence for association of the variants with CRC. Single nucleotide polymorphism (SNP) rs6983267 at chromosome 8q24 was the first CRC susceptibility variant identified through genome-wide association studies. To characterize the role of rs6983267 in CRC predisposition in the Finnish population, we genotyped the SNP in case-control material of 1042 cases and 1012 controls and showed that the G allele of rs6983267 is associated with an increased risk of CRC (OR 1.22; P=0.0018). Examination of allelic imbalance in the tumors heterozygous for rs6983267 revealed that copy number increase affected 22% of the tumors and, interestingly, favored the G allele. By utilizing a computer algorithm, Enhancer Element Locator (EEL), an evolutionarily conserved regulatory motif containing rs6983267 was identified. The SNP affected the binding site of TCF4, a transcription factor that mediates Wnt signaling in cells and has proven to be crucial in colorectal neoplasia. The preferential binding of TCF4 to the risk allele G was shown in vitro and in vivo. The element drove lacZ marker gene expression in mouse embryos in a pattern consistent with genes regulated by the Wnt signaling pathway. These results suggest that rs6983267 at 8q24 exerts its effect on CRC predisposition by regulating gene expression. The most obvious target gene for the enhancer element is MYC, residing ~335 kb downstream; however, further studies are required to establish the transcriptional target(s) of the predicted enhancer element.
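
The association figures quoted above (OR 1.22; P=0.0018) come from a standard allelic case-control test. As a purely illustrative sketch, the following Python snippet computes an odds ratio and a chi-square p-value from a 2x2 table of allele counts; the counts are hypothetical and are not the study's genotype data.

    # Hypothetical allele counts for a case-control association test
    # (illustrative only; not the actual rs6983267 genotype data).
    from scipy.stats import chi2_contingency

    # rows: cases, controls; columns: risk allele G, other allele T
    table = [[1150, 934],    # cases:    G and T allele counts (two alleles per person)
             [1010, 1014]]   # controls: G and T allele counts

    odds_ratio = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])
    chi2, p_value, dof, expected = chi2_contingency(table)

    print(f"allelic odds ratio = {odds_ratio:.2f}, chi-square p = {p_value:.4f}")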

Relevance: 80.00%

Abstract:

The use of remote sensing imagery as auxiliary data in forest inventory is based on the correlation between features extracted from the images and the ground truth. Bidirectional reflectance and radial displacement cause variation in image features located in different segments of the image even when the forest characteristics remain the same. This variation has so far been reduced with different radiometric corrections. In this study, the use of sun-azimuth-based converted image co-ordinates was examined to supplement auxiliary data extracted from digitised aerial photographs. The method was considered as an alternative to radiometric corrections. Additionally, the usefulness of multi-image interpretation of digitised aerial photographs in regression estimation of forest characteristics was studied. The state-owned study area was located in Leivonmäki, Central Finland, and the study material consisted of five digitised and ortho-rectified colour-infrared (CIR) aerial photographs and field measurements of 388 plots, of which 194 were relascope (Bitterlich) plots and 194 were concentric circular plots. Both the image data and the field measurements were from the year 1999. When the effect of the location of the image point on pixel values and texture features of Finnish forest plots in digitised CIR photographs was examined, the clearest differences were found between the front- and back-lighted image halves. Within an image half, the differences between blocks were clearly bigger on the front-lighted half than on the back-lighted half. The strength of the phenomenon varied by forest category. The differences between pixel values extracted from different image blocks were greatest in developed and mature stands and smallest in young stands. The differences between texture features were greatest in developing stands and smallest in young and mature stands. The logarithm of timber volume per hectare and the angular transformation of the proportion of broadleaved trees of the total volume were used as dependent variables in the regression models. Five different trend surfaces based on converted image co-ordinates were used in the models in order to diminish the effect of the bidirectional reflectance. The reference model of total volume, in which the location of the image point had been ignored, resulted in an RMSE of 1.268 calculated from the test material. The best of the trend surfaces was the complete third-order surface, which resulted in an RMSE of 1.107. The reference model of the proportion of broadleaved trees resulted in an RMSE of 0.4292, and the second-order trend surface was the best, resulting in an RMSE of 0.4270. The trend surface method is applicable, but it has to be applied by forest category and by variable. The usefulness of multi-image interpretation of digitised aerial photographs was studied by building comparable regression models using either the front-lighted image features, the back-lighted image features or both. The two-image model turned out to be slightly better than the one-image models in total volume estimation. The best one-image model resulted in an RMSE of 1.098 and the two-image model in an RMSE of 1.090. The homologous features did not improve the models of the proportion of broadleaved trees. The overall result gives motivation for further research on multi-image interpretation. The focus may be on improving regression estimation and feature selection or on examination of the stratification used in two-phase sampling inventory techniques.
Keywords: forest inventory, digitised aerial photograph, bidirectional reflectance, converted image co-ordinates, regression estimation, multi-image interpretation, pixel value, texture, trend surface
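
To make the trend surface idea concrete, the sketch below fits a complete third-order surface in converted image co-ordinates as additional regressors for log volume using ordinary least squares. All data, coefficients and variable names are hypothetical stand-ins, not the thesis material.

    # Minimal sketch: a complete third-order trend surface in converted image
    # co-ordinates (u, v) added to spectral/texture regressors for log(volume/ha).
    # The data below are simulated stand-ins, not the Leivonmäki study material.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    u, v = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)   # converted image co-ordinates
    spectral = rng.normal(size=(n, 3))                     # e.g. mean pixel values / texture
    log_volume = (4.0 + spectral @ np.array([0.5, -0.3, 0.2])
                  + 0.4 * u - 0.2 * u * v + rng.normal(0, 0.3, n))

    # Complete third-order surface: all monomials u**i * v**j with i + j <= 3
    trend = np.column_stack([u**i * v**j for i in range(4) for j in range(4 - i)])
    X = np.column_stack([trend, spectral])                 # trend already contains the constant term

    coef, *_ = np.linalg.lstsq(X, log_volume, rcond=None)
    rmse = np.sqrt(np.mean((log_volume - X @ coef) ** 2))
    print(f"training RMSE of log volume: {rmse:.3f}")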

Relevance: 80.00%

Abstract:

What can the statistical structure of natural images teach us about the human brain? Even though the visual cortex is one of the most studied parts of the brain, surprisingly little is known about how exactly images are processed to leave us with a coherent percept of the world around us, so we can recognize a friend or drive on a crowded street without any effort. By constructing probabilistic models of natural images, the goal of this thesis is to understand the structure of the stimulus that is the raison d'être for the visual system. Following the hypothesis that the optimal processing has to be matched to the structure of that stimulus, we attempt to derive computational principles, features that the visual system should compute, and properties that cells in the visual system should have. Starting from machine learning techniques such as principal component analysis and independent component analysis we construct a variety of statistical models to discover structure in natural images that can be linked to receptive field properties of neurons in primary visual cortex such as simple and complex cells. We show that by representing images with phase invariant, complex cell-like units, a better statistical description of the visual environment is obtained than with linear simple cell units, and that complex cell pooling can be learned by estimating both layers of a two-layer model of natural images. We investigate how a simplified model of the processing in the retina, where adaptation and contrast normalization take place, is connected to the natural stimulus statistics. Analyzing the effect that retinal gain control has on later cortical processing, we propose a novel method to perform gain control in a data-driven way. Finally we show how models like those presented here can be extended to capture whole visual scenes rather than just small image patches. By using a Markov random field approach we can model images of arbitrary size, while still being able to estimate the model parameters from the data.
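
As a rough illustration of the modelling approach described above, the sketch below learns linear features from whitened image patches with independent component analysis, a first step towards the two-layer, complex-cell models discussed in the thesis. The image source and parameters are placeholders (random data), so the learned filters are not meaningful until a real natural image is substituted.

    # Minimal sketch: ICA on whitened image patches, the kind of model that
    # yields simple-cell-like linear filters. The "image" here is random noise
    # standing in for a real natural image; substitute real data to obtain
    # Gabor-like filters.
    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(0)
    image = rng.normal(size=(256, 256))        # stand-in for a natural image
    patch, n_patches = 16, 5000

    # Sample random 16x16 patches and flatten them
    rows = rng.integers(0, 256 - patch, n_patches)
    cols = rng.integers(0, 256 - patch, n_patches)
    X = np.stack([image[r:r + patch, c:c + patch].ravel() for r, c in zip(rows, cols)])
    X -= X.mean(axis=1, keepdims=True)         # remove the DC component of each patch

    # Reduce dimension and whiten with PCA, then estimate independent components
    X_white = PCA(n_components=64, whiten=True).fit_transform(X)
    ica = FastICA(n_components=64, max_iter=500, random_state=0).fit(X_white)
    filters = ica.components_                  # rows ~ learned linear (simple-cell-like) filters
    print(filters.shape)                       # (64, 64)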

Relevance: 80.00%

Abstract:

This licentiate thesis analyzes the macroeconomic effects of fiscal policy in a small open economy under a flexible exchange rate regime, assuming that the government spends exclusively on domestically produced goods. The motivation for this research comes from the observation that the literature on the new open economy macroeconomics (NOEM) has focused almost exclusively on two-country global models, and analyses of the effects of fiscal policy on small economies have been almost completely ignored. This thesis aims at filling this gap in the NOEM literature and illustrates how the macroeconomic effects of fiscal policy in a small open economy depend on the specification of preferences. The research method is to present two theoretical models that are extensions of the model contained in the Appendix to Obstfeld and Rogoff (1995). The first model analyzes the macroeconomic effects of fiscal policy using a model that exploits the idea of modelling private and government consumption as substitutes in private utility. The model offers intuitive predictions on how the effects of fiscal policy depend on the marginal rate of substitution between private and government consumption. The findings illustrate that the higher the substitutability between private and government consumption, (i) the bigger the crowding-out effect on private consumption and (ii) the smaller the positive effect on output. The welfare analysis shows that the higher the marginal rate of substitution between private and government consumption, the less fiscal policy decreases welfare. The second model of this thesis studies how the macroeconomic effects of fiscal policy depend on the elasticity of substitution between traded and nontraded goods. This model reveals that this elasticity is a key variable in explaining the exchange rate, current account and output response to a permanent rise in government spending. Finally, the model demonstrates that temporary changes in government spending are an effective stabilization tool when used wisely and in a timely manner in response to undesired fluctuations in output. Undesired fluctuations in output can be perfectly offset by an opposite change in government spending without causing any side-effects.
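
One standard way to formalize the substitutability discussed above, offered here only as a hypothetical illustration and not as the exact specification used in the thesis, is to let utility depend on a composite of private consumption C_t and government consumption G_t:

    % illustrative preference specification (the notation C_t, G_t, \gamma is introduced here)
    C^{e}_{t} = C_{t} + \gamma G_{t}, \qquad 0 \le \gamma \le 1,
    \qquad U_{t} = \log C^{e}_{t} + \text{(other terms of the Obstfeld--Rogoff setup)}

In this specification \gamma is the constant marginal rate of substitution between government and private consumption: \gamma = 0 means government spending does not enter utility, while values of \gamma close to one make government purchases near-perfect substitutes for private consumption, which strengthens the crowding-out effect described above.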

Relevance: 80.00%

Abstract:

To a large extent, lakes can be described with a one-dimensional approach, as their main features can be characterized by the vertical temperature profile of the water. The development of the profile during the year follows the seasonal climate variations. Depending on conditions, lakes become stratified during the warm summer. As the water cools in autumn, overturn occurs and an ice cover forms. Typically, the water is inversely stratified under the ice, and another overturn occurs in spring after the ice has melted. Features of this circulation have been used in studies to distinguish between lakes in different areas, as a basis for observation systems and even as climate indicators. Numerical models can be used to calculate the temperature in the lake on the basis of the meteorological input at the surface. The simplest form is to solve only the surface temperature. The depth of the lake affects heat transfer, together with other morphological features such as the shape and size of the lake. The surrounding landscape also affects the formation of the meteorological fields over the lake and the energy input. For small lakes, shading by the shores has effects both over the lake and inside the water body, which imposes limitations on the one-dimensional approach. A two-layer model gives an approximation of the basic stratification in the lake. A turbulence model can simulate the vertical temperature profile in a more detailed way. If the shape of the temperature profile is very abrupt, vertical transfer is hindered, which has many important consequences for lake biology. The one-dimensional modelling approach was studied successfully by comparing a one-layer model, a two-layer model and a turbulence model. The turbulence model was applied to lakes with different sizes, shapes and locations. Lake models need data from the lakes for model adjustment. The use of meteorological input data on different scales was analysed, ranging from momentary turbulent changes over the lake to the use of synoptic data at three-hour intervals. Data covering about the past 100 years were used on the mesoscale, at a range of about 100 km, and climate change scenarios were used for future changes. Increasing air temperature typically increases the water temperature in the epilimnion and decreases the ice cover. Lake ice data were used for modelling different kinds of lakes; they were also analysed statistically in a global context. The results were also compared with the results of a hydrological watershed model and with data on seasonal development from very small lakes.
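
The two-layer idea can be written down in a few lines. The sketch below steps a simple epilimnion/hypolimnion heat budget forward in time; the bulk exchange coefficients, layer depths and idealized air-temperature forcing are hypothetical values chosen for illustration, not those used in the thesis.

    # Minimal sketch of a two-layer lake temperature model: the epilimnion is
    # relaxed toward air temperature through a bulk surface heat exchange, and
    # the hypolimnion is coupled to it by a weak exchange term. Convective
    # mixing during autumn cooling is not handled. All values are hypothetical.
    import numpy as np

    rho, cp = 1000.0, 4186.0          # water density (kg m-3), specific heat (J kg-1 K-1)
    h_epi, h_hypo = 5.0, 15.0         # layer thicknesses (m)
    K_surf = 20.0                     # bulk surface exchange coefficient (W m-2 K-1)
    k_exch = 1e-6                     # epilimnion-hypolimnion exchange velocity (m s-1)
    dt = 3600.0                       # time step (s)

    T_epi, T_hypo = 4.0, 4.0          # start from spring overturn temperature (deg C)
    for step in range(24 * 120):      # four months of hourly steps
        t_days = step * dt / 86400.0
        T_air = 5.0 + 15.0 * np.sin(2 * np.pi * t_days / 365.0)  # idealized seasonal air temperature
        Q_surf = K_surf * (T_air - T_epi)                        # net surface heat flux (W m-2)
        Q_mix = rho * cp * k_exch * (T_epi - T_hypo)             # inter-layer heat flux (W m-2)
        T_epi += dt * (Q_surf - Q_mix) / (rho * cp * h_epi)
        T_hypo += dt * Q_mix / (rho * cp * h_hypo)

    print(f"after 120 days: epilimnion {T_epi:.1f} C, hypolimnion {T_hypo:.1f} C")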

Relevance: 80.00%

Abstract:

This research was prompted by an interest in the atmospheric processes of hydrogen. The sources and sinks of hydrogen are important to know, particularly if hydrogen becomes more common as a replacement for fossil fuels in combustion. Hydrogen deposition velocities (vd) were estimated by applying chamber measurements, a radon tracer method and a two-dimensional model. These three approaches were compared with each other to discover the factors affecting the soil uptake rate. A static closed-chamber technique was introduced to determine hydrogen deposition velocity values in an urban park in Helsinki and at a rural site at Loppi. A three-day chamber campaign for soil uptake estimation was held at a remote site at Pallas in 2007 and 2008. The atmospheric mixing ratio of molecular hydrogen was also measured by a continuous method in Helsinki in 2007-2008 and at Pallas from 2006 onwards. The mean vd values measured in the chamber experiments in Helsinki and Loppi were between 0.0 and 0.7 mm s-1. The ranges of the results with the radon tracer method and the two-dimensional model were 0.13-0.93 mm s-1 and 0.12-0.61 mm s-1, respectively, in Helsinki. The vd values in the three-day campaign at Pallas were 0.06-0.52 mm s-1 (chamber) and 0.18-0.52 mm s-1 (radon tracer method and two-dimensional model). At Kumpula, the radon tracer method and the chamber measurements produced higher vd values than the two-dimensional model. The results of all three methods were close to each other between November and April, except for the chamber results from January to March, while the soil was frozen. The hydrogen deposition velocity values of all three methods were compared with one-week cumulative rain sums. Precipitation increases the soil moisture, which decreases the soil uptake rate. The measurements made in snow seasons showed that a thick snow layer also hinders gas diffusion, lowering the vd values. The H2 vd values were compared to the snow depth, and a decaying exponential fit was obtained as a result. During a prolonged drought in summer 2006, soil moisture values were lower than in the other summer months between 2005 and 2008; such conditions prevailed in summer 2006, when high chamber vd values were measured. The mixing ratio of molecular hydrogen has a seasonal variation. The lowest atmospheric mixing ratios were found in late autumn, when high deposition velocity values were still being measured. The carbon monoxide (CO) mixing ratio was also measured. Hydrogen and carbon monoxide are highly correlated in an urban environment, due to the emissions originating from traffic. After correction for the soil deposition of H2, the slope was 0.49±0.07 ppb (H2) / ppb (CO). Using the corrected hydrogen-to-carbon-monoxide ratio, the total hydrogen load emitted by Helsinki traffic in 2007 was 261 t (H2) a-1. Hydrogen, methane and carbon monoxide are connected with each other through the atmospheric methane oxidation process, in which formaldehyde is produced as an important intermediate. The photochemical degradation of formaldehyde produces hydrogen and carbon monoxide as end products. Examination of back-trajectories revealed long-range transport of carbon monoxide and methane. The trajectories can be grouped by applying cluster and source analysis methods, and thus natural and anthropogenic emission sources can be separated by analysing trajectory clusters.
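
For orientation, a deposition velocity can be extracted from a static chamber closure by assuming a first-order loss of H2 to the soil, C(t) = C0 * exp(-vd * (A/V) * t), and fitting the decay rate of the mixing ratio. The sketch below does this with hypothetical chamber dimensions and readings; it illustrates the principle and is not the data processing used in the thesis.

    # Minimal sketch: deposition velocity from the H2 decline in a static closed
    # chamber, assuming first-order uptake C(t) = C0 * exp(-vd * (A/V) * t).
    # The chamber height and the readings below are hypothetical.
    import numpy as np

    height = 0.30                                        # effective chamber height V/A (m)
    t = np.array([0.0, 150.0, 300.0, 450.0, 600.0])      # time since closure (s)
    c = np.array([530.0, 456.0, 393.0, 338.0, 291.0])    # H2 mixing ratio (ppb)

    # Slope of ln(C) versus t gives the first-order loss rate k (s-1); vd = k * V/A
    k = -np.polyfit(t, np.log(c), 1)[0]
    vd = k * height
    print(f"k = {k:.2e} s-1, vd = {vd * 1000:.2f} mm s-1")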

Relevance: 80.00%

Abstract:

The blood-brain barrier protects the brain from foreign substances in the blood circulation. In vivo and in vitro methods for studying the blood-brain barrier have been widely reported in the literature, but only a few computational models describing the pharmacokinetics of compounds in the brain have been presented. In this study, a data set of blood-brain barrier permeability coefficients determined with different in vitro and in vivo methods was collected from the literature. In addition, two pharmacokinetic computer models of the blood-brain barrier were built: a microdialysis model and an efflux model. The microdialysis model is a simple pharmacokinetic model consisting of two compartments (blood circulation and brain). It was used to simulate the concentrations of five compounds in rat brain and blood circulation on the basis of parameters determined in vivo. The model did not yield concentration curves that exactly corresponded to the in vivo situation, owing to simplifications in the model structure, such as the lack of a brain tissue compartment and of transporter protein kinetics. The efflux model has three compartments: the blood circulation, the endothelial cell compartment of the blood-brain barrier, and the brain. It was used in theoretical simulations to study the significance of an active efflux protein located on the luminal membrane of the blood-brain barrier, and of passive permeation, for compound concentrations in the brain extracellular fluid. The parameter examined was the steady-state ratio of unbound compound concentrations between the brain and the blood circulation (Kp,uu). The results showed the effect of the efflux protein on the concentrations in accordance with Michaelis-Menten kinetics. The efflux model is well suited for theoretical simulations, and active transporters can be added to the model. Theoretical simulations make it possible to combine results from in vitro and in vivo studies, and the contributing factors can be examined within a single simulation.
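
To illustrate the kind of theoretical simulation described above, the sketch below integrates a deliberately simplified blood/brain model with passive permeation across the barrier and a Michaelis-Menten efflux transporter, and reports the steady-state unbound concentration ratio Kp,uu. It is not the three-compartment efflux model of the thesis, and every parameter value is hypothetical.

    # Minimal sketch (not the thesis model): unbound brain ECF concentration with
    # passive permeation across the blood-brain barrier and a Michaelis-Menten
    # efflux transporter, integrated to steady state. All values are hypothetical.
    PS = 10.0             # passive permeability-surface area product (uL/min)
    Vmax, Km = 50.0, 2.0  # efflux transporter capacity (pmol/min) and affinity (uM)
    V_brain = 200.0       # apparent brain ECF volume (uL)
    C_plasma = 1.0        # constant unbound plasma concentration (uM)

    C_brain, dt = 0.0, 0.01        # initial brain concentration (uM), time step (min)
    for _ in range(200000):        # integrate long enough to reach steady state
        passive = PS * (C_plasma - C_brain)          # pmol/min into the brain
        efflux = Vmax * C_brain / (Km + C_brain)     # pmol/min out of the brain
        C_brain += dt * (passive - efflux) / V_brain

    print(f"Kp,uu = {C_brain / C_plasma:.2f}")       # a value below 1 indicates net efflux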

Relevance: 40.00%

Abstract:

The present challenge in drug discovery is to synthesize new compounds efficiently in minimal time. The trend is towards carefully designed and well-characterized compound libraries, because fast and effective synthesis methods easily produce thousands of new compounds. The need for rapid and reliable analysis methods increases at the same time. Quality assessment, including identification and purity tests, is highly important, since false (negative or positive) results, for instance in tests of biological activity or in the determination of early-ADME parameters in vitro (the pharmacokinetic study of drug absorption, distribution, metabolism, and excretion), must be avoided. This thesis summarizes the principles of classical planar chromatographic separation combined with ultraviolet (UV) and mass spectrometric (MS) detection, and introduces powerful, rapid, easy and low-cost alternative tools and techniques for the qualitative and quantitative analysis of small drug or drug-like molecules. High-performance thin-layer chromatography (HPTLC) was introduced and evaluated for fast semi-quantitative assessment of the purity of synthesis target compounds. The HPTLC methods were compared with liquid chromatography (LC) methods. Electrospray ionization mass spectrometry (ESI MS) and atmospheric pressure matrix-assisted laser desorption/ionization MS (AP MALDI MS) were used to identify and confirm the product zones on the plate. AP MALDI MS was rapid and easy to carry out directly on the plate without scraping. The preparative layer chromatography (PLC) method was used to isolate target compounds from crude synthesized products and to purify them for bioactivity and preliminary ADME tests. Ultra-thin-layer chromatography (UTLC) with AP MALDI MS and desorption electrospray ionization mass spectrometry (DESI MS) was introduced and studied for the first time. Because of the thinner adsorbent layer, the monolithic UTLC plate provided 10-100 times better sensitivity in MALDI analysis than HPTLC plates. Limits of detection (LODs) down to the low picomole range were demonstrated for UTLC AP MALDI and UTLC DESI MS. In a comparison of AP and vacuum MALDI MS detection for UTLC plates, desorption from the irregular surface of the plates with the combination of an external AP MALDI ion source and an ion trap instrument provided clearly less variation in mass accuracy than the vacuum MALDI time-of-flight (TOF) instrument. The performance of two-dimensional (2D) UTLC separation with AP MALDI MS was studied for the first time. The influence of the urine matrix on the separation and on the repeatability was evaluated with benzodiazepines as model substances in human urine. The applicability of 2D UTLC AP MALDI MS was demonstrated in the detection of metabolites in an authentic urine sample.

Relevance: 40.00%

Abstract:

Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, the capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs and methodologies for GC×GC and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples. The instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator, in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as the coolant. Two-step trapping was achieved by rotating the nozzle spraying the carbon dioxide with a motor. The fastest rotation and highest modulation frequency were achieved with a permanent magnetic motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as the coolant. With the modulators developed in this study, the narrowest peaks were 75 ms at the base. Three data analysis programs were developed, allowing basic, comparison and identification operations. The basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights and volumes. The overlaying feature in the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry. In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well for the removal of aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes; however, GC×GC with time-of-flight mass spectrometry was needed in the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual searching and analyst knowledge are invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, as long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the separation of PAHs is not significantly disturbed by the amount of matrix, and that quantitativeness suffers only slightly in the presence of matrix when the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily outweigh the minor drawbacks of the technique. The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.
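
As an illustration of what the peak-volume determination mentioned above involves, the sketch below folds a modulated one-dimensional detector trace into a two-dimensional array and integrates a baseline-corrected volume over a retention window. The trace, modulation period and window are synthetic stand-ins, not data from the instruments developed in this work.

    # Minimal sketch: folding a modulated GCxGC detector trace into a 2D array and
    # integrating a peak volume over a rectangular retention window. The trace,
    # modulation period and peak position are synthetic stand-ins.
    import numpy as np

    acq_rate = 100                     # detector sampling rate (Hz)
    p_mod = 4.0                        # modulation period (s)
    pts_per_mod = int(acq_rate * p_mod)

    rng = np.random.default_rng(1)
    signal = rng.normal(0.0, 0.5, pts_per_mod * 300)          # noisy 1D detector trace

    # Fold the trace: rows = second-dimension retention time, columns = modulation number
    plot2d = signal[: (len(signal) // pts_per_mod) * pts_per_mod].reshape(-1, pts_per_mod).T

    # Add one synthetic 2D Gaussian peak so the window has something to integrate
    r = np.arange(plot2d.shape[0])[:, None]
    c = np.arange(plot2d.shape[1])[None, :]
    plot2d += 10.0 * np.exp(-((r - 150) ** 2) / (2 * 8**2) - ((c - 145) ** 2) / (2 * 2**2))

    # Peak volume: baseline-corrected sum of the signal inside the retention window
    window = plot2d[120:180, 140:150]
    baseline = np.median(plot2d)                              # crude baseline estimate
    volume = float(np.sum(window - baseline))
    print(f"peak volume (arbitrary units): {volume:.1f}")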

Relevance: 40.00%

Abstract:

We present a search for a Higgs boson decaying to two W bosons in ppbar collisions at a center-of-mass energy of sqrt(s) = 1.96 TeV. The data sample corresponds to an integrated luminosity of 3.0 fb-1 collected with the CDF II detector. We find no evidence for the production of a Higgs boson with mass between 110 and 200 GeV/c^2, and determine upper limits on the production cross section. For a mass of 160 GeV/c^2, where the analysis is most sensitive, the observed (expected) limit is 0.7 pb (0.9 pb) at the 95% Bayesian credibility level, which is 1.7 (2.2) times the standard model cross section.
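
For orientation, the quoted ratio can be inverted to recover the standard model cross section assumed in the analysis; this is plain arithmetic on the numbers above, not an additional result:

    \sigma_{95}^{\text{obs}} = 1.7 \times \sigma_{\text{SM}}
    \;\Rightarrow\;
    \sigma_{\text{SM}}(m_H = 160\ \text{GeV}/c^2) \approx \frac{0.7\ \text{pb}}{1.7} \approx 0.4\ \text{pb}.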

Relevance: 40.00%

Abstract:

We present a search for standard model (SM) Higgs boson production using ppbar collision data at sqrt(s) = 1.96 TeV, collected with the CDF II detector and corresponding to an integrated luminosity of 4.8 fb-1. We search for Higgs bosons produced in all processes with a significant production rate and decaying to two W bosons. We find no evidence for SM Higgs boson production and place upper limits at the 95% confidence level on the SM production cross section (sigma(H)) for values of the Higgs boson mass (m_H) in the range from 110 to 200 GeV. These limits are the most stringent for m_H > 130 GeV and are a factor of 1.29 above the predicted value of sigma(H) for m_H = 165 GeV.

Relevance: 40.00%

Abstract:

Using data from 2.9 fb-1 of integrated luminosity collected with the CDF II detector at the Tevatron, we search for resonances decaying into a pair of on-shell gauge bosons, WW or WZ, where one W decays into an electron and a neutrino and the other boson decays into two jets. We observe no statistically significant excess above the expected standard model background, and we set cross section limits at the 95% confidence level on G* (Randall-Sundrum graviton), Z′, and W′ bosons. By comparing these limits to theoretical cross sections, mass exclusion regions for the three particles are derived. The mass exclusion regions for Z′ and W′ are further evaluated as a function of their gauge coupling strength.

Relevance: 40.00%

Abstract:

Using data from 2.9/fb of integrated luminosity collected with the CDF II detector at the Tevatron, we search for resonances decaying into a pair of on-shell gauge bosons, WW or WZ, where one W decays into an electron and a neutrino and the other boson decays into two jets. We observe no statistically significant excess above the expected standard model background, and we set cross section limits at the 95% confidence level on G* (Randall-Sundrum graviton), Z', and W' bosons. By comparing these limits to theoretical cross sections, mass exclusion regions for the three particles are derived. The mass exclusion regions for Z' and W' are further evaluated as a function of their gauge coupling strength.

Relevance: 40.00%

Abstract:

We present an analysis of the mass of the X(3872), reconstructed via its decay to J/psi pi+ pi-, using 2.4 fb^-1 of integrated luminosity from ppbar collisions at sqrt(s) = 1.96 TeV collected with the CDF II detector at the Fermilab Tevatron. The possible existence of two nearby mass states is investigated. Within the limits of our experimental resolution the data are consistent with a single state, and, having no evidence for two states, we set upper limits on the mass difference between two hypothetical states for different assumed ratios of contributions to the observed peak. For equal contributions, the 95% confidence level upper limit on the mass difference is 3.6 MeV/c^2. Under the single-state model the X(3872) mass is measured to be 3871.61 +- 0.16 (stat) +- 0.19 (syst) MeV/c^2, which is the most precise determination to date.
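
Adding the quoted statistical and systematic uncertainties in quadrature gives the overall precision of the mass measurement; this is a derived number, not an additional result of the analysis:

    \sigma_{\text{tot}} = \sqrt{0.16^2 + 0.19^2}\ \text{MeV}/c^2 \approx 0.25\ \text{MeV}/c^2,
    \qquad m_{X(3872)} \approx 3871.61 \pm 0.25\ \text{MeV}/c^2 .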