26 results for Mass-consistent model

in Helda - Digital Repository of the University of Helsinki


Relevance: 40.00%

Abstract:

We present an analysis of the mass of the X(3872) reconstructed via its decay to J/psi pi+ pi-, using 2.4 fb^-1 of integrated luminosity from ppbar collisions at sqrt(s) = 1.96 TeV collected with the CDF II detector at the Fermilab Tevatron. The possible existence of two nearby mass states is investigated. Within the limits of our experimental resolution, the data are consistent with a single state; having no evidence for two states, we set upper limits on the mass difference between two hypothetical states for different assumed ratios of contributions to the observed peak. For equal contributions, the 95% confidence level upper limit on the mass difference is 3.6 MeV/c^2. Under the single-state model, the X(3872) mass is measured to be 3871.61 +- 0.16 (stat) +- 0.19 (syst) MeV/c^2, which is the most precise determination to date.

Relevance: 40.00%

Abstract:

We present a search for standard model Higgs boson production in association with a W boson in proton-antiproton collisions at a center-of-mass energy of 1.96 TeV. The search employs data collected with the CDF II detector that correspond to an integrated luminosity of approximately 1.9 inverse fb. We select events consistent with a signature of a single charged lepton, missing transverse energy, and two jets. Jets corresponding to bottom quarks are identified with a secondary vertex tagging method, a jet probability tagging method, and a neural network filter. We use kinematic information in an artificial neural network to improve discrimination between signal and background compared to previous analyses. The observed number of events and the neural network output distributions are consistent with the standard model background expectations, and we set 95% confidence level upper limits on the production cross section times branching fraction ranging from 1.2 to 1.1 pb, or 7.5 to 102 times the standard model expectation, for Higgs boson masses from 110 to 150 GeV/c^2, respectively.

Relevance: 30.00%

Abstract:

The present challenge in drug discovery is to synthesize new compounds efficiently and in minimal time. The trend is towards carefully designed and well-characterized compound libraries, because fast and effective synthesis methods easily produce thousands of new compounds. The need for rapid and reliable analysis methods increases at the same time. Quality assessment, including identification and purity tests, is highly important, since false (negative or positive) results, for instance in tests of biological activity or in the determination of early ADME parameters in vitro (the pharmacokinetic study of drug absorption, distribution, metabolism, and excretion), must be avoided. This thesis summarizes the principles of classical planar chromatographic separation combined with ultraviolet (UV) and mass spectrometric (MS) detection, and introduces powerful, rapid, easy, and low-cost alternative tools and techniques for the qualitative and quantitative analysis of small drug or drug-like molecules. High-performance thin-layer chromatography (HPTLC) was introduced and evaluated for fast semi-quantitative assessment of the purity of synthesis target compounds. The HPTLC methods were compared with liquid chromatography (LC) methods. Electrospray ionization mass spectrometry (ESI MS) and atmospheric pressure matrix-assisted laser desorption/ionization MS (AP MALDI MS) were used to identify and confirm the product zones on the plate. AP MALDI MS was rapid and easy to carry out directly on the plate without scraping. The preparative layer chromatography (PLC) method was used to isolate target compounds from crude synthesis products and to purify them for bioactivity and preliminary ADME tests. Ultra-thin-layer chromatography (UTLC) with AP MALDI MS and desorption electrospray ionization mass spectrometry (DESI MS) was introduced and studied for the first time. Because of the thinner adsorbent layer, the monolithic UTLC plate provided 10-100 times better sensitivity in MALDI analysis than did HPTLC plates. Limits of detection (LODs) down to the low picomole range were demonstrated for UTLC AP MALDI and UTLC DESI MS. In a comparison of AP and vacuum MALDI MS detection for UTLC plates, desorption from the irregular surface of the plates with the combination of an external AP MALDI ion source and an ion trap instrument provided clearly less variation in mass accuracy than the vacuum MALDI time-of-flight (TOF) instrument. The performance of two-dimensional (2D) UTLC separation with the AP MALDI MS method was studied for the first time. The influence of the urine matrix on the separation and on the repeatability was evaluated with benzodiazepines as model substances in human urine. The applicability of 2D UTLC AP MALDI MS was demonstrated in the detection of metabolites in an authentic urine sample.

Relevance: 30.00%

Abstract:

Poor pharmacokinetics is one of the reasons for the withdrawal of drug candidates from clinical trials. There is an urgent need to investigate in vitro ADME (absorption, distribution, metabolism and excretion) properties and to recognise unsuitable drug candidates as early as possible in the drug development process. The current throughput of in vitro ADME profiling is insufficient, because effective new synthesis techniques, such as in silico drug design and combinatorial synthesis, have vastly increased the number of drug candidates. Assay technologies that can handle larger sets of compounds than is currently feasible are critically needed. The first part of this work focused on the evaluation of the cocktail strategy in studies of drug permeability and metabolic stability. N-in-one liquid chromatography-tandem mass spectrometry (LC/MS/MS) methods were developed and validated for the multiple-component analysis of samples from cocktail experiments. Together, cocktail dosing and LC/MS/MS were found to form an effective tool for increasing throughput. First, cocktail dosing, i.e. the use of a mixture of many test compounds, was applied in permeability experiments with the Caco-2 cell culture, a widely used in vitro model of small intestinal absorption. A cocktail of 7-10 reference compounds was successfully evaluated for standardization and routine testing of the performance of Caco-2 cell cultures. Secondly, the cocktail strategy was used in metabolic stability studies of drugs with UGT isoenzymes, which are among the most important phase II drug-metabolizing enzymes. The study confirmed that the determination of intrinsic clearance (Clint) with a cocktail of seven substrates is possible. The LC/MS/MS methods that were developed were fast and reliable for the quantitative analysis of a heterogeneous set of drugs from Caco-2 permeability experiments and of the set of glucuronides from in vitro stability experiments. The performance of a new ionization technique, atmospheric pressure photoionization (APPI), was evaluated through comparison with electrospray ionization (ESI), with both techniques used for the analysis of Caco-2 samples. Like ESI, APPI proved to be a reliable technique for the analysis of Caco-2 samples, and it was even more flexible than ESI because of its wider linear dynamic range. The second part of the experimental study focused on metabolite profiling. Different mass spectrometric instruments and commercially available software tools were investigated for profiling metabolites in urine and hepatocyte samples. All the instruments tested (triple quadrupole, quadrupole time-of-flight, ion trap) exhibited both good and bad features in searching for and identifying expected and unexpected metabolites. Although current profiling software is helpful, it is still insufficient; a time-consuming, largely manual approach is therefore still required for metabolite profiling from complex biological matrices.
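
The intrinsic clearance (Clint) values mentioned above are commonly obtained from substrate-depletion kinetics; the minimal Python sketch below illustrates that calculation under the standard half-life approach. The function name, the example data, and the scaling by incubation volume per milligram of protein are illustrative assumptions, not the procedure used in the thesis.

    import numpy as np

    def intrinsic_clearance(times_min, fraction_remaining, incubation_volume_ul, protein_mg):
        """Estimate in vitro intrinsic clearance (Clint) from substrate depletion.

        Fits ln(fraction remaining) versus time to obtain the first-order
        depletion rate constant k, then scales by incubation volume per mg
        of protein: Clint = k * V / protein (uL/min/mg).
        """
        ln_fraction = np.log(np.asarray(fraction_remaining, dtype=float))
        k = -np.polyfit(times_min, ln_fraction, 1)[0]      # rate constant, 1/min
        half_life = np.log(2) / k                          # depletion half-life, min
        clint = k * incubation_volume_ul / protein_mg      # uL/min/mg protein
        return half_life, clint

    # One substrate of a hypothetical seven-compound cocktail, quantified by LC/MS/MS
    t = [0, 10, 20, 30, 45, 60]                            # incubation time, min
    c = [1.00, 0.78, 0.61, 0.47, 0.33, 0.22]               # fraction of parent remaining
    print(intrinsic_clearance(t, c, incubation_volume_ul=500, protein_mg=0.25))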

Relevance: 30.00%

Abstract:

A population-based early detection program for breast cancer has been in progress in Finland since 1987. According to the regulations in force during the study period 1987-2001, free-of-charge mammography screening was offered every second year to women aged 50-59 years. Recently, it was decided to extend the screening service to the age group 50-69. However, the scope of the program is still frequently discussed in public, and information about the potential impacts of changes in mass-screening practice on the future breast cancer burden is required. The aim of this doctoral thesis is to present methodologies for taking mass-screening invitation information into account in breast cancer burden predictions, and to present alternative breast cancer incidence and mortality predictions up to 2012 based on scenarios for the future screening policy. The focus of this work is not on assessing the absolute efficacy but the effectiveness of mass-screening and, by utilizing the data on invitations, on showing the estimated impacts of changes in an existing screening program on short-term predictions. The breast cancer mortality predictions are calculated using a model that combines incidence, cause-specific survival and other-cause survival on the individual level. The screening invitation data are incorporated into the modeling of breast cancer incidence and survival by dividing the program into separate components (first and subsequent rounds and the years within them, breaks, and the post-screening period) and defining a variable that gives the component of the screening program. Incidence is modeled using a Poisson regression approach, and breast cancer survival by applying a parametric mixture cure model, in which the patient population is allowed to be a combination of cured and uncured patients. The patients' risk of dying from causes other than breast cancer is allowed to differ from that of a corresponding general population group and to depend on age and follow-up time. As a result, the effects of the separate components of the screening program on incidence, on the proportion of cured patients, and on the survival of the uncured are quantified. According to the predictions, the impacts of policy changes, such as extending the program from the age group 50-59 to 50-69, are clearly visible in incidence, while the effects on mortality in the age group 40-74 are minor. Extending the screening service would increase the incidence of localized breast cancer but decrease the rates of non-localized breast cancer. There were no major differences between the mortality predictions yielded by alternative future scenarios of the screening policy: any policy change would yield at most a 3.0% reduction in overall breast cancer mortality compared with continuing the current practice in the near future.
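
A generic form of the parametric mixture cure model referred to above (the specific covariate structure and baseline distribution used in the thesis are not reproduced here; the symbols are illustrative) writes the survival of the patient population as a mixture of a cured fraction and the survival of the uncured:

$S(t \mid \mathbf{x}) = \pi(\mathbf{x}) + \bigl(1 - \pi(\mathbf{x})\bigr)\, S_u(t \mid \mathbf{x})$,

where, for example, $\pi(\mathbf{x}) = \mathrm{logit}^{-1}(\mathbf{x}^{\top}\boldsymbol{\beta})$ is the cured proportion, $S_u$ is a parametric (e.g. Weibull) survival function for the uncured, and the screening-program components enter as covariates in $\mathbf{x}$. The incidence part can correspondingly be read as a Poisson regression of case counts with person-years at risk as an offset, $\log \mu = \log(\text{person-years}) + \mathbf{x}^{\top}\boldsymbol{\gamma}$.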

Relevance: 30.00%

Abstract:

Buffer zones are vegetated strip edges of agricultural fields along watercourses. As linear habitats in agricultural ecosystems, buffer strips dominate and play a leading ecological role in many areas. This thesis focuses on the plant species diversity of the buffer zones in a Finnish agricultural landscape. The main objective of the present study is to identify the determinants of floral species diversity in arable buffer zones from the local to the regional level. The study was conducted in a watershed area of a farmland landscape of southern Finland. The study area, Lepsämänjoki, is situated in the Nurmijärvi commune 30 km north of Helsinki, Finland. The biotope mosaics were mapped in GIS. A total of 59 buffer zones were surveyed, of which 29 buffer strips were also sampled by plot. Firstly, two diversity components (species richness and evenness) were investigated to determine whether the relationship between the two is equal and predictable. I found no correlation between species richness and evenness. The relationship between richness and evenness is unpredictable in a small-scale, human-shaped ecosystem. Ordination and correlation analyses show that richness and evenness may result from different ecological processes and thus should be considered separately. Species richness correlated negatively with phosphorus content, and species evenness correlated negatively with the ratio of organic carbon to total nitrogen in the soil. The lack of a consistent pattern in the relationship between these two components may be due to site-specific variation in resource utilization by plant species. Within-habitat configuration variables (width, length, and area) were investigated to determine which is most effective for predicting species richness. More species per unit-area increment could be obtained by widening the buffer strip than by lengthening it. The width of the strips is an effective determinant of plant species richness. The increase in species diversity with increasing width of the buffer strips may be due to cross-sectional habitat gradients within the linear patches. This result can serve as a reference for policy makers and has application value in agricultural management. In the framework of metacommunity theory, I found that both the mass effect (connectivity) and species sorting (resource heterogeneity) were likely to explain species composition and diversity on local and regional scales. The local and regional processes were interactively dominated by the degree to which dispersal perturbs local communities. In the regions with low and intermediate connectivity, species sorting was of primary importance in explaining species diversity, while the mass effect surpassed species sorting in the highly connected region. Increasing connectivity in communities containing high habitat heterogeneity can lead to the homogenization of local communities and, consequently, to lower regional diversity, while local species richness was unrelated to habitat connectivity. Of all species found, Anthriscus sylvestris, Phalaris arundinacea, and Phleum pratense responded significantly to connectivity and showed high abundance in the highly connected region. We suggest that these species may mark the shift from local resources to regional connectivity as the dominant force shaping community structure. At the level of landscape context, the different responses of local species richness and evenness to landscape context were investigated. Seven landscape structural parameters served to indicate landscape context on five scales. On all but the smallest scale, the Shannon-Wiener diversity of land covers (H') correlated positively with local richness. H' showed the highest correlation coefficient with species richness on the second-largest scale. The edge density of arable fields was the only predictor that correlated with species evenness on all scales, and it showed the highest predictive power on the second-smallest scale. The different predictive power of the factors on different scales showed a scale-dependent relationship between landscape context and local plant species diversity, and indicated that different ecological processes determine species richness and evenness. The local richness of species depends on a regional process on large scales, which may relate to the regional species pool, while species evenness depends on a fine- or coarse-grained farming system, which may relate to the patch quality of the habitats of field edges near the buffer strips. My results suggest some guidelines for conserving species diversity in the agricultural ecosystem. To maintain a high level of species diversity in the strips, a high level of phosphorus in the strip soil should be avoided. Widening the strips is the most effective means of improving species richness. Habitat connectivity is not always favorable to species diversity, because increasing connectivity in communities containing high habitat heterogeneity can lead to the homogenization of local communities (beta diversity) and, consequently, to lower regional diversity. Overall, a synthesis of local and regional factors emerged as the model that best explains variation in plant species diversity. The studies also suggest that the effects of the determinants on species diversity have a complex relationship with scale.
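
For reference, the Shannon-Wiener diversity of land covers used above as a landscape predictor is, in its standard form,

$H' = -\sum_{i=1}^{S} p_i \ln p_i$,

where $p_i$ is the proportion of the landscape (within the scale considered) occupied by land-cover class $i$ and $S$ is the number of classes. For example, four equally common land-cover classes give $H' = \ln 4 \approx 1.39$, while a landscape dominated by a single class gives $H'$ close to zero.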

Relevance: 30.00%

Abstract:

Numerical models used for atmospheric research, weather prediction and climate simulation describe the state of the atmosphere over the heterogeneous surface of the Earth. Several fundamental properties of atmospheric models depend on orography, i.e. on the average elevation of land over a model area. The higher the model's resolution, the more directly the details of orography influence the simulated atmospheric processes. This sets new requirements for the accuracy of the model formulations with respect to the spatially varying orography. Orography is always averaged, representing the surface elevation within the horizontal resolution of the model. In order to remove the smallest scales and steepest slopes, the continuous spectrum of orography is normally filtered (truncated) even further, typically beyond a few grid lengths of the model. This means that in numerical weather prediction (NWP) models there will always be subgrid-scale orography effects, which cannot be explicitly resolved by numerical integration of the basic equations but require parametrization. Within the subgrid scale, different physical processes contribute at different scales. The parametrized processes interact with the resolved-scale processes and with each other. This study contributes to the building of a consistent, scale-dependent system of orography-related parametrizations for the High Resolution Limited Area Model (HIRLAM). The system comprises schemes for handling the effects of mesoscale (MSO) and small-scale (SSO) orography on the simulated flow and a scheme for the orographic effects on surface-level radiation fluxes. The representation of orography, the scale-dependencies of the simulated processes, and the interactions between the parametrized and resolved processes are discussed. From high-resolution digital elevation data, orographic parameters are derived for both the momentum and the radiation flux parametrizations. Tools for diagnostics and validation are developed and presented. The parametrization schemes applied, developed and validated in this study are currently being implemented into the reference version of HIRLAM.
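
As an illustration of how grid-box orographic parameters can be derived from a high-resolution digital elevation model, the Python sketch below aggregates a DEM into coarse grid boxes. The chosen parameter set (mean elevation, standard deviation of subgrid elevations, mean slope) and all names are illustrative assumptions and do not reproduce the actual HIRLAM derivation.

    import numpy as np

    def subgrid_orography(dem, cells_per_box, cell_size_m):
        """Aggregate a high-resolution DEM (2-D array of elevations in metres)
        into coarse grid boxes of cells_per_box x cells_per_box DEM cells.

        Returns, per grid box: mean elevation (the resolved orography), the
        standard deviation of subgrid elevations, and the mean slope magnitude.
        """
        n = cells_per_box
        ny, nx = dem.shape[0] // n, dem.shape[1] // n
        trimmed = dem[:ny * n, :nx * n]
        blocks = trimmed.reshape(ny, n, nx, n)

        mean_elev = blocks.mean(axis=(1, 3))          # resolved orography field
        sigma = blocks.std(axis=(1, 3))               # subgrid-scale elevation variability
        gy, gx = np.gradient(trimmed, cell_size_m)    # local slopes (dimensionless)
        slope = np.hypot(gy, gx).reshape(ny, n, nx, n).mean(axis=(1, 3))
        return mean_elev, sigma, slope

    # Example: a synthetic 1 km DEM aggregated into 10 km grid boxes
    dem = np.random.default_rng(0).normal(300.0, 50.0, size=(120, 120))
    print([a.shape for a in subgrid_orography(dem, cells_per_box=10, cell_size_m=1000.0)])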

Relevance: 30.00%

Abstract:

Accelerator mass spectrometry (AMS) is an ultrasensitive technique for measuring the concentration of a single isotope. The electric and magnetic fields of an electrostatic accelerator system are used to filter out other isotopes from the ion beam. The high ion velocity means that molecules can be destroyed and thus removed from the measurement background. As a result, concentrations down to one atom in 10^16 atoms are measurable. This thesis describes the construction of the new AMS system in the Accelerator Laboratory of the University of Helsinki. The system is described in detail along with the relevant ion optics. System performance and some of the 14C measurements made with the system are described. In the second part of the thesis, a novel statistical model for the analysis of AMS data is presented. Bayesian methods are used in order to make the best use of the available information. In the new model, instrumental drift is modelled with a continuous first-order autoregressive process. This enables rigorous normalization to standards measured at different times. The Poisson statistical nature of a 14C measurement is also taken into account properly, so that the uncertainty estimates are much more stable. It is shown that, overall, the new model improves both the accuracy and the precision of AMS measurements. In particular, the results can be improved for samples with very low 14C concentrations or samples measured only a few times.
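
A schematic rendering of a model with this structure (the notation and parametrization are illustrative assumptions, not taken from the thesis) treats each measured count as Poisson, with a rate combining the unknown concentration and a drifting instrumental efficiency:

$N_k \sim \mathrm{Poisson}(\mu_k), \qquad \log \mu_k = \log E_k + \log c_{s(k)} + d(t_k)$,

where $E_k$ is a known exposure term for measurement $k$, $c_{s(k)}$ is the 14C concentration of the sample measured, and the drift $d(t)$ follows a continuous-time first-order autoregressive (Ornstein-Uhlenbeck type) process,

$d(t_{k+1}) \mid d(t_k) \sim \mathcal{N}\bigl(e^{-\phi\,\Delta t}\, d(t_k),\ \sigma^2 (1 - e^{-2\phi\,\Delta t})\bigr), \qquad \Delta t = t_{k+1} - t_k$.

Because standards and unknowns share the same drift process, normalization to standards measured at different times happens within a single posterior rather than by ad hoc interpolation.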

Relevance: 30.00%

Abstract:

"We report on a search for the standard-model Higgs boson in pp collisions at s=1.96 TeV using an integrated luminosity of 2.0 fb(-1). We look for production of the Higgs boson decaying to a pair of bottom quarks in association with a vector boson V (W or Z) decaying to quarks, resulting in a four-jet final state. Two of the jets are required to have secondary vertices consistent with B-hadron decays. We set the first 95% confidence level upper limit on the VH production cross section with V(-> qq/qq('))H(-> bb) decay for Higgs boson masses of 100-150 GeV/c(2) using data from run II at the Fermilab Tevatron. For m(H)=120 GeV/c(2), we exclude cross sections larger than 38 times the standard-model prediction."

Relevance: 30.00%

Abstract:

We combine searches by the CDF and D0 collaborations for a Higgs boson decaying to W+W-. The data correspond to integrated luminosities of 4.8 fb^-1 (CDF) and 5.4 fb^-1 (D0) of ppbar collisions at sqrt{s}=1.96 TeV at the Fermilab Tevatron collider. No excess is observed above the background expectation, and the resulting limits on Higgs boson production exclude a standard-model Higgs boson in the mass range 162-166 GeV at the 95% C.L.

Relevance: 30.00%

Abstract:

We combine results from searches by the CDF and D0 collaborations for a standard model Higgs boson (H) in the process gg->H->W+W- in ppbar collisions at the Fermilab Tevatron Collider at sqrt{s}=1.96 TeV. With 4.8 fb^-1 of integrated luminosity analyzed at CDF and 5.4 fb^-1 at D0, the 95% confidence level upper limit on sigma(gg->H) x B(H->W+W-) is 1.75 pb at m_H=120 GeV, 0.38 pb at m_H=165 GeV, and 0.83 pb at m_H=200 GeV. Assuming the presence of a fourth sequential generation of fermions with large masses, we exclude at the 95% confidence level a standard-model-like Higgs boson with a mass between 131 and 204 GeV.

Relevance: 30.00%

Abstract:

We present a search for standard model (SM) Higgs boson production using ppbar collision data at sqrt(s) = 1.96 TeV, collected with the CDF II detector and corresponding to an integrated luminosity of 4.8 fb^-1. We search for Higgs bosons produced in all processes with a significant production rate and decaying to two W bosons. We find no evidence for SM Higgs boson production and place upper limits at the 95% confidence level on the SM production cross section (sigma(H)) for values of the Higgs boson mass (m_H) in the range from 110 to 200 GeV. These limits are the most stringent for m_H > 130 GeV and are a factor of 1.29 above the predicted value of sigma(H) for m_H = 165 GeV.

Relevance: 30.00%

Abstract:

We report measurements of the polarization of W bosons from top-quark decays using 2.7 fb^-1 of ppbar collisions collected by the CDF II detector. Assuming a top-quark mass of 175 GeV/c^2, three measurements are performed. A simultaneous measurement of the fraction of longitudinal (f_0) and right-handed (f_+) W bosons yields the model-independent results f_0 = 0.88 \pm 0.11 (stat) \pm 0.06 (syst) and f_+ = -0.15 \pm 0.07 (stat) \pm 0.06 (syst) with a correlation coefficient of -0.59. A measurement of f_0 (f_+) constraining f_+ (f_0) to its standard model value of 0.0 (0.7) yields f_0 = 0.70 \pm 0.07 (stat) \pm 0.04 (syst) (f_+ = -0.01 \pm 0.02 (stat) \pm 0.05 (syst)). All these results are consistent with standard model expectations.
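
For context, the fractions $f_0$, $f_+$ and the remaining left-handed fraction $f_-$ enter the distribution of the charged-lepton helicity angle $\theta^*$ in the W-boson rest frame as

$\frac{1}{\Gamma}\frac{d\Gamma}{d\cos\theta^*} = \frac{3}{4} f_0 \sin^2\theta^* + \frac{3}{8} f_- (1 - \cos\theta^*)^2 + \frac{3}{8} f_+ (1 + \cos\theta^*)^2, \qquad f_0 + f_- + f_+ = 1$,

which is why fixing one fraction to its standard model value (about 0.7 for $f_0$, essentially zero for $f_+$) sharpens the measurement of the other.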

Relevance: 30.00%

Abstract:

We report a search for single top quark production with the CDF II detector using 2.1 fb^-1 of integrated luminosity of pbar p collisions at sqrt{s}=1.96 TeV. The selected data consist of events characterized by a large energy imbalance in the transverse plane and hadronic jets, and no identified electrons or muons, so the sample is enriched in W -> tau nu decays. In order to suppress backgrounds, additional kinematic and topological requirements are imposed through a neural network, and at least one of the jets must be identified as a b-quark jet. We measure an excess of signal-like events in agreement with the standard model prediction, but inconsistent with a model without single top quark production by 2.1 standard deviations (sigma), with a median expected sensitivity of 1.4 sigma. Assuming a top quark mass of 175 GeV/c^2 and ascribing the excess to single top quark production, the cross section is measured to be 4.9 +2.5 -2.2 (stat+syst) pb, consistent with measurements performed in independent datasets and with the standard model prediction.

Relevance: 30.00%

Abstract:

We present a search for the technicolor particles $\rho_{T}$ and $\pi_{T}$ in the process $p\bar{p} \to \rho_{T} \to W\pi_{T}$ at a center-of-mass energy of $\sqrt{s}=1.96~\mathrm{TeV}$. The search uses a data sample corresponding to approximately $1.9~\mathrm{fb}^{-1}$ of integrated luminosity accumulated by the CDF II detector at the Fermilab Tevatron. The event signature we consider is $W\to \ell\nu$ and $\pi_{T} \to b\bar{b}$, $b\bar{c}$ or $b\bar{u}$, depending on the $\pi_{T}$ charge. We select events with a single high-$p_T$ electron or muon, large missing transverse energy, and two jets. Jets corresponding to bottom quarks are identified with multiple $b$-tagging algorithms. The observed number of events and the invariant mass distributions are consistent with the standard model background expectations, and we exclude a region at 95% confidence level in the $\rho_T$-$\pi_T$ mass plane. As a result, a large fraction of the region $m(\rho_T) = 180\text{--}250~\mathrm{GeV}/c^2$ and $m(\pi_T) = 95\text{--}145~\mathrm{GeV}/c^2$ is excluded.