867 results for full Bayes (FB) hierarchical
Abstract:
Purpose Recovery is a critical link between acute reactions to work stressors and the development of long-term health impairments. Even though recovery is particularly necessary when recovery opportunities during work are insufficient, research on recovery during weekends is still scarce. To fill this gap, we tested whether the inability to psychologically detach from work mediates the effect of social stressors at work on sleep quality on Sunday night. Design/Methodology Sixty full-time employees participated in the study. Daily assessment included diaries on psychological detachment and ambulatory actigraphy to assess psychophysiological indicators of sleep quality. Results Hierarchical regression analyses revealed that social stressors at work were related to psychological detachment and to several sleep-quality parameters on Sunday night. Furthermore, psychological detachment from work mediated the effect of social stressors at work on sleep quality. Limitations Methodological considerations regarding the use of actigraphy data should be taken into account. Research/Practical Implications Our results show that social stressors at work may lower resources just before people start into the new working week. Originality/Value This is the first study to show that social stressors at work are an antecedent of psychological detachment on Sunday evening and of objective sleep quality on Sunday.
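The simple-mediation logic described in this abstract (stressors → detachment → sleep quality) can be sketched with simulated data. The variable names, effect sizes, and OLS estimator below are illustrative assumptions, not the study's data or analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative simple-mediation sketch (simulated data, not the study's):
# X = social stressors, M = lack of psychological detachment,
# Y = sleep impairment. Effect sizes are hypothetical assumptions.
n = 60
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # a-path: X -> M
y = 0.6 * m + 0.0 * x + rng.normal(size=n)   # b-path: M -> Y, no direct effect

def slopes(pred, out):
    # OLS slopes of `out` on the columns of `pred`, with an intercept
    X = np.column_stack([np.ones(len(out)), pred])
    return np.linalg.lstsq(X, out, rcond=None)[0][1:]

a = slopes(x, m)[0]                          # X -> M
b = slopes(np.column_stack([m, x]), y)[0]    # M -> Y, controlling for X
print("indirect effect a*b =", a * b)
```

A nonzero product a*b is the usual estimate of the indirect (mediated) effect; formal inference would add a bootstrap or Sobel-type test.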
Abstract:
Previous research has shown that motion imagery draws on the same neural circuits that are involved in perception of motion, thus leading to a motion aftereffect (Winawer et al., 2010). Imagined stimuli can induce a shift in participants’ psychometric functions similar to that produced by neural adaptation to a perceived stimulus. However, these studies have been criticized on the grounds that they fail to exclude the possibility that the subjects might have guessed the experimental hypothesis and behaved accordingly (Morgan et al., 2012). In particular, the authors claim that participants can adopt arbitrary response criteria, which results in changes of the central tendency μ of psychometric curves similar to those shown by Winawer et al. (2010).
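The shift in μ at issue here is the location parameter of a psychometric curve. A minimal sketch of how such a shift is estimated, with simulated data and a cumulative-Gaussian model as assumptions (this is not the stimulus set or analysis of either cited study):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def psychometric(x, mu, sigma):
    # Cumulative-Gaussian psychometric function: P(response = "up")
    return norm.cdf(x, loc=mu, scale=sigma)

# Simulate a 0.5-unit shift in the point of subjective equality mu,
# as an adaptation (or imagery-induced) aftereffect would produce.
levels = np.linspace(-3, 3, 13)   # stimulus levels
n = 40                            # trials per level
estimates = {}
for true_mu in (0.0, 0.5):
    p_up = psychometric(levels, true_mu, 1.0)
    k = rng.binomial(n, p_up)     # simulated "up" responses per level
    (mu_hat, sigma_hat), _ = curve_fit(psychometric, levels, k / n,
                                       p0=[0.0, 1.0])
    estimates[true_mu] = mu_hat
print(estimates)
```

Morgan et al.'s criterion objection is precisely that a deliberate change in response criterion moves the fitted μ in the same way a genuine perceptual shift would.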
Abstract:
The objective of this study is to test the hypothesis that partial agonists produce less desensitization because they generate less of the active conformation (R*) of the $\beta_2$-adrenergic receptor ($\beta$AR) and in turn cause less $\beta$AR phosphorylation by the beta-adrenergic receptor kinase ($\beta$ARK) and less $\beta$AR internalization. In the present work, rates of desensitization, internalization, and phosphorylation caused by a series of $\beta$AR agonists were correlated with a quantitative measure, defined as coupling efficiency, of agonist-dependent $\beta$AR activation of adenylyl cyclase. These studies were performed in HEK-293 cells overexpressing the $\beta$AR with hemagglutinin (HA) and 6-histidine (6HIS) epitopes introduced into the N- and C-termini, respectively. The agonists chosen provided a 95-fold range of coupling efficiencies; relative to epinephrine, the best agonist (100%), they were fenoterol (42%), albuterol (4.9%), dobutamine (2.5%), and ephedrine (1.1%). At concentrations of these agonists yielding $>$90% receptor occupancy, the rate and extent of the rapid phase (0-30 min) of agonist-induced desensitization of adenylyl cyclase followed the same order as coupling efficiency, that is, epinephrine $\ge$ fenoterol $>$ albuterol $>$ dobutamine $>$ ephedrine. The rate of internalization, measured as a loss of surface receptors during desensitization, also followed the same order as desensitization for these agonists and exhibited a slight lag. Like desensitization and internalization, $\beta$AR phosphorylation exhibited a dependency on agonist strength. The two strongest agonists, epinephrine and fenoterol, provoked 11- to 13-fold increases in the level of $\beta$AR phosphorylation after just 1 min, whereas the weakest agonists, dobutamine and ephedrine, caused only 3- to 4-fold increases in phosphorylation.
With longer treatment times, the level of $\beta$AR phosphorylation declined with the strong agonists but progressively increased with the weaker partial agonists. The major conclusion drawn from this study is that the occupancy-dependent rate of receptor phosphorylation increases with agonist coupling efficiency and that this is sufficient to explain the desensitization, internalization, and phosphorylation data obtained. The mechanism of activation and desensitization by the partial $\beta$AR agonist salmeterol was also examined in this study. This drug is extremely hydrophobic, and its study presents possibly unique problems. To determine whether salmeterol induces desensitization of the $\beta$AR, its action was studied using our system. Using reversible antagonists, it was found that salmeterol, which has an estimated coupling efficiency near that of albuterol, caused $\beta$AR desensitization. This desensitization was much reduced relative to epinephrine. Consistent with its coupling efficiency, salmeterol was similar to albuterol in its ability to induce internalization and phosphorylation of the $\beta$AR. (Abstract shortened by UMI.)
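The "$>$90% receptor occupancy" criterion used above follows from simple mass-action binding. A minimal sketch of that standard relation; the ligand concentrations and Kd are hypothetical placeholders, not values from the study:

```python
# Fractional receptor occupancy under simple mass-action binding.
# Kd and the concentration multiples are hypothetical placeholders.
def occupancy(conc, kd):
    # theta = [L] / ([L] + Kd)
    return conc / (conc + kd)

kd = 1.0
for mult in (1, 9, 99):
    # A concentration of 9 x Kd yields 90% occupancy; 99 x Kd yields 99%.
    print(f"[L] = {mult} x Kd -> occupancy = {occupancy(mult * kd, kd):.2f}")
```

Matching occupancy across agonists in this way is what lets differences in downstream rates be attributed to coupling efficiency rather than to differences in receptor binding.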
Abstract:
In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from random assignment of individual units may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Application of traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared to methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus, 1993). Multilevel models, also known as random-effects or random-components models, can be used to account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires the determination of sample sizes at each level. This study investigates the effect of various sampling strategies on the parameter estimates in a 3-level repeated-measures design when the outcome variable of interest follows a Poisson distribution. Results of this study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If convergence of the estimation algorithm is not obtained by the PQL procedure and the higher-level error variance is large, the estimates may be significantly biased. In this case, bias-correction techniques such as bootstrapping should be considered as an alternative procedure.
For larger sample sizes, structures with 20 or more units sampled at levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large. Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data." Biometrics, 49, 989-996.
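The abstract's starting point, that within-cluster correlation inflates the sampling variance of naive estimators, can be illustrated with a small Monte Carlo sketch of a two-level Poisson model with log-normal random intercepts. The sample sizes and level-2 variance below are illustrative assumptions, not the study's simulation design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sketch: level-2 random intercepts induce correlation within
# clusters, inflating the sampling variance of the overall mean relative
# to fully independent Poisson data of the same total size.
def simulate_mean(n_clusters, per_cluster, sigma_u):
    beta0 = 1.0                                    # fixed intercept (log scale)
    u = rng.normal(0.0, sigma_u, size=n_clusters)  # level-2 random effects
    lam = np.exp(beta0 + u)                        # cluster-specific rates
    y = rng.poisson(lam[:, None], size=(n_clusters, per_cluster))
    return y.mean()

reps = 2000
# 20 clusters x 20 observations, between-cluster variance 0.25 ...
var_clustered = np.var([simulate_mean(20, 20, 0.5) for _ in range(reps)])
# ... versus 400 fully independent observations (no cluster variance).
var_indep = np.var([simulate_mean(400, 1, 0.0) for _ in range(reps)])
print(var_clustered, var_indep)
```

With the cluster variance present, the overall mean's sampling variance is driven mainly by the number of clusters, which is why adding level-1 units helps so little, consistent with the design findings above.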
Abstract:
Most statistical analysis, in both theory and practice, is concerned with static models: models with a proposed set of parameters whose values are fixed across observational units. Static models implicitly assume that the quantified relationships remain the same across the design space of the data. While this is reasonable under many circumstances, it can be a dangerous assumption when dealing with sequentially ordered data. The mere passage of time always brings fresh considerations, and the interrelationships among parameters, or subsets of parameters, may need to be continually revised. When data are gathered sequentially, dynamic interim monitoring may be useful as new subject-specific parameters are introduced with each new observational unit. Sequential imputation via dynamic hierarchical models is an efficient strategy for handling missing data and analyzing longitudinal studies. Dynamic conditional independence models offer a flexible framework that exploits the Bayesian updating scheme to capture the evolution of both population and individual effects over time. While static models often describe aggregate information well, they often do not reflect conflicts in the information at the individual level. Dynamic models prove advantageous over static models in capturing both individual and aggregate trends. Computations for such models can be carried out via the Gibbs sampler. An application using small-sample, repeated-measures, normally distributed growth-curve data is presented.
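As a sketch of the Gibbs-sampling computation mentioned above, here is a minimal two-level (static, known-variance) normal hierarchical model. The data, dimensions, and fixed variances are illustrative assumptions; the dynamic models described in the abstract would additionally let parameters evolve as each new observational unit arrives:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal Gibbs sampler for a two-level normal hierarchical model:
# y_ij ~ N(theta_j, sigma2), theta_j ~ N(mu, tau2), flat prior on mu.
# Dimensions and variances are illustrative, not from the application.
J, n, sigma2, tau2 = 8, 5, 1.0, 2.0
true_theta = rng.normal(3.0, np.sqrt(tau2), J)          # subject effects
y = rng.normal(true_theta[:, None], np.sqrt(sigma2), size=(J, n))
ybar = y.mean(axis=1)

mu, draws = 0.0, []
for it in range(3000):
    # theta_j | mu, y : precision-weighted combination of ybar_j and mu
    prec = n / sigma2 + 1.0 / tau2
    mean = (n * ybar / sigma2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))
    # mu | theta : normal around the average subject effect
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / J))
    if it >= 500:                                       # discard burn-in
        draws.append(mu)
print("posterior mean of mu:", np.mean(draws))
```

The same alternating scheme extends to unknown variances (with inverse-gamma full conditionals) and to time-indexed parameters in the dynamic setting.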
Abstract:
Ageing societies suffer from an increasing incidence of bone fractures. Bone strength depends on the amount of mineral measured by clinical densitometry, but also on the micromechanical properties of the hierarchical organization of bone. Here, we investigate the mechanical response under monotonic and cyclic compression of both single osteonal lamellae and macroscopic samples containing numerous osteons. Micropillar compression tests in a scanning electron microscope, microindentation and macroscopic compression tests were performed on dry ovine bone to identify the elastic modulus, yield stress, plastic deformation, damage accumulation and failure mechanisms. We found that isolated lamellae exhibit a plastic behaviour, with higher yield stress and ductility but no damage. In agreement with a proposed rheological model, these experiments illustrate a transition from a ductile mechanical behaviour of bone at the microscale to a quasi-brittle response driven by the growth of cracks along interfaces or in the vicinity of pores at the macroscale.
Abstract:
Using 1.8 fb^-1 of pp collisions at a center-of-mass energy of 7 TeV recorded by the ATLAS detector at the Large Hadron Collider, we present measurements of the production cross sections of Upsilon(1S,2S,3S) mesons. Upsilon mesons are reconstructed using the dimuon decay mode. Total production cross sections for pT < 70 GeV and in the rapidity interval |y(Upsilon)| < 2.25 are measured to be 8.01 +/- 0.02 +/- 0.36 +/- 0.31 nb, 2.05 +/- 0.01 +/- 0.12 +/- 0.08 nb, and 0.92 +/- 0.01 +/- 0.07 +/- 0.04 nb, respectively, with uncertainties separated into statistical, systematic, and luminosity measurement effects. In addition, differential cross sections times dimuon branching fractions for Upsilon(1S), Upsilon(2S), and Upsilon(3S) as a function of Upsilon transverse momentum pT and rapidity are presented. These cross sections are obtained assuming unpolarized production. If the production polarization is fully transverse or longitudinal with no azimuthal dependence in the helicity frame, the cross section may vary by approximately +/- 20%. If a nontrivial azimuthal dependence is considered, integrated cross sections may be significantly enhanced by a factor of two or more. We compare our results to several theoretical models of Upsilon meson production, finding that none provide an accurate description of our data over the full range of Upsilon transverse momenta accessible with this data set.
Abstract:
The ATLAS experiment at the LHC has measured the production cross section of events with two isolated photons in the final state, in proton-proton collisions at sqrt(s) = 7 TeV. The full data set collected in 2011, corresponding to an integrated luminosity of 4.9 fb^-1, is used. The amount of background, from hadronic jets and isolated electrons, is estimated with data-driven techniques and subtracted. The total cross section, for two isolated photons with transverse energies above 25 GeV and 22 GeV respectively, in the acceptance of the electromagnetic calorimeter (|eta| < 1.37 and 1.52 < |eta| < 2.37) and with an angular separation Delta R > 0.4, is 44.0 (+3.2)(-4.2) pb. The differential cross sections as a function of the di-photon invariant mass, transverse momentum, azimuthal separation, and cosine of the polar angle of the largest transverse energy photon in the Collins-Soper di-photon rest frame are also measured. The results are compared to the prediction of leading-order parton-shower and next-to-leading-order and next-to-next-to-leading-order parton-level generators.
Abstract:
A search has been performed for the experimental signature of an isolated photon with high transverse momentum, at least one jet identified as originating from a bottom quark, and high missing transverse momentum. Such a final state may originate from supersymmetric models with gauge-mediated supersymmetry breaking in events in which one of a pair of higgsino-like neutralinos decays into a photon and a gravitino while the other decays into a Higgs boson and a gravitino. The search is performed using the full dataset of 7 TeV proton-proton collisions recorded with the ATLAS detector at the LHC in 2011, corresponding to an integrated luminosity of 4.7 fb^-1. A total of 7 candidate events are observed while 7.5 +/- 2.2 events are expected from the Standard Model background. The results of the search are interpreted in the context of general gauge mediation to exclude certain regions of a benchmark plane for higgsino-like neutralino production.
Abstract:
This paper presents the application of a variety of techniques to study jet substructure. The performance of various modified jet algorithms, or jet grooming techniques, for several jet types and event topologies is investigated for jets with transverse momentum larger than 300 GeV. Jets subjected to the mass-drop filtering, trimming, and pruning algorithms are found to have reduced sensitivity to multiple proton-proton interactions, to be more stable at high luminosity, and to improve the physics potential of searches for heavy boosted objects. Studies of the expected discrimination power of jet mass and jet substructure observables in searches for new physics are also presented. Event samples enriched in boosted W and Z bosons and top-quark pairs are used to study both the individual jet invariant mass scales and the efficacy of algorithms to tag boosted hadronic objects. The analyses presented use the full 2011 ATLAS dataset, corresponding to an integrated luminosity of 4.7 +/- 0.1 fb^-1 from proton-proton collisions produced by the Large Hadron Collider at a center-of-mass energy of sqrt(s) = 7 TeV.
Abstract:
Mass and angular distributions of dijets produced in LHC proton-proton collisions at a centre-of-mass energy sqrt(s) = 7 TeV have been studied with the ATLAS detector using the full 2011 data set with an integrated luminosity of 4.8 fb^-1. Dijet masses up to ~4.0 TeV have been probed. No resonance-like features have been observed in the dijet mass spectrum, and all angular distributions are consistent with the predictions of QCD. Exclusion limits on six hypotheses of new phenomena have been set at 95% CL in terms of mass or energy scale, as appropriate. These hypotheses include excited quarks below 2.83 TeV, colour-octet scalars below 1.86 TeV, heavy W bosons below 1.68 TeV, string resonances below 3.61 TeV, quantum black holes with six extra space-time dimensions for quantum gravity scales below 4.11 TeV, and quark contact interactions below a compositeness scale of 7.6 TeV in a destructive interference scenario.
Abstract:
A search for squarks and gluinos in final states containing jets, missing transverse momentum and no high-pT electrons or muons is presented. The data represent the complete sample recorded in 2011 by the ATLAS experiment in 7 TeV proton-proton collisions at the Large Hadron Collider, with a total integrated luminosity of 4.7 fb^-1. No excess above the Standard Model background expectation is observed. Gluino masses below 860 GeV and squark masses below 1320 GeV are excluded at the 95% confidence level in simplified models containing only squarks of the first two generations, a gluino octet and a massless neutralino, for squark or gluino masses below 2 TeV, respectively. Squarks and gluinos with equal masses below 1410 GeV are excluded. In minimal supergravity/constrained minimal supersymmetric Standard Model models with tan(beta) = 10, A0 = 0 and mu > 0, squarks and gluinos of equal mass are excluded for masses below 1360 GeV. Constraints are also placed on the parameter space of supersymmetric models with compressed spectra. These limits considerably extend the region of supersymmetric parameter space excluded by previous measurements with the ATLAS detector.