19 results for Instruction and study
in CentAUR: Central Archive University of Reading - UK
Abstract:
OBJECTIVES: This contribution provides a unifying concept for meta-analysis that integrates the handling of unobserved heterogeneity, study covariates, publication bias and study quality. It is important to consider these issues simultaneously to avoid the occurrence of artifacts, and a method for doing so is suggested here. METHODS: The approach is based upon the meta-likelihood in combination with a general linear nonparametric mixed model, which lays the ground for all inferential conclusions suggested here. RESULTS: The concept is illustrated with a meta-analysis investigating the relationship between hormone replacement therapy (HRT) and breast cancer. The phenomenon of interest has been investigated in many studies over a considerable time, and different results have been reported. In 1992 a meta-analysis by Sillero-Arenas et al. concluded that there was a small but significant overall effect of 1.06 on the relative risk scale. Using the meta-likelihood approach, it is demonstrated here that this meta-analysis is subject to considerable unobserved heterogeneity. Furthermore, it is shown that new methods are available to model this heterogeneity successfully. It is further argued that available study covariates should be included to explain this heterogeneity in the meta-analysis at hand. CONCLUSIONS: The topic of HRT and breast cancer has again very recently become an issue of public debate, when results of a large trial investigating the health effects of hormone replacement therapy were published, indicating an increased risk for breast cancer (risk ratio of 1.26). Using an adequate regression model for the previously published meta-analysis, an adjusted effect estimate of 1.14 can be given, which is considerably higher than the one published in the meta-analysis of Sillero-Arenas et al. In summary, it is hoped that the method suggested here contributes further to good meta-analytic practice in public health and clinical disciplines.
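The abstract's meta-likelihood machinery with a nonparametric mixed model is not reproduced here. As a minimal, hypothetical illustration of how between-study heterogeneity enters a pooled estimate, the sketch below instead implements the simpler and standard DerSimonian-Laird random-effects estimator, with invented study data:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects pooled estimate via the DerSimonian-Laird
    moment estimator of between-study variance tau^2.

    y : study effect estimates (e.g. log relative risks)
    v : their within-study variances
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Invented log relative risks and variances from a handful of studies
pooled, se, tau2 = dersimonian_laird(
    y=[0.06, -0.02, 0.20, 0.11], v=[0.01, 0.02, 0.015, 0.008])
print(np.exp(pooled), tau2)   # pooled RR and heterogeneity estimate
```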
Abstract:
The work reported in this paper is motivated by the development of a mathematical model for swarm systems based on macroscopic primitives. A pattern formation and transformation model is proposed. The pattern transformation model comprises two general methods for pattern transformation: a macroscopic transformation method and a mathematical transformation method. The problem of transformation is formally expressed and four special cases of transformation are considered. Simulations confirming the feasibility of the proposed models and transformation methods are presented. A comparison between the two transformation methods is also reported.
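The abstract does not specify the macroscopic primitives or the transformation methods themselves. Purely as a toy illustration of transforming one swarm pattern into another, the sketch below linearly interpolates each agent's target position between a line and a circle; all names and numbers are invented and this is not the paper's model:

```python
import numpy as np

n = 12                                      # number of agents
t = np.linspace(0.0, 1.0, n)
line = np.column_stack([t * 10.0, np.zeros(n)])          # source pattern
angles = 2.0 * np.pi * np.arange(n) / n
circle = np.column_stack([np.cos(angles), np.sin(angles)]) * 5.0  # target

def interpolate(source, target, s):
    """Agent positions at transformation stage s in [0, 1]."""
    return (1.0 - s) * source + s * target

for s in (0.0, 0.5, 1.0):
    print(s, interpolate(line, circle, s)[0])   # first agent's position
```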
Abstract:
Consider the statement "this project should cost X and has risk of Y". Such statements are used daily in industry as the basis for making decisions. The work reported here is part of a study aimed at providing a rational and pragmatic basis for such statements. Of particular interest are predictions made in the requirements and early phases of projects. A preliminary model has been constructed using Bayesian Belief Networks and, in support of this, a programme to collect and study data during the execution of various software development projects commenced in May 2002. The data collection programme is undertaken under the constraints of a commercial industrial regime of multiple concurrent small to medium scale software development projects. Guided by pragmatism, the work is predicated on the use of data that can be collected readily by project managers, including expert judgements, effort, elapsed times and metrics collected within each project.
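The structure of the authors' Bayesian Belief Network is not given in the abstract. The sketch below shows, with an invented two-node network and made-up probabilities, the kind of inference such a model supports: computing the probability of a cost overrun, then updating a belief from an observation by Bayes' rule:

```python
# Hypothetical two-node belief network: RequirementsVolatile -> CostOverrun.
# All node names and probabilities are invented for illustration.

# Prior: is the project's requirements phase volatile?
p_volatile = {True: 0.3, False: 0.7}

# Conditional probability table: P(cost overrun | volatile)
p_overrun_given = {True: 0.6, False: 0.2}

# Marginal probability of an overrun (sum over the parent)
p_overrun = sum(p_volatile[v] * p_overrun_given[v] for v in (True, False))

# Posterior by Bayes' rule: P(volatile | overrun observed)
p_volatile_given_overrun = p_volatile[True] * p_overrun_given[True] / p_overrun

print(f"P(overrun) = {p_overrun:.2f}")                    # 0.32
print(f"P(volatile | overrun) = {p_volatile_given_overrun:.2f}")  # 0.56
```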
Abstract:
Life-history traits vary substantially across species, and have been demonstrated to affect substitution rates. We compute genome-wide, branch-specific estimates of male mutation bias (the ratio of male-to-female mutation rates) across 32 mammalian genomes and study how these vary with life-history traits (generation time, metabolic rate, and sperm competition). We also investigate the influence of life-history traits on substitution rates at unconstrained sites across a wide phylogenetic range. We observe that increased generation time is the strongest predictor of variation in both substitution rates (for which it is a negative predictor) and male mutation bias (for which it is a positive predictor). Although less significant, we also observe that estimates of metabolic rate, reflecting replication-independent DNA damage and repair mechanisms, correlate negatively with autosomal substitution rates, and positively with male mutation bias. Finally, in contrast to expectations, we find no significant correlation between sperm competition and either autosomal substitution rates or male mutation bias. Our results support the important but frequently opposite effects of some, but not all, life-history traits on substitution rates. KEY WORDS: Generation time, genome evolution, metabolic rate, sperm competition.
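The abstract does not state the estimator, but a standard way to infer male mutation bias (Miyata et al. 1987) uses the ratio of X-chromosome to autosomal substitution rates, exploiting the fact that X chromosomes spend one third of their time in males and autosomes one half. A minimal sketch of that approach, with invented rates (it may differ from the authors' branch-specific method):

```python
def male_mutation_bias(rate_x, rate_autosome):
    """Estimate alpha = male/female mutation rate from X and
    autosomal substitution rates via Miyata's formula."""
    r = rate_x / rate_autosome          # observed X/A rate ratio
    # R = (2/3) * (2 + alpha) / (1 + alpha)  =>  solve for alpha
    return (4.0 - 3.0 * r) / (3.0 * r - 2.0)

# Hypothetical branch-specific rates (substitutions per site)
print(male_mutation_bias(rate_x=0.008, rate_autosome=0.010))  # alpha = 4.0
```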
Abstract:
Introduction: The rate of unplanned pregnancy in Australia remains high, which has contributed to Australia having one of the highest abortion rates of developed countries, with an estimated 1 in 5 women having an abortion. The emergency contraceptive pill (ECP) offers a safe way of preventing unintended pregnancy after unprotected sex has occurred. While the ECP has been available over-the-counter in Australian pharmacies for over a decade, its use has not significantly increased. This paper presents a protocol for a qualitative study that aims to identify the barriers and facilitators to accessing the ECP from community pharmacies in Australia. Methods and analysis: Data will be collected through semi-structured, in-depth, one-on-one interviews. Partnerships have been established with two pharmacy groups and two women's health organisations to aid with the recruitment of women and pharmacists for data collection purposes. Interview questions explore domains from the Theoretical Domains Framework in order to assess the factors aiding and/or hindering access to the ECP from community pharmacies. The data collected will be analysed using deductive content analysis. The expected benefit of this study is that it will help develop evidence-based workforce interventions to strengthen the capacity and performance of community pharmacists as key ECP providers. Ethics and dissemination: The findings will be disseminated to the research team and study partners, who will brainstorm ideas for interventions that would address the barriers and facilitators to access identified from the interviews. Dissemination will also occur through presentations and peer-reviewed publications, and the study participants will receive an executive summary of the findings. The study has been evaluated and approved by the Monash Human Research Ethics Committee.
Abstract:
Objectives: To examine doctors' (Experiment 1) and doctors' and lay people's (Experiment 2) interpretations of two sets of recommended verbal labels for conveying information about side-effect incidence rates. Method: Both studies used a controlled empirical methodology in which participants were presented with a hypothetical, but realistic, scenario involving a prescribed medication that was said to be associated with either mild or severe side effects. The probability of each side effect was described using one of the five descriptors advocated by the European Union (Experiment 1) or one of the six descriptors advocated in Calman's risk scale (Experiment 2), and study participants were required to estimate (numerically) the probability of each side effect occurring. Key findings: Experiment 1 showed that the doctors significantly overestimated the risk of side effects occurring when interpreting the five EU descriptors, compared with the assigned probability ranges. Experiment 2 showed that both groups significantly overestimated risk when given the six Calman descriptors, although the degree of overestimation was not as great for the doctors as for the lay people. Conclusion: On the basis of our findings, we argue that we are still a long way from achieving a standardised language of risk for use by both professionals and the general public, although there may be more potential for the use of standardised terms among professionals. In the meantime, the EU and other regulatory bodies and health professionals should be very cautious about advocating the use of particular verbal labels for describing medication side effects.
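For concreteness, the sketch below encodes the five EU verbal descriptors with their commonly cited assigned frequency bands (quoted from memory, so verify against the EU guideline before reuse) and measures how far a judged probability exceeds a band, which is the kind of overestimation the experiments quantify:

```python
# EU-recommended descriptors and assigned frequency bands (as commonly
# cited; an assumption here, not taken from the abstract).
eu_descriptors = {
    "very common": (0.10, 1.00),     # > 10%
    "common":      (0.01, 0.10),     # 1% - 10%
    "uncommon":    (0.001, 0.01),    # 0.1% - 1%
    "rare":        (0.0001, 0.001),  # 0.01% - 0.1%
    "very rare":   (0.0, 0.0001),    # < 0.01%
}

def overestimation(label, judged_probability):
    """How far a judged probability exceeds the band's upper bound."""
    low, high = eu_descriptors[label]
    return max(0.0, judged_probability - high)

# e.g. a participant judging "rare" side effects at 5% likelihood
print(overestimation("rare", 0.05))   # 0.049 above the assigned band
```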
Abstract:
Sampling a given solid angle is a fundamental operation in realistic image synthesis, where the rendering equation describing the light propagation in closed domains is solved. Monte Carlo methods for solving the rendering equation use sampling of the solid angle subtended by the unit hemisphere or unit sphere in order to perform the numerical integration of the rendering equation. In this work we consider the problem of generating uniformly distributed random samples over the hemisphere and sphere. Our aim is to construct and study a parallel sampling scheme for the hemisphere and sphere. First, we apply a symmetry property to partition the hemisphere and sphere. The domain of solid angle subtended by a hemisphere is divided into a number of equal sub-domains. Each sub-domain represents the solid angle subtended by an orthogonal spherical triangle with fixed vertices and computable parameters. Then we introduce two new algorithms for sampling orthogonal spherical triangles. Both algorithms are based on a transformation of the unit square. Like Arvo's algorithm for sampling an arbitrary spherical triangle, the suggested algorithms accommodate stratified sampling. We derive the necessary transformations for the algorithms. The first sampling algorithm generates a sample by mapping the unit square onto the orthogonal spherical triangle. The second algorithm directly computes the unit radius vector of a sampling point inside the orthogonal spherical triangle. The sampling of the total hemisphere and sphere is performed in parallel for all sub-domains simultaneously, using the symmetry of the partitioning. The applicability of the corresponding parallel sampling scheme to Monte Carlo and quasi-Monte Carlo solution of the rendering equation is discussed.
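The paper's spherical-triangle algorithms are not reproduced in the abstract. The sketch below shows only the classical area-preserving map from the unit square to the unit hemisphere (Archimedes' projection) that such unit-square-based schemes build on:

```python
import numpy as np

def sample_hemisphere(u, v):
    """Map (u, v) in [0,1)^2 to a uniformly distributed direction
    on the unit hemisphere around +z (area-preserving map)."""
    z = u                                   # cos(theta) uniform in [0,1)
    r = np.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * np.pi * v
    return np.array([r * np.cos(phi), r * np.sin(phi), z])

rng = np.random.default_rng(0)
dirs = np.array([sample_hemisphere(*rng.random(2)) for _ in range(1000)])
print(dirs.mean(axis=0))   # x, y means near 0; z mean near 0.5
```

Feeding stratified rather than independent (u, v) points through the same map yields the stratified sampling the abstract mentions.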
Abstract:
Tremor is a clinical feature characterized by oscillations of a part of the body. The detection and study of tremor is an important step in investigations seeking to explain the underlying control strategies of the central nervous system under natural (or physiological) and pathological conditions. It is well established that tremorous activity is composed of deterministic and stochastic components. For this reason, the use of digital signal processing (DSP) techniques which take into account the nonlinearity and nonstationarity of such signals may bring new information into the signal analysis that is often obscured by traditional linear techniques (e.g. Fourier analysis). In this context, this paper introduces the application of the empirical mode decomposition (EMD) and the Hilbert spectrum (HS), which are relatively new DSP techniques for the analysis of nonlinear and nonstationary time series, to the study of tremor. Our results, obtained from the analysis of experimental signals collected from 31 patients with different neurological conditions, showed that the EMD could automatically decompose acquired signals into basic components, called intrinsic mode functions (IMFs), representing tremorous and voluntary activity. The identification of a physical meaning for IMFs in the context of tremor analysis suggests an alternative and new way of detecting tremorous activity. These results may be relevant for those applications requiring automatic detection of tremor. Furthermore, the energy of the IMFs was visualized as a function of time and frequency by means of the HS. This analysis showed that the variation of energy of tremorous and voluntary activity could be distinguished and characterized on the HS. Such results may be relevant for those applications aiming to identify neurological disorders. In general, both the HS and the EMD proved to be very useful for objective analysis of any kind of tremor and can therefore potentially be used to perform functional assessment.
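A minimal sketch of the EMD-plus-Hilbert pipeline described above, applied to a synthetic signal (a slow "voluntary" oscillation plus a faster "tremor-like" one). It assumes the third-party PyEMD package (distributed as EMD-signal) and SciPy; it is not the authors' code:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD    # third-party "EMD-signal" package; assumed here

fs = 100.0                                    # sampling rate, Hz
t = np.arange(0, 10, 1.0 / fs)
signal = np.sin(2 * np.pi * 0.5 * t) + 0.4 * np.sin(2 * np.pi * 6.0 * t)

imfs = EMD().emd(signal)                      # intrinsic mode functions

for i, imf in enumerate(imfs):
    analytic = hilbert(imf)                   # analytic signal of the IMF
    amplitude = np.abs(analytic)              # instantaneous energy envelope
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency
    print(f"IMF {i}: median frequency {np.median(inst_freq):.1f} Hz")
```

Plotting each IMF's instantaneous amplitude against time and instantaneous frequency gives the Hilbert spectrum used to separate tremorous from voluntary activity.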
Abstract:
Eddy-covariance measurements of carbon dioxide fluxes were taken semi-continuously between October 2006 and May 2008 at 190 m height in central London (UK) to quantify emissions and study their controls. Inner London, with a population of 8.2 million (~5000 inhabitants per km²), is heavily built up, with 8% vegetation cover within the central boroughs. CO2 emissions were found to be mainly controlled by fossil fuel combustion (e.g. traffic, commercial and domestic heating). The measurement period allowed investigation of both diurnal patterns and seasonal trends. Diurnal averages of CO2 fluxes were found to be highly correlated to traffic. However, it was changes in heating-related natural gas consumption and, to a lesser extent, photosynthetic activity that controlled the seasonal variability. Despite measurements being taken at ca. 22 times the mean building height, coupling with street level was adequate, especially during daytime. Night-time saw a higher occurrence of stable or neutral stratification, especially in autumn and winter, which resulted in data loss in post-processing. No significant difference was found between the annual estimate of net exchange of CO2 for the expected measurement footprint and the values derived from the National Atmospheric Emissions Inventory (NAEI), with daytime fluxes differing by only 3%. This agreement with NAEI data also supported the use of the simple flux footprint model which was applied to the London site; this also suggests that individual roughness elements did not significantly affect the measurements due to the large ratio of measurement height to mean building height.
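At its core, the eddy-covariance flux is the covariance of the fluctuations of vertical wind speed w and CO2 concentration c over an averaging period, F = mean(w'c'). A minimal sketch with synthetic numbers (not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 18000                         # e.g. a 30-minute period sampled at 10 Hz
w = rng.normal(0.0, 0.3, n)       # vertical wind fluctuations, m/s
c = 0.5 * w + rng.normal(0.0, 1.0, n)   # correlated CO2 density, umol/m^3

w_prime = w - w.mean()            # deviations from the period mean
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)       # covariance = flux, umol m^-2 s^-1
print(f"CO2 flux: {flux:.3f} umol m^-2 s^-1")
```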
Abstract:
We study weak solutions for a class of free-boundary problems which includes as a special case the classical problem of travelling gravity waves on water of finite depth. We show that such problems are equivalent to problems in fixed domains and study the regularity of their solutions. We also prove that in very general situations the free boundary is necessarily the graph of a function.
Abstract:
In two separate studies, the cholesterol-lowering efficacy of a diet high in monounsaturated fatty acids (MUFA) was evaluated by means of a randomized crossover trial. In both studies subjects were randomized to receive either a high-MUFA diet or the control diet first, which they followed for a period of 8 weeks; following a washout period of 4–6 weeks they were transferred onto the opposing diet for a further period of 8 weeks. In one study subjects were healthy middle-aged men (n = 30), and in the other they were young men (n = 23) with a family history of CHD recruited from two centres (Guildford and Dublin). The two studies were conducted over the same time period using identical foods and study designs. Subjects consumed 38% energy as fat, with 18% energy as MUFA and 10% as saturated fatty acids (MUFA diet), or 13% energy as MUFA and 16% as saturated fatty acids (control diet). The polyunsaturated fatty acid content of each diet was 7%. The diets were achieved by providing subjects with manufactured foods such as spreads, ‘ready meals’, biscuits, puddings and breads, which, apart from their fatty acid compositions, were identical for both diets. Subjects were blind to which of the diets they were following on both arms of the study. Weight changes on the diets were less than 1 kg. In the groups combined (n = 53), mean total and LDL-cholesterol levels were significantly lower at the end of the MUFA diet than the control diet, by 0.29 (SD 0.61) mmol/l (P < 0.001) and 0.38 (SD 0.64) mmol/l (P < 0.0001) respectively. In middle-aged men these differences were due to a mean change in LDL-cholesterol of −11 (SD 12)% on the MUFA diet with no change on the control diet (−1.1 (SD 10)%). In young men the differences were due to an increase in LDL-cholesterol concentration on the control diet of +6.2 (SD 13)% and a decrease on the MUFA diet of −7.8 (SD 20)%. Differences in the responses of middle-aged and young men to the two diets did not appear to be due to differences in their habitual baseline diets, which were generally similar, but appeared to reflect the lower baseline cholesterol concentrations in the younger men. There was a moderately strong and statistically significant inverse correlation between the change in LDL-cholesterol concentration on each diet and the baseline fasting LDL-cholesterol concentration (r = −0.49; P < 0.0005). In conclusion, diets in which saturated fat is partially replaced by MUFA can achieve significant reductions in total and LDL-cholesterol concentrations, even when total fat and energy intakes are maintained. The dietary approach used to alter fatty acid intakes would be appropriate for achieving reductions in saturated fat intakes in whole populations.
Abstract:
Military doctrine is one of the conceptual components of war. Its raison d’être is that of a force multiplier: it enables a smaller force to take on and defeat a larger force in battle. This article’s departure point is the aphorism of Sir Julian Corbett, who described doctrine as ‘the soul of warfare’. The second dimension to creating a force multiplier effect is forging doctrine with an appropriate command philosophy. The challenge for commanders is how, in unique circumstances, to formulate, disseminate and apply an appropriate doctrine and combine it with a relevant command philosophy. This can only be achieved by policy-makers and senior commanders successfully answering the Clausewitzian question: what kind of conflict are they involved in? Once an answer has been provided, a synthesis of these two factors can be developed and applied. Doctrine has implications for all three levels of war. Tactically, doctrine does two things: first, it helps to create a tempo of operations; second, it develops a transitory quality that will produce operational effect and ultimately facilitate the pursuit of strategic objectives. At the tactical level its function is to provide both training and instruction; at the operational level, instruction and understanding are the critical functions; and at the strategic level it provides understanding and direction. Using John Gooch’s six components of doctrine, it will be argued that there is a lacuna in the theory of doctrine, as these components can manifest themselves in very different ways at the three levels of war. They can in turn affect the transitory quality of tactical operations. Doctrine is pivotal to success in war. Without doctrine and the appropriate command philosophy, military operations cannot be successfully concluded against an active and determined foe.
Abstract:
Background: The analysis of the Auditory Brainstem Response (ABR) is of fundamental importance to the investigation of auditory system behaviour, though its interpretation has a subjective nature because of the manual process employed in its study and the clinical experience required for its analysis. When analysing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time at which these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. Significant differences in inter-examiner results may lead to completely distinct clinical interpretations of the state of the auditory system. In this context, the aim of this research was to evaluate the inter-examiner agreement and variability in the manual classification of the ABR. Methods: A total of 160 ABR data samples were collected at four different stimulus intensities (80 dBHL, 60 dBHL, 40 dBHL and 20 dBHL) from 10 normal-hearing subjects (5 men and 5 women, aged 20 to 52 years). Four examiners with expertise in the manual classification of ABR components participated in the study. The Bland-Altman statistical method was employed for the assessment of inter-examiner agreement and variability. The mean, standard deviation and error of the bias, which is the difference between examiners’ annotations, were estimated for each pair of examiners. Scatter plots and histograms were employed for data visualization and analysis. Results: In most comparisons the differences between examiners’ annotations were below 0.1 ms, which is clinically acceptable. In four cases, a large error and standard deviation (>0.1 ms) were found, indicating the presence of outliers and thus discrepancies between examiners. Conclusions: Our results quantify the inter-examiner agreement and variability of the manual analysis of ABR data, and they also allow for the determination of different patterns of manual ABR analysis.
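A minimal Bland-Altman computation for one pair of examiners, using invented latency annotations in milliseconds (the study's data are not shown in the abstract):

```python
import numpy as np

examiner_a = np.array([5.60, 5.72, 5.55, 5.80, 5.65, 5.90])  # latencies, ms
examiner_b = np.array([5.63, 5.70, 5.60, 5.85, 5.62, 5.95])

diff = examiner_a - examiner_b           # per-sample disagreement
bias = diff.mean()                       # mean difference between examiners
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement

print(f"bias = {bias:.3f} ms, limits of agreement = "
      f"({loa[0]:.3f}, {loa[1]:.3f}) ms")
```

Plotting diff against the pairwise means, with horizontal lines at the bias and the limits of agreement, gives the standard Bland-Altman scatter plot the study describes.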
Abstract:
We study the degree to which Kraichnan–Leith–Batchelor (KLB) phenomenology describes two-dimensional energy cascades in α turbulence, governed by ∂θ/∂t + J(ψ,θ) = ν∇²θ + f, where θ = (−Δ)^(α/2)ψ is generalized vorticity and ψ̂(k) = k^(−α)θ̂(k) in Fourier space. These models differ in spectral non-locality, and include surface quasigeostrophic flow (α = 1), regular two-dimensional flow (α = 2) and rotating shallow flow (α = 3), which is the isotropic limit of a mantle convection model. We re-examine arguments for dual inverse energy and direct enstrophy cascades, including the Fjørtoft analysis, which we extend to general α, and point out their limitations. Using an α-dependent eddy-damped quasinormal Markovian (EDQNM) closure, we seek self-similar inertial-range solutions and study their characteristics. Our present focus is not on coherent structures, which the EDQNM filters out, but on any self-similar and approximately Gaussian turbulent component that may exist in the flow and be described by KLB phenomenology. For this, the EDQNM is an appropriate tool. Non-local triads contribute increasingly to the energy flux as α increases. More importantly, the energy cascade is downscale in the self-similar inertial range for 2.5 < α < 10. At α = 2.5 and α = 10, the KLB spectra correspond, respectively, to enstrophy and energy equipartition, and the triad energy transfers and flux vanish identically. Eddy turnover time and strain-rate arguments suggest the inverse energy cascade should obey KLB phenomenology and be self-similar for α < 4. However, downscale energy flux in the EDQNM self-similar inertial range for α > 2.5 leads us to predict that any inverse cascade for α ≥ 2.5 will not exhibit KLB phenomenology, and specifically the KLB energy spectrum. Numerical simulations confirm this: the inverse-cascade energy spectrum for α ≥ 2.5 is significantly steeper than the KLB prediction, while for α < 2.5 we obtain the KLB spectrum.
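One standard way to set up the Fjørtoft argument for general α, sketched below; this is a textbook-style version and not necessarily the authors' exact formulation. Energy E(k) and generalized enstrophy Z(k) = k^α E(k) are both conserved, so moving energy δE from wavenumber k into k/2 and 2k must satisfy both budgets:

```latex
% Two conservation constraints on a transfer of energy \delta E
% from wavenumber k to wavenumbers k/2 and 2k:
\begin{align}
  \delta E &= \delta E_{1} + \delta E_{2}, \\
  k^{\alpha}\,\delta E &= (k/2)^{\alpha}\,\delta E_{1}
                        + (2k)^{\alpha}\,\delta E_{2},
\end{align}
% with solution
\begin{equation}
  \delta E_{1} = \frac{2^{\alpha}-1}{2^{\alpha}-2^{-\alpha}}\,\delta E,
  \qquad
  \delta E_{2} = \frac{1-2^{-\alpha}}{2^{\alpha}-2^{-\alpha}}\,\delta E .
\end{equation}
% Most of the energy goes upscale (to k/2) for any alpha > 0; at
% alpha = 2 this recovers the classical 4/5 vs 1/5 split, while most
% of the generalized enstrophy goes downscale.
```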