208 results for Sampling rates
Abstract:
Objective: To examine whether Chinese studies of child sexual abuse (CSA) in the general population show lower prevalence rates than other international studies, and whether certain features of these studies may help to account for variation in estimates. Methods: A meta-analysis and meta-regression were conducted on 27 studies found in English- and Chinese-language peer-reviewed journals that involved general populations of students or residents, estimated CSA prior to age 18, and specified rates for males or females individually. Results: Estimates for Chinese females were lower than the international composites. For total CSA for females, the Chinese pooled estimate was 15.3% (95% CI = 12.6–18.0) based on the meta-analysis of 24 studies, lower than the international estimate (Stoltenborgh, van IJzendoorn, Euser, & Bakermans-Kranenburg, 2011) but not significantly so. For contact CSA for females, the pooled estimate was 9.5% (95% CI = 7.5–11.5), based on 16 studies, significantly lower than the international prevalence. For penetrative CSA for females, the pooled estimate was 1% (95% CI = 0.7–1.3), based on 15 studies, significantly lower than the international estimate of 15.1%. Chinese men reported significantly less penetrative CSA but significantly more total CSA than international estimates, while contact CSA reported by Chinese and international males appeared to be roughly equivalent. Chinese CSA prevalence estimates were lower in studies from urban areas and non-mainland areas (Hong Kong and Taiwan), and in surveys with larger samples, probability samples, multiple sites and face-to-face interviews, and when less widely used instruments were employed. Conclusions: The findings to date justify further research into possible cultural and sociological reasons for the lower risk of contact and penetrative sexual abuse of girls and less penetrative abuse of boys in China.
Future research should examine sociological explanations, including patterns of supervision, sexual socialization and attitudes related to male sexual prowess. Practice implications: The findings suggest that future general population studies in China should use well validated instruments, avoid face-to-face interview formats and be careful to maintain methodological standards when sampling large populations over multiple sites.
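The pooled-prevalence estimates reported above can be illustrated with a minimal inverse-variance pooling sketch. The study counts below are invented, and this is a simplified fixed-effect version; published meta-analyses such as this one typically pool logit-transformed proportions under a random-effects model:

```python
import math

# Hypothetical study data (events out of N) -- not the 24 studies in the paper.
studies = [(120, 1000), (45, 600), (200, 1500), (30, 400)]

# Inverse-variance pooling of raw proportions: each study is weighted by the
# reciprocal of its binomial sampling variance.
weights, estimates = [], []
for events, n in studies:
    p = events / n
    var = p * (1 - p) / n          # binomial variance of the proportion
    weights.append(1 / var)
    estimates.append(p)

pooled = sum(w * p for w, p in zip(weights, estimates)) / sum(weights)
se = math.sqrt(1 / sum(weights))   # standard error of the pooled estimate
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(f"pooled prevalence = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

The 95% CI follows from the pooled standard error in the same way as the intervals quoted in the abstract.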
Abstract:
Monitoring stream networks through time provides important ecological information. The sampling design problem is to choose the locations where measurements are taken so as to maximise the information gathered about physicochemical and biological variables on the stream network. This paper uses a pseudo-Bayesian approach, averaging a utility function over a prior distribution, to find a design that maximises the average utility. We use models for correlations of observations on the stream network that are based on stream network distances and described by moving average error models. The utility functions used reflect the needs of the experimenter, such as prediction of location values or estimation of parameters. We propose an algorithmic approach to design in which the mean utility of a design is estimated using Monte Carlo techniques and an exchange algorithm is used to search for optimal sampling designs. In particular, we focus on the problems of finding an optimal design from a set of fixed designs and of finding an optimal subset of a given set of sampling locations. As there are many different variables to measure, such as chemical, physical and biological measurements at each location, designs are derived from models based on different types of response variables: continuous, counts and proportions. We apply the methodology to a synthetic example and to the Lake Eacham stream network on the Atherton Tablelands in Queensland, Australia. We show that the optimal designs depend very much on the choice of utility function, varying from space-filling to clustered designs and mixtures of these, but that, given the utility function, designs are relatively robust to the type of response variable.
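The design search described above (Monte Carlo estimation of expected utility plus an exchange algorithm over candidate sites) can be sketched as follows. The candidate set, the space-filling toy utility and the noise model are all invented stand-ins; a real implementation would average a model-based utility (e.g. prediction variance under the moving-average error model) over prior draws:

```python
import random
random.seed(1)

# Candidate sampling locations on a line (stand-ins for stream-network sites).
candidates = list(range(20))

def utility(design):
    # Toy utility: minimum spacing of the design, a space-filling surrogate
    # for the prediction/estimation utilities discussed in the paper.
    d = sorted(design)
    return min(b - a for a, b in zip(d, d[1:]))

def expected_utility(design, n_mc=50):
    # Monte Carlo average over "prior draws"; here the randomness is just
    # additive noise, purely for illustration.
    return sum(utility(design) + random.gauss(0, 0.01) for _ in range(n_mc)) / n_mc

def exchange(design, n_iter=200):
    # Greedy exchange: swap one chosen site for an unused one, keep if better.
    best, best_u = list(design), expected_utility(design)
    for _ in range(n_iter):
        cand = list(best)
        i = random.randrange(len(cand))
        cand[i] = random.choice([c for c in candidates if c not in cand])
        u = expected_utility(cand)
        if u > best_u:
            best, best_u = cand, u
    return best, best_u

design, u = exchange(random.sample(candidates, 5))
print(sorted(design), round(u, 3))
```

The same loop applies to the paper's subset-selection problem: the candidate pool is the given set of sampling locations and the design is the chosen subset.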
Abstract:
Introduction: The culture in many team sports involves consumption of large amounts of alcohol after training/competition. The effect of such a practice on the recovery processes underlying protein turnover in human skeletal muscle is unknown. We determined the effect of alcohol intake on rates of myofibrillar protein synthesis (MPS) following strenuous exercise with carbohydrate (CHO) or protein ingestion. Methods: In a randomized cross-over design, 8 physically active males completed three experimental trials comprising resistance exercise (8×5 reps leg extension, 80% 1 repetition maximum) followed by continuous (30 min, 63% peak power output (PPO)) and high-intensity interval (10×30 s, 110% PPO) cycling. Immediately and 4 h post-exercise, subjects consumed either 500 mL of whey protein (25 g; PRO), alcohol (1.5 g·kg⁻¹ body mass, 12±2 standard drinks) co-ingested with protein (ALC-PRO), or an energy-matched quantity of carbohydrate also with alcohol (25 g maltodextrin; ALC-CHO). Subjects also consumed a CHO meal (1.5 g CHO·kg⁻¹ body mass) 2 h post-exercise. Muscle biopsies were taken at rest and 2 and 8 h post-exercise. Results: Blood alcohol concentration was elevated above baseline with ALC-CHO and ALC-PRO throughout recovery (P<0.05). Phosphorylation of mTOR (Ser2448) 2 h after exercise was higher with PRO compared to ALC-PRO and ALC-CHO (P<0.05), while p70S6K phosphorylation was higher 2 h post-exercise with ALC-PRO and PRO compared to ALC-CHO (P<0.05). Rates of MPS increased above rest for all conditions (~29–109%, P<0.05). However, compared to PRO, there was a hierarchical reduction in MPS with ALC-PRO (24%, P<0.05) and with ALC-CHO (37%, P<0.05). Conclusion: We provide novel data demonstrating that alcohol consumption reduces rates of MPS following a bout of concurrent exercise, even when co-ingested with protein.
We conclude that alcohol ingestion suppresses the anabolic response in skeletal muscle and may therefore impair recovery and adaptation to training and/or subsequent performance.
Abstract:
This chapter will begin with a brief summary of some recent research in the field of comparative penology. This work will be examined to explore the benefits, difficulties and limits of attempting to link criminal justice issues to types of advanced democratic polities, with particular emphasis on political economies. This stream of comparative penology examines data such as imprisonment rates and levels of punitiveness in different countries, before drawing conclusions based on the patterns which seem to emerge. Foremost among these is that the high-imprisoning countries tend to be the advanced western liberal democracies which have gone furthest in adopting neoliberal economic and social policies, as against the lower imprisonment rates of social democracies, which have attempted to temper free-market economic policies in various ways. Such work brings both social democracy and neoliberalism into focus as issues for, or subjects of, criminology. This is not in the sense of new 'brands' of criminology, but rather an examination of the connections between the political projects of social democracy and neoliberalism and issues of crime and criminal justice. In the new comparative penology, social democracy and neoliberalism are cast in opposition, simultaneously raising the questions of to what extent and how adequately both have been constituted as subjects in criminology, and whether dichotomy is the only available trope of analysis.
Abstract:
In this paper, a Bayesian hierarchical model is used to analyze female breast cancer mortality rates for the State of Missouri from 1969 through 2001. The logit transformations of the mortality rates are assumed to be linear over time, with additive spatial and age effects as intercepts and slopes. Objective priors for the hierarchical model are explored. The Bayesian estimates are quite robust to changes in the hyperparameters. Spatial correlations appear in both the intercepts and the slopes.
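The model's core assumption, that logit-transformed rates are linear in time, can be sketched minimally as below. The rates are invented, and ordinary least squares stands in for the posterior means that the full hierarchical model (with spatial and age effects on intercepts and slopes) would produce:

```python
import math

# Hypothetical yearly mortality rates for one county/age stratum
# (invented, not the Missouri data): rates drifting down over time.
years = list(range(10))
rates = [0.040, 0.039, 0.037, 0.036, 0.035, 0.033, 0.032, 0.031, 0.030, 0.029]

# Logit transform: logit(r) = log(r / (1 - r)).
logit = [math.log(r / (1 - r)) for r in rates]

# Ordinary least-squares fit of logit(rate) on year.
n = len(years)
xbar, ybar = sum(years) / n, sum(logit) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, logit)) / \
        sum((x - xbar) ** 2 for x in years)
intercept = ybar - slope * xbar
print(f"logit(rate) = {intercept:.3f} + {slope:.4f} * year")
```

In the paper's hierarchy, the intercept and slope would vary by county and age group rather than being single fixed numbers.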
Abstract:
The use of hierarchical Bayesian spatial models in the analysis of ecological data is increasingly prevalent. The implementation of these models has heretofore been limited to specifically written software that required extensive programming knowledge to create. The advent of WinBUGS provides access to Bayesian hierarchical models for those without the programming expertise to create their own models and allows for the more rapid implementation of new models and data analysis. This facility is demonstrated here using data collected by the Missouri Department of Conservation for the Missouri Turkey Hunting Survey of 1996. Three models are considered. The first uses the collected data to estimate the success rate for individual hunters at the county level and incorporates a conditional autoregressive (CAR) spatial effect. The second model builds upon the first by simultaneously estimating the success rate and harvest at the county level, while the third estimates the success rate and hunting pressure at the county level. These models are discussed in detail, as are their implementation in WinBUGS and the issues arising therein. Future areas of application and the latest developments in WinBUGS are discussed as well.
Abstract:
The deposition of biological material (biofouling) onto polymeric contact lenses is thought to be a major contributor to lens discomfort and hence discontinuation of wear. We describe a method to characterize lipid deposits directly from worn contact lenses utilizing liquid extraction surface analysis coupled to tandem mass spectrometry (LESA-MS/MS). This technique effected facile and reproducible extraction of lipids from the contact lens surfaces and identified lipid molecular species representing all major classes present in human tear film. Our data show that LESA-MS/MS is a rapid and comprehensive technique for the characterization of lipid-related biofouling on polymer surfaces.
Abstract:
Global cereal production will need to increase by 50% to 70% to feed a world population of about 9 billion by 2050. This intensification is forecast to occur mostly in subtropical regions, where warm and humid conditions can promote high N2O losses from cropped soils. To secure high crop production without exacerbating N2O emissions, new nitrogen (N) fertiliser management strategies are necessary. This one-year study evaluated the efficacy of a nitrification inhibitor (3,4-dimethylpyrazole phosphate, DMPP) and different N fertiliser rates in reducing N2O emissions in a wheat–maize rotation in subtropical Australia. Annual N2O emissions were monitored using a fully automated greenhouse gas measuring system. Four treatments were fertilised with different rates of urea: a control (40 kg N ha⁻¹ year⁻¹), a conventional N fertiliser rate adjusted for estimated residual soil N (120 kg N ha⁻¹ year⁻¹), a conventional N fertiliser rate (240 kg N ha⁻¹ year⁻¹), and a conventional N fertiliser rate (240 kg N ha⁻¹ year⁻¹) with the nitrification inhibitor DMPP applied at top dressing. The maize season was by far the main contributor to annual N2O emissions, due to the high soil moisture and temperature conditions as well as the elevated N rates applied. Annual N2O emissions in the four treatments amounted to 0.49, 0.84, 2.02 and 0.74 kg N2O-N ha⁻¹ year⁻¹, respectively, corresponding to emission factors of 0.29%, 0.39%, 0.69% and 0.16% of total N applied. Halving the annual conventional N fertiliser rate in the adjusted N treatment led to N2O emissions comparable to the DMPP treatment but substantially penalised maize yield. The application of DMPP produced a significant reduction in N2O emissions only in the maize season. The use of DMPP with urea at the conventional N rate reduced annual N2O emissions by more than 60% but did not affect crop yields.
The results of this study indicate that: (i) future strategies aimed at securing subtropical cereal production without increasing N2O emissions should focus on the fertilisation of the summer crop; (ii) adjusting conventional N fertiliser rates for estimated residual soil N is an effective practice to reduce N2O emissions but can lead to substantial yield losses if the residual soil N is not assessed correctly; and (iii) the application of DMPP is a feasible strategy to reduce annual N2O emissions from subtropical wheat–maize rotations. However, at the N rates tested in this study, DMPP urea did not increase crop yields, making it impossible to recoup the extra costs associated with this fertiliser. The findings of this study will support farmers and policy makers in defining effective fertilisation strategies to reduce N2O emissions from subtropical cereal cropping systems while maintaining high crop productivity. More research is needed to assess whether DMPP urea allows conventional N fertiliser rates to be reduced, which would lower fertilisation costs and further abate fertiliser-induced N2O emissions.
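Emission factors like those quoted above express fertiliser-induced N2O as a share of the N applied. A common difference-method convention is sketched below with invented numbers; the study's own EFs are stated as percentages "of total N applied", so its background correction may differ from this convention:

```python
# Difference-method N2O emission factor (EF): fertiliser-induced emissions
# (emissions above a control) as a percentage of the extra N applied.
def emission_factor(e_fert, e_control, n_fert, n_control):
    """EF (%) = 100 * (emissions above control) / (N applied above control)."""
    return 100 * (e_fert - e_control) / (n_fert - n_control)

# Invented numbers, not the study's treatments:
# 2.0 vs 0.5 kg N2O-N/ha/yr emitted, 240 vs 40 kg N/ha/yr applied.
ef = emission_factor(e_fert=2.0, e_control=0.5, n_fert=240, n_control=40)
print(f"EF = {ef:.2f}% of added N")  # 100 * 1.5 / 200 = 0.75
```

The same arithmetic explains why DMPP lowers the EF: it reduces the numerator (emissions) while the N applied stays fixed.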
Abstract:
Current military conflicts are characterized by the use of the improvised explosive device. Improvements in personal protection, medical care, and evacuation logistics have resulted in increasing numbers of casualties surviving with complex musculoskeletal injuries, often leading to lifelong disability. Thus, there exists an urgent requirement to investigate the mechanism of extremity injury caused by these devices in order to develop mitigation strategies. In addition, the wounds of war are no longer restricted to the battlefield; similar injuries can be witnessed in civilian centers following a terrorist attack. Key to understanding such mechanisms of injury is the ability to deconstruct the complexities of an explosive event into a controlled, laboratory-based environment. In this article, a traumatic injury simulator, designed to recreate in the laboratory the impulse that is transferred to the lower extremity from an anti-vehicle explosion, is presented and characterized experimentally and numerically. Tests with instrumented cadaveric limbs were then conducted to assess the simulator’s ability to interact with the human in two mounting conditions, simulating typical seated and standing vehicle passengers. This experimental device will now allow us to (a) gain comprehensive understanding of the load-transfer mechanisms through the lower limb, (b) characterize the dissipating capacity of mitigation technologies, and (c) assess the bio-fidelity of surrogates.
Abstract:
Background: Most studies examining determinants of rising rates of caesarean section have examined patterns in documented reasons for caesarean over time in a single location. Further insights could be gleaned from cross-cultural research that examines practice patterns in locations with disparate rates of caesarean section at a single time point. Methods: We compared both rates of and main reason for pre-labour and intrapartum caesarean between England and Queensland, Australia, using data from retrospective cross-sectional surveys of women who had recently given birth in England (n = 5,250) and Queensland (n = 3,467). Results: Women in Queensland were more likely to have had a caesarean birth (36.2%) than women in England (25.1% of births; OR = 1.44, 95% CI = 1.28-1.61), after adjustment for obstetric characteristics. Between-country differences were found for rates of pre-labour caesarean (21.2% vs. 12.2%) but not for intrapartum caesarean or assisted vaginal birth. Compared to women in England, women in Queensland with a history of caesarean were more likely to have had a pre-labour caesarean and more likely to have had an intrapartum caesarean, due only to a previous caesarean. Among women with no previous caesarean, Queensland women were more likely than women in England to have had a caesarean due to suspected disproportion and failure to progress in labour. Conclusions: The higher rates of caesarean birth in Queensland are largely attributable to higher rates of caesarean for women with a previous caesarean, and for the main reason of having had a previous caesarean. Variation between countries may be accounted for by the absence of a single, comprehensive clinical guideline for caesarean section in Queensland. Keywords: Caesarean section; Childbirth; Pregnancy; Cross-cultural comparison; Vaginal birth after caesarean; Previous caesarean section; Patient-reported data; Quality improvement
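As a check on the figures quoted above, the crude (unadjusted) odds ratio can be computed directly from the two caesarean rates; it exceeds the reported OR of 1.44 because the latter is adjusted for obstetric characteristics:

```python
# Crude odds ratio from the two caesarean rates quoted in the abstract:
# 36.2% in Queensland vs 25.1% in England.
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

crude_or = odds(0.362) / odds(0.251)
print(f"crude OR = {crude_or:.2f}")  # prints "crude OR = 1.69"
```

The gap between 1.69 and 1.44 is the portion of the between-country difference explained by obstetric case mix.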
Abstract:
Mammographic density (MD) adjusted for age and body mass index (BMI) is a strong heritable breast cancer risk factor; however, its biological basis remains elusive. Previous studies assessed MD-associated histology using random sampling approaches, despite evidence that high and low MD areas exist within a breast and are negatively correlated with respect to one another. We have used an image-guided approach to sample high and low MD tissues from within individual breasts to examine the relationship between histology and degree of MD. Image-guided sampling was performed using two different methodologies on mastectomy tissues (n = 12): (1) sampling of high and low MD regions within a slice guided by bright (high MD) and dark (low MD) areas in a slice X-ray film; (2) sampling of high and low MD regions within a whole breast using a stereotactically guided vacuum-assisted core biopsy technique. Pairwise analysis accounting for potential confounders (e.g. age, BMI and menopausal status) provides appropriate power for analysis despite the small sample size. High MD tissues had higher stromal (P = 0.002) and lower fat (P = 0.002) compositions, but no evidence of difference in glandular areas (P = 0.084) compared to low MD tissues from the same breast. High MD regions had higher relative gland counts (P = 0.023), and a preponderance of Type I lobules in high MD compared to low MD regions was observed in 58% of subjects (n = 7), but this did not achieve significance. These findings clarify the histologic nature of high MD tissue and support hypotheses regarding the biophysical impact of dense connective tissue on mammary malignancy. They also provide important terms of reference for ongoing analyses of the underlying genetics of MD.
Abstract:
As part of a wider study to develop an ecosystem-health monitoring program for wadeable streams of south-eastern Queensland, Australia, comparisons were made regarding the accuracy, precision and relative efficiency of single-pass backpack electrofishing and multiple-pass electrofishing plus supplementary seine netting to quantify fish assemblage attributes at two spatial scales (within discrete mesohabitat units and within stream reaches consisting of multiple mesohabitat units). The results demonstrate that multiple-pass electrofishing plus seine netting provide more accurate and precise estimates of fish species richness, assemblage composition and species relative abundances in comparison to single-pass electrofishing alone, and that intensive sampling of three mesohabitat units (equivalent to a riffle-run-pool sequence) is a more efficient sampling strategy to estimate reach-scale assemblage attributes than less intensive sampling over larger spatial scales. This intensive sampling protocol was sufficiently sensitive that relatively small differences in assemblage attributes (<20%) could be detected with a high statistical power (1-β > 0.95) and that relatively few stream reaches (<4) need be sampled to accurately estimate assemblage attributes close to the true population means. The merits and potential drawbacks of the intensive sampling strategy are discussed, and it is deemed to be suitable for a range of monitoring and bioassessment objectives.
Abstract:
We examine some variations of standard probability designs that preferentially sample sites based on how easy they are to access. Preferential sampling designs deliver unbiased estimates of the mean and sampling variance and ease the burden of data collection, but at what cost to design efficiency? Preferential sampling has the potential to either increase or decrease sampling variance, depending on the application. We carry out a simulation study to gauge what effect it has when sampling Soil Organic Carbon (SOC) values in a large agricultural region in south-eastern Australia. Preferential sampling in this region can reduce the distance to travel by up to 16%. Our study is based on a dataset of predicted SOC values produced from a datamining exercise. We consider three designs and two ways to determine ease of access. The overall conclusion is that sampling performance deteriorates as the strength of preferential sampling increases, because the regions of high SOC are harder to access, so our designs inadvertently target regions of low SOC value. The good news, however, is that Generalised Random Tessellation Stratification (GRTS) sampling designs are not as badly affected as others, and GRTS remains an efficient design compared to competitors.
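The claim that preferential designs stay unbiased rests on weighting each observation by its inclusion probability. A minimal Horvitz-Thompson sketch with an invented SOC surface and invented access-based inclusion probabilities illustrates this: averaged over repeated draws, the estimator recovers the true mean even though easy-to-access (low-SOC) sites are sampled more often:

```python
import random
random.seed(0)

# Toy population of SOC values; sites with low index are "easy to access".
# Invented numbers, not the datamined SOC surface from the paper.
soc = [1.0 + 0.1 * i for i in range(100)]          # SOC rises with distance
incl = [2.0 / (1 + i / 25) for i in range(100)]    # easier sites more likely
total = sum(incl)
incl = [p * 20 / total for p in incl]              # expected sample size: 20

def ht_mean(values, pis):
    # Horvitz-Thompson estimator of the population mean: each sampled value
    # is weighted by 1/pi_i, which corrects the preferential inclusion.
    sample = [(v, p) for v, p in zip(values, pis) if random.random() < p]
    return sum(v / p for v, p in sample) / len(values)

est = sum(ht_mean(soc, incl) for _ in range(2000)) / 2000
print(f"HT mean = {est:.2f}  (true mean = {sum(soc)/len(soc):.2f})")
```

Unbiasedness is not the whole story, though: as the paper notes, concentrating the sample on low-SOC sites inflates the variance of each individual estimate.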
Abstract:
The aim of this work was to investigate changes in particle number concentration (PNC) within naturally ventilated primary school classrooms arising from local sources either within or adjacent to the classrooms. We quantify the rate at which ultrafine particles were emitted from printing, grilling, heating or cleaning activities, and the rate at which the particles were removed by both deposition and air exchange processes. At each of 25 schools in Brisbane, Australia, two weeks of measurements of PNC and CO2 were taken both outdoors and in two classrooms. Bayesian regression modelling was employed to estimate the relevant rates and to analyse the relationship between the air exchange rate (AER), particle infiltration and the deposition rates of particles generated by indoor activities in the classrooms. During school hours, grilling events at the school tuckshop as well as heating and printing in the classrooms led to indoor PNCs being elevated by a factor of more than four, with emission rates of (2.51 ± 0.25) × 10¹¹, (8.99 ± 6.70) × 10¹¹ and (5.17 ± 2.00) × 10¹¹ particles min⁻¹, respectively. During non-school hours, cleaning events elevated indoor PNC by a factor of more than five, with an average emission rate of (2.09 ± 6.30) × 10¹¹ particles min⁻¹. Particles were removed by both air exchange and deposition: chiefly by ventilation when AER > 0.7 h⁻¹ and by deposition when AER < 0.7 h⁻¹.
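The competition between air exchange and deposition described above follows from a single-zone mass balance, dC/dt = E/V + a·C_out − (a + k)·C, where a is the AER and k the deposition rate. All values below are invented, not the Brisbane estimates:

```python
# Single-zone mass balance for indoor particle concentration C(t):
#   dC/dt = E/V + a*C_out - (a + k)*C
# E: indoor emission rate, V: room volume, a: air exchange rate (AER),
# k: deposition rate, C_out: outdoor concentration. Invented values.
E = 2.5e11      # particles per hour emitted indoors
V = 200e3       # room volume in litres -> concentration in particles/L
a = 0.7         # AER, per hour
k = 0.5         # deposition rate, per hour
C_out = 5.0e3   # outdoor concentration, particles/L

# Forward-Euler integration over 8 hours.
C, dt = C_out, 0.01
for _ in range(800):
    C += dt * (E / V + a * C_out - (a + k) * C)

# Analytical steady state: C* = (E/V + a*C_out) / (a + k)
C_star = (E / V + a * C_out) / (a + k)
print(f"C(8h) = {C:.3e}, steady state = {C_star:.3e} particles/L")
```

The total removal rate is a + k, so the abstract's 0.7 h⁻¹ threshold marks where the two removal pathways swap dominance.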
Abstract:
Self-reported health status measures are generally used to analyse Social Security Disability Insurance (SSDI) application and award decisions, as well as the relationship between its generosity and labour force participation. Due to endogeneity and measurement error, the use of self-reported health and disability indicators as explanatory variables in economic models is problematic. We employ county-level aggregate data, instrumental variables and spatial econometric techniques to analyse the determinants of variation in SSDI rates, explicitly accounting for the endogeneity and measurement error of the self-reported disability measure. Two surprising results are found. First, it is shown that measurement error is the dominant source of bias and that the main source of measurement error is sampling error. Second, the results suggest that there may be synergies in applying for SSDI when the disabled population is larger. © 2011 Taylor & Francis.