Abstract:
Many industry peak and professional bodies advocate that students undertake professional work placements as a key work integrated learning (WIL) experience in accredited university degree courses. However, mismatched expectations and gaps in the way industry partners (IPs) are supported during these work placements can place these high-stakes alliances at risk. A review of models and strategies supporting industry partners indicates many are contingent on the continued efforts of well-networked individuals in both universities and IP organisations to make these connections work. It is argued that, whilst these individuals are highly valued, they often end up representing a whole course or industry perspective, not just their own area of expertise. Sustainable partnership principles and practices with shared responsibility across stakeholder groups are needed instead. This paper provides an overview of work placement approaches in the disciplines of business, engineering and urban development at an Australian metropolitan university. Employing action research and participatory focus group methodologies, it gathers and articulates practical suggestions and strategies from associated IPs to improve relationships and the resultant quality of placements.
Abstract:
The current approach for protecting the receiving water environment from urban stormwater pollution is the adoption of structural measures commonly referred to as Water Sensitive Urban Design (WSUD). The treatment efficiency of WSUD measures closely depends on the design of the specific treatment units. As stormwater quality can be influenced by rainfall characteristics, the selection of appropriate rainfall events for treatment design is essential to ensure the effectiveness of WSUD systems. Based on extensive field investigation of four urban residential catchments and computer modelling, this paper details a technically robust approach for the selection of rainfall events for stormwater treatment design using a three-component model. The modelling outcomes indicate that selecting smaller average recurrence interval (ARI) events of high intensity and short duration as the threshold for treatment system design is the most feasible option, since these events cumulatively generate a major portion of the annual pollutant load compared to other types of rainfall events, despite producing a relatively small runoff volume. This implies that designs based on small, more frequent rainfall events rather than larger rainfall events would be appropriate in terms of treatment performance efficiency, cost-effectiveness and possible savings in the land area needed.
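The event-selection argument above can be made concrete with a minimal sketch: rank rainfall-event classes by their cumulative share of the annual pollutant load against their share of runoff volume. The table below is hypothetical (the ari_class, runoff_m3 and pollutant_kg columns and values are invented, not the study's data):

```python
# Illustrative sketch only: compare ARI classes by pollutant-load share vs.
# runoff-volume share, mirroring the design argument in the abstract above.
import pandas as pd

events = pd.DataFrame({
    "ari_class":    ["<1yr", "<1yr", "1-2yr", "2-5yr", ">5yr"],   # hypothetical classes
    "runoff_m3":    [120.0, 95.0, 400.0, 900.0, 2500.0],
    "pollutant_kg": [8.5, 7.2, 9.0, 6.5, 4.8],
})

by_class = events.groupby("ari_class")[["runoff_m3", "pollutant_kg"]].sum()
by_class["load_share_%"] = 100 * by_class["pollutant_kg"] / by_class["pollutant_kg"].sum()
by_class["runoff_share_%"] = 100 * by_class["runoff_m3"] / by_class["runoff_m3"].sum()
# Small-ARI classes carrying a large load share with a small runoff share
# would be the design threshold candidates under this reasoning.
print(by_class.sort_values("load_share_%", ascending=False))
```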
Abstract:
Background: Genome-wide association studies (GWAS) have identified more than 100 genetic loci for various cancers. However, only one has been identified for endometrial cancer. Methods: We conducted a three-stage GWAS including 8,492 endometrial cancer cases and 16,596 controls. After analyzing 585,963 single-nucleotide polymorphisms (SNP) in 832 cases and 2,682 controls (stage I) from the Shanghai Endometrial Cancer Genetics Study, we selected the top 106 SNPs for in silico replication among 1,265 cases and 5,190 controls from the Australian/British Endometrial Cancer GWAS (stage II). Nine SNPs showed results consistent in direction with stage I with P < 0.1. These nine SNPs were investigated among 459 cases and 558 controls (stage IIIa) and six SNPs showed a direction of association consistent with stages I and II. These six SNPs, plus two additional SNPs selected on the basis of linkage disequilibrium and P values in stage II, were investigated among 5,936 cases and 8,166 controls from an additional 11 studies (stage IIIb). Results: SNP rs1202524, near the CAPN9 gene on chromosome 1q42.2, showed a consistent association with endometrial cancer risk across all three stages, with ORs of 1.09 [95% confidence interval (CI), 1.03–1.16] for the A/G genotype and 1.17 (95% CI, 1.05–1.30) for the G/G genotype (P = 1.6 × 10⁻⁴ in combined analyses of all samples). The association was stronger when limited to the endometrioid subtype, with ORs (95% CI) of 1.11 (1.04–1.18) and 1.21 (1.08–1.35), respectively (P = 2.4 × 10⁻⁵). Conclusions: Chromosome 1q42.2 may host an endometrial cancer susceptibility locus. Impact: This study identified a potential genetic locus for endometrial cancer risk.
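A genotype odds ratio with its Wald 95% CI, of the kind reported above, can be computed from a 2 × 2 table of carrier counts. The sketch below uses made-up case/control counts (the function name and inputs are illustrative, not the study's data):

```python
# Hedged sketch: odds ratio and Wald 95% CI for one genotype vs. the reference
# genotype, from hypothetical case/control counts.
import math

def odds_ratio_ci(cases_exp, controls_exp, cases_ref, controls_ref, z=1.96):
    """cases_exp/controls_exp: counts with the genotype of interest;
    cases_ref/controls_ref: counts with the reference genotype."""
    or_ = (cases_exp * controls_ref) / (controls_exp * cases_ref)
    se = math.sqrt(1/cases_exp + 1/controls_exp + 1/cases_ref + 1/controls_ref)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Example with invented counts for an A/G genotype vs. an A/A reference:
print(odds_ratio_ci(cases_exp=420, controls_exp=1250, cases_ref=380, controls_ref=1230))
```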
Abstract:
The purpose of this qualitative interpretative case study was to explore how the National Assessment Program – Literacy and Numeracy (NAPLAN) requirements may be affecting the pedagogies of two Year 3, Year 5 and Year 7 teachers at two Queensland schools. The perceived problem was that standardised NAPLAN assessment practices, and NAPLAN's growing status as a key measure of education quality throughout Australia, have the potential to limit the everyday literacy and numeracy practices of teachers to instructional methods primarily focused on teaching to the test. The findings demonstrate how increased explicit teaching of NAPLAN content and procedural knowledge prior to testing has the potential to negatively impact the teaching of everyday literacy and numeracy skills and knowledge that extend beyond those concerned with NAPLAN. Such teaching limited opportunities for what teachers reported as valued collaborative learning contexts aiming for long-term literacy and numeracy results.
Abstract:
Background and significance: Nurses' job dissatisfaction is associated with negative nursing and patient outcomes. One of the most powerful reasons for nurses to stay in an organisation is satisfaction with leadership. However, nurses are frequently promoted to leadership positions without appropriate preparation for the role. Although a number of leadership programs have been described, none have been tested for effectiveness using a randomised controlled trial methodology. Aims: The aims of this research were to develop an evidence-based leadership program and to test its effectiveness on nurse unit managers' (NUMs') and nursing staff's (NS's) job satisfaction, and on the leader behaviour scores of nurse unit managers. Methods: First, the study used a comprehensive literature review to examine the evidence on job satisfaction, leadership and front-line manager competencies. From this evidence a summary of leadership practices was developed to construct a two-component leadership model. The components of this model were then combined with the evidence distilled from previous leadership development programs to develop a Leadership Development Program (LDP). This evidence informed the program's design, its contents, teaching strategies and learning environment. Central to the LDP were the evidence-based leadership practices associated with increasing nurses' job satisfaction. A randomised controlled trial (RCT) design was employed to test the effectiveness of the LDP. An RCT is one of the most powerful research designs, and its use makes this study unique, as an RCT had never previously been used to evaluate a leadership program for front-line nurse managers. Thirty-nine consenting nurse unit managers from a large tertiary hospital were randomly allocated to receive either the leadership program or only the program's written information about leadership. Demographic baseline data were collected from participants in the NUM groups and the nursing staff who reported to them. Validated questionnaires measuring job satisfaction and leader behaviours were administered to the nurse unit managers and to the NS at baseline, at three months after the commencement of the intervention and at six months after the commencement of the intervention. Independent and paired t-tests were used to analyse continuous outcome variables and chi-square tests were used for categorical data. Results: The study found that the nurse unit managers' overall job satisfaction score was higher at 3 months (p = 0.016) and at 6 months (p = 0.027) post commencement of the intervention in the intervention group compared with the control group. Similarly, at 3-month testing, mean scores in the intervention group were higher in five of the six "positive" sub-categories of the leader behaviour scale when compared to the control group, with a significant difference in one sub-category: effectiveness (p = 0.015). No differences were observed in leadership behaviour scores between groups by 6 months post commencement of the intervention. Over time, at 3-month and 6-month testing, there were significant increases in four transformational leader behaviour scores and in one positive transactional leader behaviour score in the intervention group. Over time at 3-month testing, there were significant increases in the three leader behaviour outcome scores; however, at 6-month testing only one of these leader behaviour outcome scores remained significantly increased.
Job satisfaction scores did not differ significantly between the NS groups at three months and at six months post commencement of the intervention. However, over time within the intervention group at 6-month testing there was a significant increase in the job satisfaction scores of NS. There were no significant increases in NUM leader behaviour scores in the intervention group, as rated by the nursing staff who reported to them. Over time, at 3-month testing, NS rated nurse unit managers' leader behaviour scores significantly lower in two leader behaviours and two leader behaviour outcome scores. At 6-month testing, over time, one leader behaviour score was rated significantly lower and the non-transactional leader behaviour was rated significantly higher. Discussion: The study represents the first attempt to test the effectiveness of a leadership development program (LDP) for nurse unit managers using an RCT. The program's design, contents, teaching strategies and learning environment were based on a summary of the literature. The overall improvement in role satisfaction was sustained for at least 6 months post intervention. The study's results may reflect the program's evidence-based approach to developing the LDP, which increased the nurse unit managers' confidence in their role and thereby their job satisfaction. Two other factors possibly contributed to nurse unit managers' increased job satisfaction scores: the program's teaching strategies, which included the involvement of the hospital's executive nursing team, and the fact that the LDP provided recognition of the importance of the NUM role within the hospital. Consequently, participating in the program may have led to nurse unit managers feeling valued and rewarded for their service, and hence more satisfied. Leadership behaviours remaining unchanged between groups at the 6-month data collection point may indicate that the LDP needs to be conducted over a longer period. This is suggested because, within the intervention group, there were significant increases in self-reported leader behaviours over time at 3 and 6 months. The lack of significant changes in leader behaviour scores between groups may equally signify that leader behaviours require different interventions to achieve change. Nursing staff results suggest that the LDP's design needs to consider involving NS in the program's aims and progress from the outset. It is also possible that including regular feedback from NS to the nurse unit managers during the LDP may alter NS's job satisfaction and their perception of nurse unit managers' leader behaviours. Conclusion/Implications: This study highlights the value of providing an evidence-based leadership program to nurse unit managers to increase their job satisfaction. The evidence-based leadership program increased job satisfaction, but its effect on leadership behaviour was only seen over time. Further research is required to test interventions that attempt to change leader behaviours. Further research on NS's job satisfaction is also required to test the indirect effects of LDPs on NS whose nurse unit managers participate in them.
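The stated analysis plan (independent and paired t-tests for continuous outcomes, chi-square tests for categorical data) maps directly onto standard routines. A sketch with made-up scores, not the trial's data:

```python
# Sketch of the stated analysis plan on invented data: between-group t-test,
# within-group paired t-test, and a chi-square test on a 2x2 baseline table.
import numpy as np
from scipy import stats

intervention = np.array([5.1, 5.4, 4.9, 5.8, 5.6])   # e.g., job satisfaction scores
control      = np.array([4.6, 4.8, 4.5, 5.0, 4.7])
print(stats.ttest_ind(intervention, control))         # intervention vs. control

baseline  = np.array([4.2, 4.5, 4.1, 4.8, 4.6])
six_month = np.array([4.9, 5.2, 4.6, 5.5, 5.1])
print(stats.ttest_rel(baseline, six_month))           # change over time within a group

contingency = np.array([[12, 8], [9, 10]])            # group x demographic category
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(chi2, p)
```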
Abstract:
One of the challenges confronting contemporary education internationally is to ensure that students are provided with opportunities to make informed choices about future careers and to acquire the capacity to transition into these careers. Schools need to manage their curricula, teacher capacity, timetables, and diversity of student populations by offering pathways that are seen as engaging and meaningful to life beyond schooling. Traditionally, education in the senior years has privileged those students who intend to progress to advanced studies at university or in other professional careers. In more recent times, in response to the need for more sophisticated technical knowledge in the trades and growing skills shortages in these fields, schools have paid more attention to vocational education. It has been argued that the vocational aspect of the school curriculum is less well understood and poorly implemented in comparison with the traditional academic curricula. One attempt to address this issue is through the establishment of school-industry partnerships. This paper explores the process of knowledge transfer between industry and schools in these partnerships. The paper theorises how knowledge that is valued and foundational in workplace employment can inform school curricula and pedagogical practices. The paper draws on theories of organisational knowledge, workplace learning and experiential learning to explore strategies that enhance school-to-employment transition outcomes.
Abstract:
The current state of knowledge in relation to first flush does not provide a clear understanding of the role of rainfall and catchment characteristics in influencing this phenomenon. This is attributed to inconsistent findings from research studies due to the unsatisfactory selection of first flush indicators and the way first flush is defined. The research study discussed in this thesis provides the outcomes of a comprehensive analysis of the influence of rainfall and catchment characteristics on first flush behaviour in residential catchments. Two sets of first flush indicators are introduced in this study. These indicators were selected such that they are representative in explaining, in a systematic manner, the characteristics associated with first flush. Stormwater samples and rainfall-runoff data were collected and recorded at stormwater monitoring stations established at three urban catchments at Coomera Waters, Gold Coast, Australia. In addition, historical data were used to support the data analysis. Three water quality parameters were analysed, namely, total suspended solids (TSS), total phosphorus (TP) and total nitrogen (TN). The data analyses were primarily undertaken using the multi-criteria decision making methods PROMETHEE and GAIA. Based on the data obtained, the pollutant load distribution curve (LV) was determined for the individual rainfall events and pollutant types. Accordingly, two sets of first flush indicators were derived from the curve, namely, the cumulative load wash-off from the beginning of the event for every 10% of runoff volume (interval first flush indicators, LV) and the actual pollutant load wash-off during each 10% increment in runoff volume (section first flush indicators, P). First flush behaviour showed significant variation with pollutant type. TSS and TP showed consistent first flush behaviour. However, the dissolved fraction of TN showed significant differences to TSS and TP first flush, while particulate TN showed similarities. Wash-off of TSS, TP and particulate TN during the first 10% of the runoff volume showed no influence from the corresponding rainfall intensity. This was attributed to the wash-off of weakly adhered solids on the catchment surface, referred to as the "short term pollutants" or "weakly adhered solids" load. However, wash-off after 10% of the runoff volume showed dependency on the rainfall intensity. This is attributed to the wash-off of strongly adhered solids being exposed once the weakly adhered solids diminish. The wash-off process was also found to depend on rainfall depth in the latter part of the event, as the strongly adhered solids are loosened by the impact of rainfall earlier in the event. Events with high-intensity rainfall bursts after 70% of the runoff volume did not demonstrate first flush behaviour. This suggests that rainfall pattern plays a critical role in the occurrence of first flush. The rainfall intensity (relative to the rest of the event) that produces 10% to 20% of the runoff volume plays an important role in defining the magnitude of the first flush. Events can demonstrate a high magnitude first flush when the rainfall intensity occurring between 10% and 20% of the runoff volume is comparatively high, while low rainfall intensities during this period produce a low magnitude first flush. For events with first flush, the phenomenon is clearly visible up to 40% of the runoff volume.
This contradicts the common definition that first flush only exists if, for example, 80% of the pollutant mass is transported in the first 30% of the runoff volume. First flush behaviour for TN is different compared to TSS and TP. Apart from rainfall characteristics, the composition and availability of TN on the catchment also play an important role in first flush. The analysis confirmed that events with low rainfall intensity can produce a high magnitude first flush for the dissolved fraction of TN, while high rainfall intensity produces a low dissolved TN first flush. This is attributed to the source-limiting behaviour of dissolved TN wash-off, where there is high wash-off during the initial part of a rainfall event irrespective of the intensity. However, for particulate TN, the influence of rainfall intensity on first flush characteristics is similar to TSS and TP. The data analysis also confirmed that first flush can occur as a high magnitude first flush or a low magnitude first flush, or not occur at all. Investigation of the influence of catchment characteristics on first flush found that the key factors influencing the phenomenon are the location of the pollutant source, the spatial distribution of pervious and impervious surfaces in the catchment, the drainage network layout and the slope of the catchment. This confirms that the first flush phenomenon cannot be evaluated based on a single or limited set of parameters, as a number of catchment characteristics should be taken into account. Catchments where the pollutant source is located close to the outlet, with a high fraction of road surfaces, short travel times to the outlet and steep slopes, can produce a high wash-off load during the first 50% of the runoff volume. Rainfall characteristics have a comparatively dominant impact on the wash-off process compared to catchment characteristics. In addition, pollutant characteristics should also be taken into account in designing stormwater treatment systems due to the different wash-off behaviours. The analysis outcomes confirmed that there is a high TSS load during the first 20% of the runoff volume, followed by TN, which can extend up to 30% of the runoff volume. In contrast, a high TP load can exist during the initial and final parts of a rainfall event. This is related to the composition of TP available for wash-off.
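The two indicator sets described above (cumulative load at each 10% of runoff volume, and the incremental load within each 10% slice) can be computed from within-event runoff and load series. A minimal numeric sketch with invented series (variable and function names are illustrative):

```python
# Sketch: interval (LV) and section (P) first flush indicators from a
# hypothetical within-event runoff/pollutant series.
import numpy as np

def first_flush_indicators(volume, load):
    """volume, load: time-ordered per-step runoff volume and pollutant load."""
    cum_v = np.cumsum(volume) / np.sum(volume)   # cumulative runoff fraction
    cum_l = np.cumsum(load) / np.sum(load)       # cumulative load fraction
    deciles = np.arange(0.1, 1.01, 0.1)
    LV = np.interp(deciles, cum_v, cum_l)        # cumulative load at each 10% of volume
    P = np.diff(np.concatenate(([0.0], LV)))     # load washed off within each 10% slice
    return LV, P

LV, P = first_flush_indicators(
    volume=np.array([2.0, 5.0, 9.0, 7.0, 4.0, 2.0]),
    load=np.array([1.2, 2.5, 2.0, 1.0, 0.5, 0.3]),
)
print(np.round(LV, 2))  # LV above the 1:1 line early in the event indicates first flush
```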
Abstract:
Background Heat-related impacts may have greater public health implications as climate change continues. It is important to appropriately characterize the relationship between heatwaves and health outcomes. However, it is unclear whether a case-crossover design can be effectively used to assess event- or episode-related health effects. This study examined the association between exposure to heatwaves and mortality and emergency hospital admissions (EHAs) from non-external causes in Brisbane, Australia, using both case-crossover and time series analysis approaches. Methods Poisson generalised additive model (GAM) and time-stratified case-crossover analyses were used to assess the short-term impact of heatwaves on mortality and EHAs, adjusting for air pollution, day of the week, and season. Results Heatwaves exhibited a significant impact on mortality and EHAs. For the time-stratified case-crossover analysis, odds ratios of mortality and EHAs during heatwaves were 1.62 (95% confidence interval (CI): 1.36–1.94) and 1.22 (95% CI: 1.14–1.30) at lag 1, respectively. Time series GAM models gave similar results. Relative risks of mortality and EHAs ranged from 1.72 (95% CI: 1.40–2.11) to 1.81 (95% CI: 1.56–2.10) and from 1.14 (95% CI: 1.06–1.23) to 1.28 (95% CI: 1.21–1.36) at lag 1, respectively. The risk estimates gradually attenuated after a lag of one day for both the case-crossover and time series analyses. Conclusions The risk estimates from both case-crossover and time series models were consistent and comparable. This finding may have implications for future research on the assessment of event- or episode-related (e.g., heatwave) health effects.
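The time-series arm of such an analysis can be sketched with a Poisson regression of daily counts on a heatwave indicator, approximating the GAM's smooth seasonal term with parametric terms. The statsmodels formula interface is real, but the data frame and column names (deaths, heatwave, pm10, dow, month) are hypothetical:

```python
# Sketch: Poisson regression of daily deaths on a heatwave indicator,
# adjusting for air pollution, day of week and season (month dummies stand in
# for the smooth terms a GAM would use). Data are simulated, not the study's.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 730
df = pd.DataFrame({
    "deaths":   rng.poisson(20, n),
    "heatwave": rng.integers(0, 2, n),
    "pm10":     rng.normal(25, 8, n),
    "dow":      np.arange(n) % 7,
    "month":    (np.arange(n) // 30) % 12,
})

model = smf.glm("deaths ~ heatwave + pm10 + C(dow) + C(month)",
                data=df, family=sm.families.Poisson()).fit()
print(np.exp(model.params["heatwave"]))  # rate ratio for heatwave days
```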
Abstract:
Background The association between temperature and mortality has been examined mainly in North America and Europe. However, less evidence is available in developing countries, especially in Thailand. In this study, we examined the relationship between temperature and mortality in Chiang Mai city, Thailand, during 1999–2008. Methods A time series model was used to examine the effects of temperature on cause-specific mortality (non-external, cardiopulmonary, cardiovascular, and respiratory) and age-specific non-external mortality (≤64, 65–74, 75–84, and ≥85 years), while controlling for relative humidity, air pollution, day of the week, season and long-term trend. We used a distributed lag non-linear model to examine the delayed effects of temperature on mortality up to 21 days. Results We found non-linear effects of temperature on all mortality types and age groups. Both hot and cold temperatures resulted in immediate increases in all mortality types and age groups. Generally, the hot effects on all mortality types and age groups were short-term, while the cold effects lasted longer. The relative risk of non-external mortality associated with cold temperature (19.35°C, 1st percentile of temperature) relative to 24.7°C (25th percentile of temperature) was 1.29 (95% confidence interval (CI): 1.16, 1.44) for lags 0–21. The relative risk of non-external mortality associated with high temperature (31.7°C, 99th percentile of temperature) relative to 28°C (75th percentile of temperature) was 1.11 (95% CI: 1.00, 1.24) for lags 0–21. Conclusion This study indicates that exposure to both hot and cold temperatures was related to increased mortality. Both cold and hot effects occurred immediately, but cold effects lasted longer than hot effects. This study provides useful data for policy makers to better prepare local responses to manage the impact of hot and cold temperatures on population health.
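The distributed-lag idea can be illustrated crudely by regressing daily deaths on lagged temperature terms. The study's actual model used a cross-basis of non-linear temperature and lag functions (the dlnm approach); the simplified linear sketch below, on simulated data, only shows how lagged exposures enter the regression:

```python
# Crude distributed-lag sketch: Poisson regression of deaths on temperature at
# lags 0-21 days. A real DLNM uses spline cross-bases for both the temperature
# and lag dimensions; this linear version is illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1500
df = pd.DataFrame({"temp": rng.normal(27, 3, n), "deaths": rng.poisson(15, n)})

max_lag = 21
for lag in range(max_lag + 1):
    df[f"temp_lag{lag}"] = df["temp"].shift(lag)   # exposure lag days earlier
df = df.dropna()

X = sm.add_constant(df[[f"temp_lag{lag}" for lag in range(max_lag + 1)]])
fit = sm.GLM(df["deaths"], X, family=sm.families.Poisson()).fit()
# Cumulative 0-21 day effect of a 1 degree C increase, as a rate ratio:
print(np.exp(fit.params.drop("const").sum()))
```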
Abstract:
Purpose Commencing selected workouts with low muscle glycogen availability augments several markers of training adaptation compared with undertaking the same sessions with normal glycogen content. However, low glycogen availability reduces the capacity to perform high-intensity (>85% of peak aerobic power (V̇O2peak)) endurance exercise. We determined whether a low dose of caffeine could partially rescue the reduction in maximal self-selected power output observed when individuals commenced high-intensity interval training with low (LOW) compared with normal (NORM) glycogen availability. Methods Twelve endurance-trained cyclists/triathletes performed four experimental trials using a double-blind Latin square design. Muscle glycogen content was manipulated via exercise–diet interventions so that two experimental trials were commenced with LOW and two with NORM muscle glycogen availability. Sixty minutes before an experimental trial, subjects ingested a capsule containing anhydrous caffeine (CAFF, 3 mg·kg⁻¹ body mass) or placebo (PLBO). Instantaneous power output was measured throughout high-intensity interval training (8 × 5-min bouts at maximum self-selected intensity with 1-min recovery). Results There were significant main effects for both pre-exercise glycogen content and caffeine ingestion on power output. LOW reduced power output by approximately 8% compared with NORM (P < 0.01), whereas caffeine increased power output by 2.8% and 3.5% for NORM and LOW, respectively (P < 0.01). Conclusion We conclude that caffeine enhanced power output independently of muscle glycogen concentration but could not fully restore power output to levels commensurate with those attained when subjects commenced exercise with normal glycogen availability. However, the reported increase in power output does provide a likely performance benefit and may provide a means to further enhance the already augmented training response observed when selected sessions are commenced with reduced muscle glycogen availability. It has long been known that endurance training induces a multitude of metabolic and morphological adaptations that improve the resistance of the trained musculature to fatigue and enhance endurance capacity and/or exercise performance (13). Accumulating evidence now suggests that many of these adaptations can be modified by nutrient availability (9–11,21). Growing evidence suggests that training with reduced muscle glycogen using a "train twice every second day" rather than a more traditional "train once daily" approach can enhance the acute training response (29) and markers representative of endurance training adaptation after short-term (3–10 wk) training interventions (8,16,30). Of note is that the superior training adaptation in these previous studies was attained despite a reduction in maximal self-selected power output (16,30). The most obvious factor underlying the reduced intensity during a second training bout is the reduction in muscle glycogen availability. However, there is also the possibility that other metabolic and/or neural factors may be responsible for the power drop-off observed when two exercise bouts are performed in close proximity. Regardless of the precise mechanism(s), there remains the intriguing possibility that the magnitude of training adaptation previously reported in the face of a reduced training intensity (Hulston et al. (16) and Yeo et al.) might be further augmented, and/or other aspects of the training stimulus better preserved, if power output were not compromised.
Caffeine ingestion is a possible strategy that might "rescue" the aforementioned reduction in power output that occurs when individuals commence high-intensity interval training (HIT) with low compared with normal glycogen availability. Recent evidence suggests that, at least in endurance-based events, the maximal benefits of caffeine are seen at small to moderate doses (2–3 mg·kg⁻¹ body mass (BM)) (for reviews, see Refs. (3,24)). Accordingly, in this study, we aimed to determine the effect of a low dose of caffeine (3 mg·kg⁻¹ BM) on maximal self-selected power output during HIT commenced with either normal (NORM) or low (LOW) muscle glycogen availability. We hypothesized that even under conditions of low glycogen availability, caffeine would increase maximal self-selected power output and thereby partially rescue the reduction in training intensity observed when individuals commence HIT with low glycogen availability.
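The reported main effects imply, in rough arithmetic, why caffeine only partially rescues power output under low glycogen. The baseline wattage below is illustrative, not study data; only the ~8% and 3.5% figures come from the abstract:

```python
# Illustrative arithmetic only: apply the reported ~8% LOW decrement and the
# 3.5% caffeine increment in LOW to a hypothetical 300 W normal-glycogen baseline.
norm = 300.0                  # hypothetical mean power with normal glycogen (W)
low = norm * (1 - 0.08)       # ~8% reduction with low glycogen -> 276 W
low_caff = low * (1 + 0.035)  # +3.5% with caffeine in LOW -> ~285.7 W
print(low, low_caff, low_caff < norm)  # caffeine helps but does not restore NORM
```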
Abstract:
Traditionally, infectious diseases and under-nutrition have been considered the major health problems in Sri Lanka, with little attention paid to obesity and associated non-communicable diseases (NCDs). However, the recent Sri Lanka Diabetes and Cardiovascular Study (SLDCS) reported epidemic levels of obesity, diabetes and metabolic syndrome. Moreover, obesity-associated NCDs are the leading cause of death in Sri Lanka and there is an exponential increase in hospitalization due to NCDs, adversely affecting the development of the country. Despite Sri Lanka having a very high prevalence of NCDs and associated mortality, little is known about the causative factors for this burden. It is widely believed that the global NCD epidemic is associated with recent lifestyle changes, especially dietary factors. In the absence of sufficient data on dietary habits in Sri Lanka, successful interventions to manage these serious health issues would not be possible. In view of the current situation, this dietary survey was undertaken to assess the intakes of energy, macro-nutrients and selected other nutrients with respect to socio-demographic characteristics and the nutritional status of Sri Lankan adults, focusing especially on obesity. Another aim of this study was to develop and validate a culturally specific food frequency questionnaire (FFQ) to assess dietary risk factors of NCDs in Sri Lankan adults. Data were collected from a subset of the national SLDCS using a multi-stage, stratified, random sampling procedure (n=500). However, data collection in the SLDCS was affected by the prevailing civil war, which resulted in no data being collected from the Northern and Eastern provinces. To obtain a nationally representative sample, additional subjects (n=100) were later recruited from the two provinces using similar selection criteria. Ethical approval for this study was obtained from the Ethical Review Committee, Faculty of Medicine, University of Colombo, Sri Lanka, and informed consent was obtained from the subjects before data were collected. Dietary data were obtained using the 24-h Dietary Recall (24HDR) method, in which subjects were asked to recall all foods and beverages consumed over the previous 24-hour period and were probed for the types of foods and food preparation methods. For the FFQ validation study, a 7-day weighed diet record (7-d WDR) was used as the reference method. All foods recorded in the 24HDR were converted into grams, and intakes of energy and nutrients were then analysed using NutriSurvey 2007 (EBISpro, Germany), modified for Sri Lankan food recipes. Socio-demographic details and body weight perception were collected using an interviewer-administered questionnaire. BMI was calculated, and overweight (BMI ≥23 kg·m⁻²), obesity (BMI ≥25 kg·m⁻²) and abdominal obesity (men: WC ≥90 cm; women: WC ≥80 cm) were categorized according to Asia-Pacific anthropometric cut-offs. SPSS v16 for Windows and Minitab v10 were used for statistical analysis. From a total of 600 eligible subjects, 491 (81.8%) participated, of whom 34.5% (n=169) were males. Subjects were well distributed among different socio-economic parameters. A total of 312 different food items were recorded, and nutritionists grouped similar food items, resulting in a total of 178 items. After performing step-wise multiple regression, 93 foods explained 90% of the variance for total energy intake, carbohydrates, protein, total fat and dietary fibre. Finally, 90 food items and 12 photographs were selected.
Seventy-seven subjects (response rate = 65%) completed the FFQ and the 7-d WDR. Estimated mean (SD) energy intake from the FFQ (1794±398 kcal) and the 7-d WDR (1698±333 kcal) differed significantly (P<0.001), due to a significant overestimation of carbohydrate (~10 g/d, P<0.001) and, to some extent, fat (~5 g/d, NS). Significant positive correlations were found between the FFQ and the 7-d WDR for energy (r = 0.39), carbohydrate (r = 0.47), protein (r = 0.26), fat (r = 0.17) and dietary fibre (r = 0.32). Bland-Altman graphs indicated fairly good agreement between the methods, with no relationship between bias and average intake for each nutrient examined. The findings from the nutrition survey showed that, on average, Sri Lankan adults consumed over 14 portions of starch per day; moreover, males consumed 5 more portions of cereal than females. Sri Lankan adults consumed on average 3.56 portions of added sugars per day. Mean daily intakes of fruit (0.43 portions) and vegetables (1.73 portions) were well below the minimum dietary recommendations (fruits 2 portions/d; vegetables 3 portions/d); the total fruit and vegetable intake was 2.16 portions/d. Daily consumption of meat or alternatives was 1.75 portions, and the sum of meat and pulses was 2.78 portions/d. Starchy foods were consumed by all participants, and over 88% met the minimum daily recommendations. Importantly, nearly 70% of adults exceeded the maximum daily recommendation for starch (11 portions/d), and a considerable proportion, particularly men, consumed larger numbers of starch servings daily. More than 12% of men consumed over 25 starch servings/d. In contrast to their starch consumption, participants reported very low intakes of other food groups. Only 11.6%, 2.1% and 3.5% of adults consumed the minimum daily recommended servings of vegetables, fruits, and fruits and vegetables combined, respectively. Six out of ten adult Sri Lankans sampled did not consume any fruit. Milk and dairy consumption was extremely low; over a third of the population did not consume any dairy products, and less than 1% of adults consumed 2 portions of dairy/d. A quarter of Sri Lankans did not report consumption of meat and pulses. Regarding protein consumption, 36.2% attained the minimum Sri Lankan recommendation for protein, and significantly more men than women achieved the recommendation of ≥3 servings of meat or alternatives daily (men 42.6%, women 32.8%; P<0.05). Over 70% of energy was derived from carbohydrates (male: 72.8±6.4%, female: 73.9±6.7%), followed by fat (male: 19.9±6.1%, female: 18.5±5.7%) and protein (male: 10.6±2.1%, female: 10.9±5.6%). The average intake of dietary fibre was 21.3 g/day for males and 16.3 g/day for females. There were significant differences in nutritional intake related to ethnicity, area of residence, education level and BMI category. Similarly, dietary diversity was significantly associated with several socio-economic parameters among Sri Lankan adults. Adults with BMI ≥25 kg·m⁻² and abdominally obese Sri Lankan adults had the highest diet diversity values. Age-adjusted prevalences (95% confidence interval) of overweight, obesity and abdominal obesity among Sri Lankan adults were 17.1% (13.8-20.7), 28.8% (24.8-33.1) and 30.8% (26.8-35.2), respectively. Men, compared with women, were less overweight (14.2% (9.4-20.5) versus 18.5% (14.4-23.3), P = 0.03), less obese (21.0% (14.9-27.7) versus 32.7% (27.6-38.2), P < 0.05) and less abdominally obese (11.9% (7.4-17.8) versus 40.6% (35.1-46.2), P < 0.05).
Although the prevalence of obesity has reached epidemic levels, body weight misperception was common among Sri Lankan adults. Two-thirds of overweight males and 44.7% of overweight females considered themselves to be "about right weight". Over one-third of both male and female obese subjects perceived themselves as "about right weight" or "underweight". Nearly 32% of centrally obese men and women perceived that their waist circumference was about right. Of those who perceived themselves as overweight or very overweight (n = 154), only 63.6% tried to lose body weight (n = 98), and a quarter sought advice from professionals (n = 39). A number of important conclusions can be drawn from this research project. Firstly, the newly developed FFQ is an acceptable tool for assessing the nutrient intake of Sri Lankans and will assist proper categorization of individuals by dietary exposure. Secondly, a substantial proportion of the Sri Lankan population does not consume a varied and balanced diet, which is suggestive of a close association between the nutrition-related NCDs in the country and unhealthy eating habits. Moreover, dietary diversity is positively associated with several socio-demographic characteristics and obesity among Sri Lankan adults. Lastly, although obesity is a major health issue among Sri Lankan adults, body weight misperception was common among underweight, healthy weight, overweight and obese adults in Sri Lanka: over two-thirds of overweight and one-third of obese Sri Lankan adults believed that they were in the "right weight" or "underweight" categories.
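The FFQ validation statistics reported above (Pearson correlations and Bland-Altman agreement) can be sketched for one nutrient with made-up paired intakes; the values below are invented, not the study's:

```python
# Sketch: FFQ-vs-reference validation statistics for one nutrient, using
# made-up paired daily energy intakes (kcal/d).
import numpy as np
from scipy import stats

ffq = np.array([1850.0, 1620.0, 2100.0, 1750.0, 1900.0, 1680.0])
wdr = np.array([1760.0, 1580.0, 1950.0, 1700.0, 1820.0, 1650.0])

r, p = stats.pearsonr(ffq, wdr)                 # method agreement (correlation)
diff = ffq - wdr
bias = diff.mean()                              # Bland-Altman mean bias
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)      # 95% limits of agreement
print(f"r = {r:.2f} (p = {p:.3f}); bias = {bias:.0f} kcal/d; 95% LoA = {loa}")
```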
Abstract:
Objective: To evaluate the economic burden of malignant neoplasms in Shandong province in order to provide scientific evidence for policy-making. Methods: The main data sources for this study were the third sampling survey of causes of death in 2006 and the cancer prevalence survey in 2007 in Shandong province. The direct medical cost was calculated based on the survey data. The indirect costs due to mortality and morbidity were estimated with the human capital approach, based on disability-adjusted life year data derived from the two surveys and gross domestic product (GDP) data. The total economic burden was the sum of the direct medical cost and the indirect costs. The uncertainty analysis was conducted according to the methodology of the global burden of disease study. Results: The estimated total cost of cancer in Shandong province in 2006 was 18,057 million Yuan RMB (95% confidence interval: 16,817-19,393 million), which accounted for 0.83% of the total GDP. The direct medical cost, indirect mortality cost and indirect morbidity cost accounted for 17.28%, 78.53% and 4.20% of the total economic burden of malignant neoplasms, respectively. Liver, lung and stomach cancer were the three tumors with the heaviest economic burden, together accounting for more than one half (57.83%) of the total economic burden of all cancers. The uncertainty of the total burden estimate was around ±7%, mainly derived from the uncertainty of the indirect economic burden. Conclusion: The influence of cancers on the social economy is dominated by the loss of productivity, especially the productivity loss due to premature death. Liver, lung and stomach cancer are the major cancers for disease control and prevention in Shandong province.
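The component costs follow directly from the reported shares; a quick arithmetic check of the decomposition:

```python
# Arithmetic check of the reported burden decomposition (million Yuan RMB).
total = 18_057
shares = {"direct medical": 0.1728, "indirect mortality": 0.7853, "indirect morbidity": 0.0420}
for name, s in shares.items():
    print(f"{name}: {total * s:,.0f} million")   # ~3,120 / ~14,180 / ~758
print(f"share sum: {sum(shares.values()):.4f}")  # ~1.0001; rounding in the reported %s
```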
Abstract:
Background Australian Indigenous children are the only population worldwide to receive the 7-valent pneumococcal conjugate vaccine (7vPCV) at 2, 4, and 6 months of age and the 23-valent pneumococcal polysaccharide vaccine (23vPPV) at 18 months of age. We evaluated this program's effectiveness in reducing the risk of hospitalization for acute lower respiratory tract infection (ALRI) in Northern Territory (NT) Indigenous children aged 5-23 months. Methods We conducted a retrospective cohort study involving all NT Indigenous children born from 1 April 2000 through 31 October 2004. Person-time at risk after 0, 1, 2, and 3 doses of 7vPCV and after 0 and 1 dose of 23vPPV, and the number of ALRI episodes following each dose, were used to calculate dose-specific rates of ALRI for children 5-23 months of age. Rates were compared using Cox proportional hazards models, with the number of doses of each vaccine serving as time-dependent covariates. Results There were 5482 children and 8315 child-years at risk, with 2174 episodes of ALRI requiring hospitalization (overall incidence, 261 episodes per 1000 child-years at risk). Elevated risk of ALRI requiring hospitalization was observed after each dose of the 7vPCV vaccine, compared with that for children who received no doses, and an even greater elevation in risk was observed after each dose of the 23vPPV (adjusted hazard ratio [HR] vs. no dose, 1.39; 95% confidence interval [CI], 1.12-1.71; P = .002). Risk was highest among children vaccinated with the 23vPPV who had received < 3 doses of the 7vPCV (adjusted HR, 1.81; 95% CI, 1.32-2.48). Conclusions Our results suggest an increased risk of ALRI requiring hospitalization after pneumococcal vaccination, particularly after receipt of the 23vPPV booster. The use of the 23vPPV booster should be reevaluated.
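A Cox model with dose counts as time-dependent covariates can be sketched with the lifelines library's CoxTimeVaryingFitter on long-format data (one row per child per interval between dose changes). The column names and toy values below are illustrative, not the study's data:

```python
# Sketch: Cox regression with vaccine dose counts as time-dependent covariates,
# on a toy long-format data set. Real analyses use thousands of intervals.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

long_df = pd.DataFrame({
    "id":           [1, 1, 2, 2, 3, 4, 4, 5, 6],
    "start":        [0, 120, 0, 150, 0, 0, 100, 0, 0],     # age in days at interval start
    "stop":         [120, 400, 150, 540, 420, 100, 360, 500, 200],
    "doses_7vpcv":  [0, 2, 1, 3, 2, 0, 2, 3, 1],           # doses received so far
    "doses_23vppv": [0, 0, 0, 1, 0, 0, 1, 0, 0],
    "event":        [0, 1, 0, 1, 0, 0, 1, 0, 1],           # 1 = ALRI hospitalization
})

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()  # exp(coef) ~ hazard ratio per additional dose
```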
Abstract:
Background: Mitomycin C and etoposide have both demonstrated activity against gastric carcinoma. Etoposide is a topoisomerase II inhibitor with evidence for phase-specific and schedule-dependent activity. Patients and methods: Twenty-eight consecutive patients with advanced upper gastrointestinal adenocarcinoma were treated with intravenous (i.v.) bolus mitomycin C 6 mg/m² on day 1 every 21 days, to a maximum of four courses. Oral etoposide capsules 50 mg b.i.d. (or 35 mg b.i.d. liquid) were administered on days 1 to 10, extending to 14 days in subsequent courses if the absolute neutrophil count was >1.5 × 10⁹/l on day 14 of the first course, for up to six courses. Results: Twenty-six patients were assessed for response, of whom 12 had measurable disease and 14 evaluable disease. Four patients had a documented response (one complete remission, three partial remissions), giving an objective response rate of 15% (95% confidence interval (95% CI) 4%-35%). Eight patients had stable disease and 14 progressive disease. The median survival was six months. The schedule was well tolerated, with no treatment-related deaths. Nine patients experienced leucopenia (seven grade II and two grade III). Nausea and vomiting (eight grade II, one grade III), fatigue (eight grade II, two grade III) and anaemia (seven grade II, two grade III) were the predominant toxicities. Conclusion: This out-patient schedule is well tolerated and shows modest activity in the treatment of advanced upper gastrointestinal adenocarcinoma. Further studies using protracted schedules of etoposide, both orally and as infusional treatment, should be developed.
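The quoted response rate and its 95% CI are reproducible from 4 responders out of 26 assessable patients with an exact (Clopper-Pearson) interval; a quick check using statsmodels:

```python
# Check of the reported objective response rate: 4/26 = ~15% with an exact
# (Clopper-Pearson) 95% CI of roughly 4%-35%.
from statsmodels.stats.proportion import proportion_confint

lo, hi = proportion_confint(count=4, nobs=26, alpha=0.05, method="beta")
print(f"rate = {4/26:.1%}, 95% CI = ({lo:.1%}, {hi:.1%})")  # ~15.4% (4.4%, 34.9%)
```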
Abstract:
Purpose: The objective of the study was to assess the bioequivalence of two tablet formulations of capecitabine and to explore the effect of age, gender, body surface area and creatinine clearance on the systemic exposure to capecitabine and its metabolites. Methods: The study was designed as an open, randomized, two-way crossover trial. A single oral dose of 2000 mg capecitabine was administered on two separate days to 25 patients with solid tumors. On one day, the patients received four 500-mg tablets of formulation B (test formulation) and on the other day, four 500-mg tablets of formulation A (reference formulation). The washout period between the two administrations was between 2 and 8 days. After each administration, serial blood and urine samples were collected for up to 12 and 24 h, respectively. Unchanged capecitabine and its metabolites were determined in plasma using LC/MS-MS and in urine by NMRS. Results: Based on the primary pharmacokinetic parameter, the AUC(0-∞) of 5'-DFUR, equivalence was concluded for the two formulations, since the 90% confidence interval for the estimate of formulation B relative to formulation A (97% to 107%) was within the acceptance region of 80% to 125%. There was no clinically significant difference between the tmax values for the two formulations (median 2.1 versus 2.0 h). The estimate for Cmax was 111% for formulation B compared to formulation A, and the 90% confidence interval (95% to 136%) was within the reference region of 70% to 143%. Overall, these results suggest no relevant difference between the two formulations regarding the extent to which 5'-DFUR reached the systemic circulation and the rate at which 5'-DFUR appeared in the systemic circulation. The overall urinary excretions were 86.0% and 86.5% of the dose, respectively, and the proportion recovered as each metabolite was similar for the two formulations. The majority of the dose was excreted as FBAL (61.5% and 60.3%), with all other chemical species making a minor contribution. Univariate and multivariate regression analyses to explore the influence of age, gender, body surface area and creatinine clearance on the log-transformed pharmacokinetic parameters AUC(0-∞) and Cmax of capecitabine and its metabolites revealed no clinically significant effects. The only statistically significant results were obtained for AUC(0-∞) and Cmax of the intact drug and for Cmax of FBAL, which were higher in females than in males. Conclusion: The bioavailability of 5'-DFUR in the systemic circulation was practically identical after administration of the two tablet formulations. Therefore, the two formulations can be regarded as bioequivalent. The variables investigated (age, gender, body surface area and creatinine clearance) had no clinically significant effect on the pharmacokinetics of capecitabine or its metabolites.
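The bioequivalence decision rule used above can be sketched as a 90% CI for the test/reference geometric-mean ratio of log-transformed AUC, compared against the 80% to 125% acceptance region. The paired values below are made up, and a full 2 × 2 crossover analysis would also model period and sequence effects:

```python
# Simplified bioequivalence sketch: 90% CI for the geometric mean ratio of AUC
# (test vs. reference) from paired log-differences, vs. the 0.80-1.25 region.
# Values are invented; a proper crossover ANOVA adds period/sequence terms.
import numpy as np
from scipy import stats

auc_test = np.array([21.0, 18.5, 25.2, 19.8, 22.4, 20.1])   # formulation B
auc_ref  = np.array([20.2, 18.0, 24.0, 20.5, 21.8, 19.5])   # formulation A

d = np.log(auc_test) - np.log(auc_ref)      # within-subject log ratios
n = len(d)
se = d.std(ddof=1) / np.sqrt(n)
t90 = stats.t.ppf(0.95, df=n - 1)           # two-sided 90% CI
ci = np.exp(d.mean() - t90 * se), np.exp(d.mean() + t90 * se)
print(ci, 0.80 <= ci[0] and ci[1] <= 1.25)  # bioequivalent if CI within region
```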