223 results for Cut knives
Abstract:
In this paper, we present an unsupervised graph-cut-based object segmentation method using 3D information provided by Structure from Motion (SFM), called GrabCutSFM. Rather than addressing the segmentation problem with a trained model or human intervention, our approach aims to achieve meaningful segmentation autonomously, with direct application to vision-based robotics. Generally, object (foreground) and background have discriminative geometric properties in 3D space. By exploiting the 3D information from multiple views, our proposed method can segment potential objects correctly and automatically, in contrast to conventional unsupervised segmentation using only 2D visual cues. Experiments with real video data collected from indoor and outdoor environments verify the proposed approach.
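As general background (this is the generic s-t minimum-cut formulation that graph-cut segmentation methods build on, not the authors' SFM-based pipeline), each pixel is linked to a "foreground" terminal and a "background" terminal by unary weights and to its neighbours by smoothness weights; the minimum cut then yields the labelling. A minimal sketch with invented weights, using a plain Edmonds-Karp max-flow:

```python
from collections import deque

def min_cut_source_side(cap, s, t):
    """Edmonds-Karp max-flow; returns the set of nodes left on the
    source side of the minimum cut."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]

    def bfs():
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        return parent

    while True:
        parent = bfs()
        if parent[t] == -1:          # no augmenting path left
            break
        bottleneck, v = float("inf"), t
        while v != s:                # find bottleneck capacity
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                # push flow along the path
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
    parent = bfs()
    return {v for v in range(n) if parent[v] != -1}

# Node 0 = foreground terminal, node 3 = background terminal;
# nodes 1 and 2 are "pixels". All weights here are invented.
cap = [[0] * 4 for _ in range(4)]
cap[0][1], cap[1][3] = 9, 1   # pixel 1: strong foreground evidence
cap[0][2], cap[2][3] = 2, 8   # pixel 2: strong background evidence
cap[1][2] = cap[2][1] = 3     # pairwise smoothness term
fg = min_cut_source_side(cap, 0, 3)
print(sorted(fg - {0}))       # pixels labelled foreground -> [1]
```

Pixels that remain connected to the source after the cut take the foreground label; GrabCut-style methods iterate this step with updated appearance (or, here, geometric) models.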
Abstract:
Depression in childhood or adolescence is associated with increased rates of depression in adulthood. Does this justify efforts to detect (and treat) those with symptoms of depression in early childhood or adolescence? The aim of this study was to determine how well symptoms of anxiety/depression (A-D) in early childhood and adolescence predict adult mental health. The study sample is taken from a population-based prospective birth cohort study. Of the 8556 mothers initially approached to participate, 8458 agreed, of whom 7223 gave birth to a live singleton baby. Children were screened using modified Child Behaviour Checklist (CBCL) scales for internalizing and total problems (T-P) at age 5, and the CBCL and Youth Self Report (YSR) A-D subscale and T-P scale at age 14. At age 21, a sub-sample of 2563 young adults in this cohort were administered the CIDI-Auto. Results indicated that screening at age 5 would detect few later cases of significant mental ill-health. Using a cut-point of 20% for internalizing at child age 5 years, the CBCL had sensitivities of only 25% and 18% for major depression and anxiety disorders at 21 years, respectively. At age 14, the YSR generally performed a little better than the CBCL as a screening instrument, but neither performed at a satisfactory level. Of the children categorised as having YSR A-D at 14 years, 30% and 37% met DSM-IV criteria for major depression and anxiety disorders, respectively, at age 21. Our findings challenge an existing movement encouraging the detection and treatment of those with symptoms of mental illness in early childhood.
Abstract:
There is increasing concern about the impact of employees’ alcohol and other drug (AOD) consumption on workplace safety, particularly within the construction industry. No known study has scientifically evaluated the relationship between the use of drugs and alcohol and safety impacts in construction, and there has been only limited adoption of nationally coordinated strategies, supported by employers and employees to render it socially unacceptable to arrive at a construction workplace with impaired judgment from AODs. This research aims to scientifically evaluate the use of AODs within the Australian construction industry in order to reduce the potential resulting safety and performance impacts and engender a cultural change in the workforce. Using the Alcohol Use Disorders Identification Test (AUDIT), the study will adopt both quantitative and qualitative methods to evaluate the extent of general AOD use in the industry. Results indicate that a proportion of the construction sector may be at risk of hazardous alcohol consumption. A total of 286 respondents (58%) scored above the cut-off score for risky alcohol use with 43 respondents (15%) scoring in the significantly ‘at risk’ category. Other drug use was also identified as a major issue that must be addressed. Results support the need for evidence-based, preventative educational initiatives that are tailored specifically to the construction industry.
Abstract:
Essential hypertensives display enhanced signal transduction through pertussis toxin-sensitive G proteins. The T allele of a C825T variant in exon 10 of the G protein β3 subunit gene (GNB3) induces formation of a splice variant (Gβ3-s) with enhanced activity. The T allele of GNB3 was shown recently to be associated with hypertension in unselected German patients (frequency=0.31 versus 0.25 in control). To confirm and extend this finding in a different setting, we performed an association study in Australian white hypertensives. This involved an extensively examined cohort of 110 hypertensives, each of whom was the offspring of 2 hypertensive parents, and 189 normotensives whose parents were both normotensive beyond age 50 years. Genotyping was performed by polymerase chain reaction and digestion with BseDI, which either cut (C allele) or did not cut (T allele) the 268-bp polymerase chain reaction product. T allele frequency in the hypertensive group was 0.43 compared with 0.25 in the normotensive group (χ2=22; P=0.00002; odds ratio=2.3; 95% CI=1.7 to 3.3). The T allele tracked with higher pretreatment blood pressure: diastolic=105±7, 109±16, and 128±28 mm Hg (mean±SD) for CC, CT, and TT, respectively (P=0.001 by 1-way ANOVA). Blood pressures were higher in female hypertensives with a T allele (P=0.006 for systolic and 0.0003 for diastolic by ANOVA) than they were in male hypertensives. In conclusion, the present study of a group with strong family history supports a role for a genetically determined, physiologically active splice variant of the G protein β3 subunit gene in the causation of essential hypertension.
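The reported association statistics can be reproduced from a 2x2 allele table. The counts below are reconstructed from the reported frequencies and sample sizes (an assumption on my part, which is why the confidence interval differs slightly from the published 1.7 to 3.3):

```python
from math import exp, log, sqrt

# Reconstructed allele counts (assumption): 110 hypertensives give
# 220 alleles at T frequency 0.43; 189 normotensives give 378 alleles
# at T frequency 0.25.
t_hyp, c_hyp = 95, 125    # ~0.43 * 220
t_norm, c_norm = 94, 284  # ~0.25 * 378

# Odds ratio for a T allele in hypertensives vs normotensives
odds_ratio = (t_hyp / c_hyp) / (t_norm / c_norm)

# Pearson chi-squared statistic for the 2x2 allele table
n = t_hyp + c_hyp + t_norm + c_norm
chi2 = (n * (t_hyp * c_norm - c_hyp * t_norm) ** 2 /
        ((t_hyp + c_hyp) * (t_norm + c_norm) *
         (t_hyp + t_norm) * (c_hyp + c_norm)))

# Approximate 95% CI for the OR via the log-odds standard error
se = sqrt(1 / t_hyp + 1 / c_hyp + 1 / t_norm + 1 / c_norm)
ci = (exp(log(odds_ratio) - 1.96 * se), exp(log(odds_ratio) + 1.96 * se))

print(round(odds_ratio, 1), round(chi2))  # ~2.3 and ~22, as reported
```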
Abstract:
Essential hypertension is a highly hereditable disorder in which genetic influences predominate over environmental factors. The molecular genetic profiles which predispose to essential hypertension are not known. In rats with genetic hypertension, there is some recent evidence pointing to linkage of renin gene alleles with blood pressure. The genes for renin and antithrombin III belong to a conserved synteny group which, in humans, spans the q21.3-32.3 region of chromosome 1 and, in rats, is linkage group X on chromosome 13. The present study examined the association of particular human renin gene (REN) and antithrombin III gene (AT3) polymorphisms with essential hypertension by comparing the frequency of specific alleles for each of these genes in 50 hypertensive offspring of hypertensive parents and 91 normotensive offspring of normotensive parents. In addition, linkage relationships were examined in hypertensive pedigrees with multiple affected individuals. Alleles of a REN HindIII restriction fragment length polymorphism (RFLP) were detected using a genomic clone, λHR5, to probe Southern blots of HindIII-cut leucocyte DNA, and those for an AT3 PstI RFLP were detected by the phATIII 113 complementary DNA probe. The frequencies of each REN allele in the hypertensive group were 0.76 and 0.24 compared with 0.74 and 0.26 in the normotensive group. For AT3, hypertensive allele frequencies were 0.49 and 0.51 compared with normotensive values of 0.54 and 0.46. These differences were not significant by χ2 analysis (P > 0.2).
Linkage analysis of a family (data from 16 family members, 10 of whom were hypertensive), informative for both markers, without an age-of-onset correction, and assuming dominant inheritance of hypertension, complete penetrance and a disease frequency of 20%, did not indicate linkage of REN with hypertension, but gave a positive, although not significant, logarithm of the odds for linkage score of 0.784 at a recombination fraction of 0 for AT3 linkage to hypertension. In conclusion, the present study could find no evidence for an association of a REN HindIII RFLP with essential hypertension, or for linkage of the locus defined by this RFLP in a family segregating for hypertension. In the case of an AT3 PstI RFLP, although association analysis was negative, linkage analysis suggested possible involvement (odds of 6:1 in favour) of a gene located near the 1q23 locus with hypertension in one informative family.
Abstract:
It has been reported that poor nutritional status, in the form of weight loss and resulting body mass index (BMI) changes, is an issue in people with Parkinson's disease (PWP). The symptoms resulting from Parkinson's disease (PD) and the side effects of PD medication have been implicated in the aetiology of nutritional decline. However, the evidence on which these claims are based is, on one hand, contradictory, and on the other, restricted primarily to otherwise healthy PWP. Despite the claims that PWP suffer from poor nutritional status, evidence is lacking to inform nutrition-related care for the management of malnutrition in PWP. The aims of this thesis were to better quantify the extent of poor nutritional status in PWP, determine the important factors differentiating the well-nourished from the malnourished and evaluate the effectiveness of an individualised nutrition intervention on nutritional status.

Phase DBS: Nutritional status in people with Parkinson's disease scheduled for deep-brain stimulation surgery
The pre-operative rate of malnutrition in a convenience sample of people with Parkinson's disease (PWP) scheduled for deep-brain stimulation (DBS) surgery was determined. Poorly controlled PD symptoms may result in a higher risk of malnutrition in this sub-group of PWP. Fifteen patients (11 male, median age 68.0 (42.0 – 78.0) years, median PD duration 6.75 (0.5 – 24.0) years) participated, and data were collected during hospital admission for the DBS surgery. The scored PG-SGA was used to assess nutritional status, anthropometric measures (weight, height, mid-arm circumference, waist circumference, body mass index (BMI)) were taken, and body composition was measured using bioelectrical impedance spectroscopy (BIS). Six (40%) of the participants were malnourished (SGA-B) while 53% reported significant weight loss following diagnosis. BMI was significantly different between SGA-A and SGA-B (25.6 vs 23.0 kg/m2, p<.05).
There were no differences in any other variables, including PG-SGA score and the presence of non-motor symptoms. The conclusion was that malnutrition in this group is higher than that in other studies reporting malnutrition in PWP, and it is under-recognised. As poorer surgical outcomes are associated with poorer pre-operative nutritional status in other surgeries, it might be beneficial to identify patients at nutritional risk prior to surgery so that appropriate nutrition interventions can be implemented.

Phase I: Nutritional status in community-dwelling adults with Parkinson's disease
The rate of malnutrition in community-dwelling adults (>18 years) with Parkinson's disease was determined. One hundred twenty-five PWP (74 male, median age 70.0 (35.0 – 92.0) years, median PD duration 6.0 (0.0 – 31.0) years) participated. The scored PG-SGA was used to assess nutritional status, and anthropometric measures (weight, height, mid-arm circumference (MAC), calf circumference, waist circumference, body mass index (BMI)) were taken. Nineteen (15%) of the participants were malnourished (SGA-B). All anthropometric indices were significantly different between SGA-A and SGA-B (BMI 25.9 vs 20.0 kg/m2; MAC 29.1 vs 25.5 cm; waist circumference 95.5 vs 82.5 cm; calf circumference 36.5 vs 32.5 cm; all p<.05). The PG-SGA score was also significantly higher in the malnourished (2 vs 8, p<.05). The nutrition impact symptoms which differentiated between well-nourished and malnourished were lack of appetite, constipation, diarrhoea, problems swallowing and feeling full quickly. This study concluded that malnutrition in community-dwelling PWP is higher than that documented in community-dwelling elderly (2 – 11%), yet is likely to be under-recognised. Nutrition impact symptoms play a role in reduced intake. Appropriate screening and referral processes should be established for early detection of those at risk.
Phase I: Nutrition assessment tools in people with Parkinson's disease
There are a number of validated and reliable nutrition screening and assessment tools available for use. None of these tools have been evaluated in PWP. In the sample described above, the use of the World Health Organisation (WHO) cut-off (≤18.5kg/m2), age-specific BMI cut-offs (≤18.5kg/m2 for under 65 years, ≤23.5kg/m2 for 65 years and older) and the revised Mini-Nutritional Assessment short form (MNA-SF) were evaluated as nutrition screening tools. The PG-SGA (including the SGA classification) and the MNA full form were evaluated as nutrition assessment tools using the SGA classification as the gold standard. For screening, the MNA-SF performed the best with sensitivity (Sn) of 94.7% and specificity (Sp) of 78.3%. For assessment, the PG-SGA with a cut-off score of 4 (Sn 100%, Sp 69.8%) performed better than the MNA (Sn 84.2%, Sp 87.7%). As the MNA has been recommended more for use as a nutrition screening tool, the MNA-SF might be more appropriate and take less time to complete. The PG-SGA might be useful to inform and monitor nutrition interventions.

Phase I: Predictors of poor nutritional status in people with Parkinson's disease
A number of assessments were conducted as part of the Phase I research, including those for the severity of PD motor symptoms, cognitive function, depression, anxiety, non-motor symptoms, constipation, freezing of gait and the ability to carry out activities of daily living. A higher score in all of these assessments indicates greater impairment. In addition, information about medical conditions, medications, age, age at PD diagnosis and living situation was collected. These were compared between those classified as SGA-A and as SGA-B. Regression analysis was used to identify which factors were predictive of malnutrition (SGA-B).
Differences between the groups included disease severity (4% more severe SGA-A vs 21% SGA-B, p<.05), activities of daily living score (13 SGA-A vs 18 SGA-B, p<.05), depressive symptom score (8 SGA-A vs 14 SGA-B, p<.05) and gastrointestinal symptoms (4 SGA-A vs 6 SGA-B, p<.05). Significant predictors of malnutrition according to SGA were age at diagnosis (OR 1.09, 95% CI 1.01 – 1.18), amount of dopaminergic medication per kg body weight (mg/kg) (OR 1.17, 95% CI 1.04 – 1.31), more severe motor symptoms (OR 1.10, 95% CI 1.02 – 1.19), less anxiety (OR 0.90, 95% CI 0.82 – 0.98) and more depressive symptoms (OR 1.23, 95% CI 1.07 – 1.41). Significant predictors of a higher PG-SGA score included living alone (β=0.14, 95% CI 0.01 – 0.26), more depressive symptoms (β=0.02, 95% CI 0.01 – 0.02) and more severe motor symptoms (β=0.01, 95% CI 0.01 – 0.02). More severe disease is associated with malnutrition, and this may be compounded by lack of social support.

Phase II: Nutrition intervention
Nineteen of the people identified in Phase I as requiring nutrition support were included in Phase II, in which a nutrition intervention was conducted. Nine participants were in the standard care group (SC), which received an information sheet only, and the other 10 participants were in the intervention group (INT), which received individualised nutrition information and weekly follow-up. INT gained 2.2% of starting body weight over the 12-week intervention period, resulting in significant increases in weight, BMI, mid-arm circumference and waist circumference. The SC group gained 1% of starting weight over the 12 weeks, which did not result in any significant changes in anthropometric indices. Energy and protein intake (18.3kJ/kg vs 3.8kJ/kg and 0.3g/kg vs 0.15g/kg) increased in both groups. The increase in protein intake was only significant in the SC group. The changes in intake did not differ between the groups.
There were no significant changes in any motor or non-motor symptoms or in "off" times or dyskinesias in either group. Aspects of quality of life improved over the 12 weeks as well, especially emotional well-being. This thesis makes a significant contribution to the evidence base for the presence of malnutrition in Parkinson's disease as well as for the identification of those who would potentially benefit from nutrition screening and assessment. The nutrition intervention demonstrated that a traditional high protein, high energy approach to the management of malnutrition resulted in improved nutritional status and anthropometric indices with no effect on the presence of Parkinson's disease symptoms and a positive effect on quality of life.
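The Phase I screening-tool comparison rests on sensitivity and specificity against the SGA gold standard. A small sketch with hypothetical confusion-matrix counts, chosen here to be consistent with the reported MNA-SF figures (19 malnourished of 125 participants; Sn 94.7%, Sp 78.3%); the individual counts are an assumption, not data from the thesis:

```python
def screening_performance(tp, fn, fp, tn):
    """Sensitivity: proportion of truly malnourished (SGA-B) flagged by
    the tool; specificity: proportion of well-nourished (SGA-A)
    correctly passed."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 19 malnourished (18 flagged, 1 missed) and
# 106 well-nourished (83 passed, 23 falsely flagged).
sn, sp = screening_performance(tp=18, fn=1, fp=23, tn=83)
print(f"Sn {sn:.1%}, Sp {sp:.1%}")  # Sn 94.7%, Sp 78.3%
```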
Abstract:
A graph-theoretic approach is developed for accurately computing haulage costs in earthwork projects. This is vital as haulage is a predominant factor in the real cost of earthworks. A variety of metrics can be used in our approach, but a fuel consumption proxy is recommended. This approach is novel as it considers the constantly changing terrain that results from cutting and filling activities, and it replaces the inaccurate “static” calculations that have been used previously. The approach is also capable of efficiently correcting the violation of top-down cutting and bottom-up filling conditions that can be found in existing earthwork assignments and sequences. This approach assumes that the project site is partitioned into uniform blocks. A directed graph is then utilised to describe the terrain surface. This digraph is altered after each cut and fill in order to reflect the true state of the terrain. A shortest path algorithm is successively applied to calculate the cost of each haul, and these costs are summed to provide a total cost of haulage.
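The repeated shortest-path step can be sketched with a standard Dijkstra search over the block digraph. The block names and edge costs below are invented for illustration; in the paper's approach, the digraph is re-weighted after every cut and fill before the next haul cost is computed:

```python
import heapq

def haul_cost(blocks, edges, source, dest):
    """Shortest-path haul cost between two blocks. `edges` maps each
    block to (neighbour, cost) pairs, where cost stands in for a
    fuel-consumption proxy over the current terrain state."""
    dist = {b: float("inf") for b in blocks}
    dist[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dest:
            return d
        if d > dist[u]:
            continue  # stale queue entry
        for v, cost in edges.get(u, []):
            if d + cost < dist[v]:
                dist[v] = d + cost
                heapq.heappush(pq, (d + cost, v))
    return float("inf")

# Hypothetical 4-block site; after each cut/fill the edge costs would
# be updated and this function called again for the next haul.
blocks = ["A", "B", "C", "D"]
edges = {"A": [("B", 1.0), ("C", 4.0)],
         "B": [("D", 5.0)],
         "C": [("D", 1.0)]}
print(haul_cost(blocks, edges, "A", "D"))  # 5.0, via A -> C -> D
```

Summing these per-haul costs over the whole cut-fill sequence gives the total haulage cost described above.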
Abstract:
The life course of Australian researchers includes regular funding applications, which incur large personal and time costs. We previously estimated that Australian researchers spent 550 years preparing 3,727 proposals for the 2012 NHMRC Project Grant funding round, at an estimated annual salary cost of AU$66 million. Despite the worldwide importance of funding rounds, there is little evidence on what researchers think of the application process. We conducted a web-based survey of Australian researchers (May–July 2013) asking about their experience with NHMRC Project Grants. Almost all researchers (n=224 at 31 May) supported changes to the application (96%) and peer-review (88%) processes; 73% supported the introduction of shorter initial Expressions of Interest; and half (50%) provided extensive comments on the NHMRC processes. Researchers agreed preparing their proposals always took top priority over other work (97%) and personal (87%) commitments. More than half (57%) provided extensive comments on the ongoing personal impact of concurrent grant-writing and holiday seasons on family, children and other relationships. Researchers with experience on Grant Review Panels (34%) or as External Reviewers (78%) reported many sections of the proposals were rarely or never read, which suggests these sections could be cut with no impact on the quality of peer review. Our findings provide evidence on the experience of Australian researchers as applicants. The process of preparing, submitting and reviewing proposals could be streamlined to minimise the burden on applicants and peer reviewers, giving Australian researchers more time to work on actual research and be with their families.
Abstract:
Aim Worldwide obesity levels have increased unprecedentedly over the past couple of decades. Although the prevalence, trends and associated socio-economic factors of the condition have been extensively reported in Western populations, less is known regarding South Asian populations. Methods A review of articles using Medline with combinations of the MeSH terms 'Obesity', 'Overweight' and 'Abdominal Obesity', limited to epidemiology and South Asian countries. Results Despite methodological heterogeneity and variation according to country, area of residence and gender, the most recent nationally representative and large regional data demonstrate that there is undoubtedly an epidemic of obesity, overweight and abdominal obesity in South Asian countries. Prevalence estimates of overweight and obesity (based on Asian cut-offs: overweight ≥ 23 kg/m2, obesity ≥ 25 kg/m2) ranged from 3.5% in rural Bangladesh to over 65% in the Maldives. Abdominal obesity was more prevalent than general obesity in both sexes in this ethnic group. Countries with the lowest prevalence had the highest upward trend of obesity. Socio-economic factors associated with greater obesity in the region included female gender, middle age, urban residence, and higher educational and economic status. Conclusion South Asia is significantly affected by the obesity epidemic. Collaborative public health interventions to reverse these trends need to be mindful of many socio-economic constraints in order to provide long-term solutions.
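The Asian BMI cut-offs quoted in the review can be encoded directly. A trivial sketch (only the BMI thresholds stated above are from the source; waist-based abdominal-obesity cut-offs vary by reference, so they are omitted):

```python
def asian_bmi_category(bmi_kg_per_m2):
    """Classify BMI using the Asian cut-offs quoted in the review:
    overweight >= 23 kg/m2, obesity >= 25 kg/m2."""
    if bmi_kg_per_m2 >= 25:
        return "obese"
    if bmi_kg_per_m2 >= 23:
        return "overweight"
    return "below overweight cut-off"

print(asian_bmi_category(24.0))  # overweight
```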
Abstract:
Aim To develop clinical practice guidelines for nurse-administered procedural sedation and analgesia in the cardiac catheterisation laboratory. Background Numerous studies have reported that nurse-administered procedural sedation and analgesia is safe. However, the broad scope of existing guidelines for the administration and monitoring of patients who receive sedation during medical procedures without an anaesthetist present means there is a lack of specific guidance regarding optimal nursing practices for the unique circumstances in which nurse-administered procedural sedation and analgesia is used in the cardiac catheterisation laboratory. Methods A sequential mixed methods design was utilised. Initial recommendations were produced from three studies conducted by the authors: an integrative review; a qualitative study; and a cross-sectional survey. The recommendations were revised in accordance with responses from a modified Delphi study. The first Delphi round was completed by nine senior cardiac catheterisation laboratory nurses. All but one of the draft recommendations met the pre-determined cut-off point for inclusion. There were a total of 59 responses to the second round. Consensus was reached on all recommendations. Implications for nursing The guidelines that were derived from the Delphi study offer twenty-four recommendations within six domains of nursing practice: Pre-procedural assessment; Pre-procedural patient and family education; Pre-procedural patient comfort; Intra-procedural patient comfort; Intra-procedural patient assessment and monitoring; and Post-procedural patient assessment and monitoring. Conclusion These guidelines provide an important foundation towards the delivery of safe, consistent and evidence-based nursing care for the many patients who receive sedation in the cardiac catheterisation laboratory setting.
Abstract:
Insulated Rail Joints (IRJs) are designed to electrically isolate two rails in rail tracks to control the signalling system for safer train operations. Unfortunately, the gapped section of the IRJs is structurally weak and often fails prematurely, especially in heavy haul tracks, which adversely affects service reliability and efficiency. The IRJs suffer from a number of failure modes; railhead ratchetting at the gap is, however, regarded as the root cause and is attended to in this thesis. Ratchetting increases with increasing wheel loads; in the absence of a life prediction model, effective management of the IRJs for increased wagon wheel loads has become very challenging. Therefore, the main aim of this thesis is to determine a method to predict the service life of IRJs. The distinct discontinuity of the railhead at the gap makes the Hertzian theory and the rolling contact shakedown map, commonly used for continuously welded rails, inapplicable to examining the metal ratchetting of the IRJs. The Finite Element (FE) technique is therefore used to explore the railhead metal ratchetting characteristics in this thesis, the boundary conditions of which have been determined from a full-scale study of IRJ specimens under rolling contact of loaded wheels. A special-purpose test set-up containing a full-scale wagon wheel was used to apply rolling wheel loads on the railhead edges of the test specimens. The state of the rail end face strains was determined using a non-contact digital imaging technique and used for calibrating the FE model. The basic material parameters for this FE model were obtained through independent uniaxial, monotonic tensile tests on specimens cut from head-hardened virgin rails.
The monotonic tensile test data have been used to establish a cyclic load simulation model of the railhead steel specimen; the simulated cyclic load test has provided the necessary data for the three-component decomposed kinematic hardening plastic strain accumulation model of Chaboche. A performance-based service life prediction algorithm for the IRJs was established using the plastic strain accumulation obtained from the Chaboche model. The predicted service lives of IRJs using this algorithm agree well with the published data. The finite element model has been used to carry out a sensitivity study on the effect of wheel diameter on railhead metal plasticity. This study revealed that the depth of the plastic zone at the railhead edges is independent of the wheel diameter; however, larger wheel diameters are shown to increase the IRJs' service life.
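As general background (the standard textbook form, not the specific calibration used in the thesis), the Chaboche model decomposes the backstress into several nonlinear kinematic hardening components, three in the case referenced above:

```latex
\alpha = \sum_{i=1}^{3} \alpha_i ,
\qquad
\dot{\alpha}_i = \tfrac{2}{3}\, C_i \, \dot{\varepsilon}^{p} - \gamma_i \, \alpha_i \, \dot{p}
```

where $\dot{\varepsilon}^{p}$ is the plastic strain rate, $\dot{p}$ the accumulated plastic strain rate, and $C_i$, $\gamma_i$ are material constants fitted to cyclic test data, as done here from the simulated cyclic load tests.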
Abstract:
Traditionally, infectious diseases and under-nutrition have been considered major health problems in Sri Lanka, with little attention paid to obesity and associated non-communicable diseases (NCDs). However, the recent Sri Lanka Diabetes and Cardiovascular Study (SLDCS) reported epidemic levels of obesity, diabetes and metabolic syndrome. Moreover, obesity-associated NCDs are the leading cause of death in Sri Lanka, and there is an exponential increase in hospitalization due to NCDs, adversely affecting the development of the country. Despite Sri Lanka having a very high prevalence of NCDs and associated mortality, little is known about the causative factors for this burden. It is widely believed that the global NCD epidemic is associated with recent lifestyle changes, especially dietary factors. In the absence of sufficient data on dietary habits in Sri Lanka, successful interventions to manage these serious health issues would not be possible. In view of the current situation, the dietary survey was undertaken to assess the intakes of energy, macro-nutrients and selected other nutrients with respect to socio-demographic characteristics and the nutritional status of Sri Lankan adults, especially focusing on obesity. Another aim of this study was to develop and validate a culturally specific food frequency questionnaire (FFQ) to assess dietary risk factors of NCDs in Sri Lankan adults. Data were collected from a subset of the national SLDCS using a multi-stage, stratified, random sampling procedure (n=500). However, data collection in the SLDCS was affected by the prevailing civil war, which resulted in no data being collected from the Northern and Eastern provinces. To obtain a nationally representative sample, additional subjects (n=100) were later recruited from the two provinces using similar selection criteria.
Ethical approval for this study was obtained from the Ethical Review Committee, Faculty of Medicine, University of Colombo, Sri Lanka, and informed consent was obtained from the subjects before data were collected. Dietary data were obtained using the 24-h Dietary Recall (24HDR) method. Subjects were asked to recall all foods and beverages consumed over the previous 24-hour period. Respondents were probed for the types of foods and food preparation methods. For the FFQ validation study, a 7-day weighed diet record (7-d WDR) was used as the reference method. All foods recorded in the 24HDR were converted into grams, and then intake of energy and nutrients was analysed using NutriSurvey 2007 (EBISpro, Germany), which was modified for Sri Lankan food recipes. Socio-demographic details and body weight perception were collected using an interviewer-administered questionnaire. BMI was calculated, and overweight (BMI ≥23 kg.m-2), obesity (BMI ≥25 kg.m-2) and abdominal obesity (Men: WC ≥ 90 cm; Women: WC ≥ 80 cm) were categorized according to Asia-Pacific anthropometric cut-offs. SPSS v16 for Windows and Minitab v10 were used for statistical analysis. From a total of 600 eligible subjects, 491 (81.8%) participated, of whom 34.5% (n=169) were males. Subjects were well distributed among different socio-economic parameters. A total of 312 different food items were recorded, and nutritionists grouped similar food items, resulting in a total of 178 items. After performing step-wise multiple regression, 93 foods explained 90% of the variance for total energy intake, carbohydrates, protein, total fat and dietary fibre. Finally, 90 food items and 12 photographs were selected. Seventy-seven subjects (response rate = 65%) completed the FFQ and 7-day WDR.
Estimated mean energy intake (SD) from the FFQ (1794±398 kcal) and the 7-d WDR (1698±333 kcal, P<0.001) differed significantly, owing to a significant overestimation of carbohydrate (~10 g/d, P<0.001) and, to some extent, fat (~5 g/d, NS). Significant positive correlations were found between the FFQ and 7-d WDR for energy (r = 0.39), carbohydrate (r = 0.47), protein (r = 0.26), fat (r = 0.17) and dietary fibre (r = 0.32). Bland-Altman graphs indicated fairly good agreement between methods, with no relationship between bias and average intake of each nutrient examined. The findings from the nutrition survey showed that, on average, Sri Lankan adults consumed over 14 portions of starch/d; moreover, males consumed 5 more portions of cereal than females. Sri Lankan adults consumed on average 3.56 portions of added sugars/d. Moreover, mean daily intake of fruit (0.43) and vegetable (1.73) portions was well below minimum dietary recommendations (fruits 2 portions/d; vegetables 3 portions/d). The total fruit and vegetable intake was 2.16 portions/d. Daily consumption of meat or alternatives was 1.75 portions, and the sum of meat and pulses was 2.78 portions/d. Starchy foods were consumed by all participants, and over 88% met the minimum daily recommendations. Importantly, nearly 70% of adults exceeded the maximum daily recommendation for starch (11 portions/d), and a considerable proportion, particularly men, consumed larger numbers of starch servings daily. More than 12% of men consumed over 25 starch servings/d. In contrast to their starch consumption, participants reported very low intakes of other food groups. Only 11.6%, 2.1% and 3.5% of adults consumed the minimum daily recommended servings of vegetables, fruits, and fruits and vegetables combined, respectively. Six out of ten adult Sri Lankans sampled did not consume any fruits.
Milk and dairy consumption was extremely low; over a third of the population did not consume any dairy products, and less than 1% of adults consumed 2 portions of dairy/d. A quarter of Sri Lankans did not report consumption of meat and pulses. Regarding protein consumption, 36.2% attained the minimum Sri Lankan recommendation for protein, and significantly more men than women achieved the recommendation of ≥3 servings of meat or alternatives daily (men 42.6%, women 32.8%; P<0.05). Over 70% of energy was derived from carbohydrates (Male: 72.8±6.4%, Female: 73.9±6.7%), followed by fat (Male: 19.9±6.1%, Female: 18.5±5.7%) and proteins (Male: 10.6±2.1%, Female: 10.9±5.6%). The average intake of dietary fibre was 21.3 g/day and 16.3 g/day for males and females, respectively. There was a significant difference in nutritional intake related to ethnicities, areas of residence, education levels and BMI categories. Similarly, dietary diversity was significantly associated with several socio-economic parameters among Sri Lankan adults. Adults with BMI ≥25 kg.m-2 and abdominally obese Sri Lankan adults had the highest diet diversity values. Age-adjusted prevalence (95% confidence interval) of overweight, obesity, and abdominal obesity among Sri Lankan adults was 17.1% (13.8-20.7), 28.8% (24.8-33.1), and 30.8% (26.8-35.2), respectively. Men, compared with women, were less overweight, 14.2% (9.4-20.5) versus 18.5% (14.4-23.3), P = 0.03; less obese, 21.0% (14.9-27.7) versus 32.7% (27.6-38.2), P < .05; and less abdominally obese, 11.9% (7.4-17.8) versus 40.6% (35.1-46.2), P < .05. Although the prevalence of obesity has reached epidemic levels, body weight misperception was common among Sri Lankan adults. Two-thirds of overweight males and 44.7% of overweight females considered themselves to be "about right weight". Over one third of both male and female obese subjects perceived themselves as "about right weight" or "underweight".
Nearly 32% of centrally obese men and women perceived their waist circumference to be about right. Of those who perceived themselves as overweight or very overweight (n = 154), only 63.6% had tried to lose weight (n = 98), and only a quarter had sought advice from professionals (n = 39). A number of important conclusions can be drawn from this research project. Firstly, the newly developed FFQ is an acceptable tool for assessing the nutrient intake of Sri Lankans and will assist the proper categorization of individuals by dietary exposure. Secondly, a substantial proportion of the Sri Lankan population does not consume a varied and balanced diet, suggesting a close association between the nutrition-related NCDs in the country and unhealthy eating habits. Moreover, dietary diversity is positively associated with several socio-demographic characteristics and with obesity among Sri Lankan adults. Lastly, although obesity is a major health issue among Sri Lankan adults, body weight misperception was common among underweight, healthy-weight, overweight, and obese adults in Sri Lanka: over two-thirds of overweight and one-third of obese Sri Lankan adults believed that they were in the "right weight" or "underweight" categories.
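The Bland-Altman agreement analysis used above to compare the FFQ against the 7-day weighed record can be sketched as follows. This is a minimal illustration with hypothetical intake values; the study's actual data, and the function and variable names, are not from the source.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Return per-subject means, differences, the bias (mean difference),
    and the 95% limits of agreement between two measurement methods."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b                      # method A minus method B, per subject
    mean = (a + b) / 2.0              # per-subject average of the two methods
    bias = diff.mean()                # systematic difference between methods
    sd = diff.std(ddof=1)             # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return mean, diff, bias, loa

# Hypothetical energy intakes (kcal/d): FFQ versus 7-day weighed record
ffq  = [1850, 1700, 2100, 1600, 1950, 1750]
wr7d = [1780, 1650, 1980, 1700, 1820, 1690]
mean, diff, bias, loa = bland_altman(ffq, wr7d)
print(f"bias = {bias:.1f} kcal/d, 95% LoA = ({loa[0]:.1f}, {loa[1]:.1f})")
```

Plotting `diff` against `mean` (with horizontal lines at the bias and the limits of agreement) gives the standard Bland-Altman plot; "no relationship between bias and average intake" corresponds to the scatter showing no trend across the x-axis.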
Resumo:
Public policymakers are caught in a dilemma: there is a growing list of urgent issues to address at the same time that public expenditure is being cut. Adding to this dilemma are a system of government designed in the 19th century and competing theories of policymaking dating back to the 1950s. The interlinked problems of disaster risk management and climate change adaptation are cases in point. As the climate changes, there will be more frequent, intense and/or prolonged disasters such as floods and bushfires. Clearly a well-integrated, whole-of-government response is needed, but how might this be achieved? Further, how could academic research contribute to resolving this dilemma in a way that produces something of theoretical interest as well as practical outcomes for policymakers? These are the questions addressed by our research, via a comparative analysis of the 2009 Victorian bushfires, the 2011 Perth Hills bushfires, and the 2011 Brisbane floods. Our findings suggest a need to: improve community engagement and communication; refocus attention on resilience; improve interagency communication and collaboration; and develop institutional arrangements that support continual improvement and policy learning. These findings have implications for all areas of public policy theory and practice.
Resumo:
Exogenous prostacyclin is effective in reducing pulmonary vascular resistance in some forms of human pulmonary hypertension (PH). To explore whether endogenous prostaglandins play a similar role, we examined the effect of deleting cyclooxygenase (COX) gene isoforms in a chronic hypoxia model of PH. Pulmonary hypertension, assessed by direct measurement of right ventricular end-systolic pressure (RVESP), right ventricular hypertrophy (n = 8), and hematocrit (n = 3), was induced by 3 weeks of hypobaric hypoxia in wild-type and COX-knockout (KO) mice. RVESP was increased in wild-type hypoxic mice compared with normoxic controls (24.4 ± 1.4 versus 13.8 ± 1.9 mm Hg; n = 8; p < 0.05). COX-2 KO mice showed a greater increase in RVESP following hypoxia (36.8 ± 2.7 mm Hg; p < 0.05). Urinary thromboxane (TX)B2 excretion increased following hypoxia (44.6 ± 11.1 versus 14.7 ± 1.8 ng/ml; n = 6; p < 0.05), an effect that was exacerbated by COX-2 gene disruption (54.5 ± 10.8 ng/ml; n = 6). In contrast, the increase in 6-keto-prostaglandin F1α excretion following hypoxia was reduced by COX-2 gene disruption (29 ± 3 versus 52 ± 4.6 ng/ml; p < 0.01). Tail-cut bleeding times were lower following hypoxia, and there was evidence of intravascular thrombosis in lung vessels that was exacerbated by disruption of COX-2 and reduced by deletion of COX-1. The TXA2/endoperoxide receptor antagonist ifetroban (50 mg/kg/day) offset the effect of deleting the COX-2 gene, attenuating the hypoxia-induced rise in RVESP and the intravascular thrombosis. COX-2 gene deletion thus exacerbates pulmonary hypertension, enhances sensitivity to TXA2, and induces intravascular thrombosis in response to hypoxia. The data provide evidence that endogenous prostaglandins modulate the pulmonary response to hypoxia. Copyright © 2008 by The American Society for Pharmacology and Experimental Therapeutics.
Resumo:
Background: A recent study by Dhillon et al. [12] identified both angioinvasion and mTOR as prognostic biomarkers for poor survival in early-stage NSCLC. The aim of this study was to verify that finding by examining the angioinvasion and mTOR expression profiles in a cohort of early-stage NSCLC patients and correlating the results with patient clinico-pathological data and survival. Methods: Angioinvasion was routinely recorded by the pathologist at the initial assessment of the tumor following resection. mTOR was evaluated in 141 early-stage (IA-IIB) NSCLC patients (67 squamous; 60 adenocarcinoma; 14 other) using immunohistochemistry (IHC), with an immunohistochemical score (IHS) calculated as % positive cells × staining intensity. Intensity was scored as follows: 0 (negative); 1+ (weak); 2+ (moderate); 3+ (strong), giving a range of scores from 0 to 300. Based on the previous study, a cut-off score of 30 was used to define positive versus negative patients. The impact of angioinvasion and mTOR expression on prognosis was then evaluated. Results: 101 of the 141 tumors studied expressed mTOR. There was no difference in mTOR expression between squamous cell carcinoma and adenocarcinoma. Angioinvasion (p = 0.024) and mTOR staining (p = 0.048) were significant univariate predictors of poor survival. Both remained significant after multivariate analysis (p = 0.037 and p = 0.020, respectively). Conclusions: Our findings verify angioinvasion and mTOR expression as biomarkers of poor outcome in patients with early-stage NSCLC. mTOR-expressing patients may benefit from novel therapies targeting the mTOR survival pathway. © 2011 Elsevier Ireland Ltd.
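The IHS calculation and cut-off classification described in the Methods can be sketched as below. The formula (% positive cells × intensity 0-3, range 0-300) and the cut-off of 30 come from the abstract; the function names are illustrative, and whether a score exactly at the cut-off counts as positive is an assumption not stated in the source.

```python
CUT_OFF = 30  # score threshold from the earlier study, separating positive from negative

def ihs(percent_positive: float, intensity: int) -> float:
    """Immunohistochemical score: % positive cells (0-100) x intensity (0-3).
    Yields a score in the range 0-300."""
    if not 0 <= percent_positive <= 100:
        raise ValueError("percent positive cells must be between 0 and 100")
    if intensity not in (0, 1, 2, 3):
        raise ValueError("intensity must be 0 (neg), 1+ (weak), 2+ (moderate) or 3+ (strong)")
    return percent_positive * intensity

def mtor_positive(percent_positive: float, intensity: int) -> bool:
    """Classify a tumor as mTOR-positive; treating a score equal to the
    cut-off as positive is an assumption."""
    return ihs(percent_positive, intensity) >= CUT_OFF

# e.g. 40% positive cells with weak (1+) staining scores 40 -> positive,
# while 20% positive cells with weak staining scores 20 -> negative
print(ihs(40, 1), mtor_positive(40, 1))
print(ihs(20, 1), mtor_positive(20, 1))
```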