Abstract:
The main aim of this paper is to outline a proposed program of research that will attempt to quantify the extent of the problem of alcohol and other drugs in the Australian construction industry and, furthermore, develop an appropriate industry-wide policy and cultural change management program and implementation plan to address the problem. This paper will also present preliminary results from the study. The study will use qualitative and quantitative methods (in the form of interviews and surveys, respectively) to evaluate the extent of the problem of alcohol and other drug use in this industry, to ascertain the feasibility of an industry-wide policy and cultural change management program, and to develop an appropriate implementation plan. The study will be undertaken in several construction organisations, at selected sites in South Australia, Victoria and the Northern Territory. It is anticipated that approximately 500 employees from the participating organisations across Australia will take part in the study. The World Health Organisation's Alcohol Use Disorders Identification Test (AUDIT) will be used to measure the extent of alcohol use in the industry. Illicit drug use, "readiness to change", impediments to reducing impairment, feasibility of proposed interventions, and employee attitudes and knowledge regarding workplace AOD impairment will also be measured through a combination of interviews and surveys. Among the preliminary findings, AUDIT scores for 51% (n=127) of respondents indicated alcohol use at hazardous levels. Of the respondents who were using alcohol at hazardous levels, 76% (n=97) reported that they do not have a problem with drinking and 54% (n=68) reported that it would be easy to "cut down" or stop drinking. Nearly half (49%) of all respondents (n=122) had used marijuana/cannabis at some time prior to being surveyed. The use of other illicit substances was reported much less frequently. Preliminary interview findings indicated a lack of adequate employee knowledge regarding the physical effects of alcohol and other drugs in the workplace. As for conclusions, the proposed study will address a major gap in the literature regarding the extent of the problem of alcohol and other drug use in the construction industry in Australia. The study will also develop and implement a national, evidence-based workplace policy, with the aim of mitigating the deleterious effects of alcohol and other drugs in this industry.
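For readers unfamiliar with the AUDIT classification step described above, a minimal Python sketch follows. It assumes one total score (0-40) per respondent and uses the conventional WHO cut-off of 8 for hazardous or harmful use; the abstract does not state which cut-off the study applied, and the sample scores below are hypothetical.

```python
# Minimal sketch: flagging hazardous drinking from AUDIT totals.
# The cut-off of 8 follows common WHO guidance; the study's actual
# threshold is not stated in the abstract. Scores are hypothetical.
HAZARDOUS_CUTOFF = 8

def summarise_audit(scores: list[int]) -> dict:
    """Count respondents whose AUDIT total meets the hazardous cut-off."""
    hazardous = [s for s in scores if s >= HAZARDOUS_CUTOFF]
    return {
        "n": len(scores),
        "n_hazardous": len(hazardous),
        "pct_hazardous": round(100 * len(hazardous) / len(scores), 1),
    }

print(summarise_audit([3, 9, 12, 5, 16, 7, 8]))  # -> 4 of 7 flagged (57.1%)
```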
Abstract:
People with Parkinson's disease (PD) have been reported to be at higher risk of malnutrition than an age-matched population due to PD motor and non-motor symptoms and pharmacotherapy side effects. The prevalence of malnutrition in PD has yet to be well defined. Community-dwelling people with PD, aged > 18 years, were recruited (n = 97; 61 M, 36 F). The Patient-Generated Subjective Global Assessment (PG-SGA) was used to assess nutritional status, the Parkinson's Disease Questionnaire (PDQ-39) was used to assess quality of life, and the Beck Depression Inventory (BDI) was used to measure depression. Levodopa equivalent doses (LEDs) were calculated based on reported Parkinson's disease medication. Weight, height, mid-arm circumference (MAC) and calf circumference were measured. Cognitive function was measured using the Addenbrooke's Cognitive Examination. Average age was 70.0 (SD 9.1; range 35–92) years. Based on SGA ratings, 16 (16.5%) were moderately malnourished (SGA B), while none were severely malnourished (SGA C). The well-nourished participants (SGA A) had a better quality of life, t(90) = −2.28, p < 0.05, and reported fewer depressive symptoms, t(94) = −2.68, p < 0.05, than malnourished participants. Age, years since diagnosis, cognitive function and LEDs did not significantly differ between the groups. The well-nourished participants had lower PG-SGA scores, t(95) = −5.66, p = 0.00, higher BMIs, t(95) = 3.44, p < 0.05, larger MACs, t(95) = 3.54, p < 0.05, and larger calf circumferences, t(95) = 2.29, p < 0.05, than malnourished participants. The prevalence of malnutrition in community-dwelling adults with PD in this study is comparable to that in other studies of community-dwelling adults without PD and is higher than in other PD studies where a nutritional status assessment tool was used. Further research is required to understand the primary risk factors for malnutrition in this group.
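The group comparisons reported above (e.g. quality of life in SGA A vs SGA B participants) are two-sample t-tests; a minimal SciPy sketch follows, using hypothetical PDQ-39 summary scores rather than the study data.

```python
# Sketch: the reported SGA A vs SGA B comparisons are two-sample t-tests.
# `pdq39_a` / `pdq39_b` are hypothetical PDQ-39 summary scores for the
# well-nourished (SGA A) and malnourished (SGA B) groups.
from scipy import stats

pdq39_a = [18.2, 25.4, 30.1, 22.7, 19.9]   # SGA A (well-nourished)
pdq39_b = [35.6, 41.3, 28.8, 44.0]         # SGA B (malnourished)

# Student's t-test; pass equal_var=False for Welch's variant.
t, p = stats.ttest_ind(pdq39_a, pdq39_b)
print(f"t = {t:.2f}, p = {p:.3f}")
```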
Abstract:
A year ago, I became aware of the historical existence of the group CERFI— Le Centre d'études, de recherches et de formation institutionnelles, or The Study Center for Institutional Research and Formation. CERFI emerged in 1967 under the hand of Lacanian psychiatrist and Trotskyist activist Félix Guattari, whose journal Recherches chronicled the group's subversive experiences, experiments, and government-sponsored urban projects. It was a singularly bizarre meeting of the French bureaucracy with militant activist groups, the French intelligentsia, and architectural and planning practitioners at the close of the '60s. Nevertheless, CERFI's analysis of the problems of society was undertaken precisely from the perspective of the state, and the Institute acknowledged a “deep complicity between the intellectual and statesman ... because the first critics of the State are officials themselves!”1 CERFI developed out of FGERI (The Federation of Groups for Institutional Study and Research), started by Guattari two years earlier. While FGERI was created for the analysis of mental institutions stemming from Guattari's work at La Borde, an experimental psychiatric clinic, CERFI marks the group's shift toward urbanism—to the interrogation of the city itself. Not only a platform for radical debate on architecture and the city, CERFI was a direct agent in the development of urban planning schemata for new towns in France.2 CERFI's founding members were Guattari, the economist and urban theorist François Fourquet, feminist philosopher Liane Mozère, and urban planner and editor of Multitudes Anne Querrien—Guattari's close friend and collaborator. The architects Antoine Grumbach, Alain Fabre, Macary, and Janine Joutel were also members, as well as urbanists Bruno Fortier, Rainier Hoddé, and Christian de Portzamparc.3 CERFI was the quintessential social project of post-'68 French urbanism. Located on the Far Left and openly opposed to the Communist Party, this Trotskyist cooperative was able to achieve what other institutions, according to Fourquet, with their “customary devices—the politburo, central committee, and the basic cells—had failed to do.”4 The decentralized institute recognized that any formal integration of the group would be to “sign its own death warrant; so it embraced a skein of directors, entangled, forming knots, liquidating all at once, and spinning in an unknown direction, stopping short and returning back to another node.” Allergic to the very idea of “party,” CERFI was a creative project of free, hybrid-aesthetic blocs talking and acting together, whose goal was none other than the “transformation of the libidinal economy of the militant revolutionary.” The group believed that by recognizing and affirming a “group unconscious,” as well as their individual unconscious desires, they would be able to avoid the political stalemates and splinter groups of the traditional Left. CERFI thus situated itself “on the side of psychosis”—its confessed goal was to serve rather than repress the utter madness of the urban malaise, because it was only from this mad perspective on the ground that a properly social discourse on the city could be forged.
Abstract:
Rationale: The Australasian Nutrition Care Day Survey (ANCDS) evaluated whether malnutrition and decreased food intake are independent risk factors for negative outcomes in hospitalised patients. Methods: A multicentre (56 hospitals) cross-sectional survey was conducted in two phases. Phase 1 evaluated nutritional status (defined by Subjective Global Assessment) and 24-hour food intake, recorded as 0, 25, 50, 75, or 100% intake. Phase 2 data, which included length of stay (LOS), readmissions and mortality, were collected 90 days post-Phase 1. Logistic regression was used to control for confounders: age, gender, disease type and severity (using Patient Clinical Complexity Level scores). Results: Of 3122 participants (53% males, mean age: 65±18 years), 32% were malnourished and 23% consumed ≤25% of the offered food. Median LOS for malnourished (MN) patients was higher than for well-nourished (WN) patients (15 vs. 10 days, p<0.0001). Median LOS for patients consuming ≤25% of the food was higher than for those consuming ≥50% (13 vs. 11 days, p<0.0001). MN patients had higher readmission rates (36% vs. 30%, p=0.001). The odds of 90-day in-hospital mortality were 1.8 times greater for MN patients (95% CI: 1.03–3.22, p=0.04) and 2.7 times greater for those consuming ≤25% of the offered food (95% CI: 1.54–4.68, p=0.001). Conclusion: The ANCDS demonstrates that malnutrition and/or decreased food intake are associated with longer LOS and readmissions. The survey also establishes that malnutrition and decreased food intake are independent risk factors for in-hospital mortality in acute care patients, and highlights the need for appropriate nutritional screening and support during hospitalisation. Disclosure of Interest: None Declared.
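The adjusted odds ratios reported above come from logistic regression with confounders entered as covariates; the sketch below shows one standard way to reproduce that kind of analysis with statsmodels. The file and column names are hypothetical, not the ANCDS data.

```python
# Sketch: odds ratios for 90-day in-hospital mortality adjusted for
# confounders, as described in the Methods. All names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ancds_phase2.csv")  # hypothetical file

model = smf.logit(
    "died_90d ~ malnourished + low_intake + age + C(gender)"
    " + C(disease_type) + pccl",
    data=df,
).fit()

odds_ratios = np.exp(model.params)   # OR = exp(beta)
ci = np.exp(model.conf_int())        # 95% CI on the OR scale
print(pd.concat([odds_ratios, ci], axis=1))
```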
Abstract:
PURPOSE: To test the reliability of Timed Up and Go Tests (TUGTs) in cardiac rehabilitation (CR) and compare TUGTs to the 6-Minute Walk Test (6MWT) for outcome measurement. METHODS: Sixty-one of 154 consecutive community-based CR patients were prospectively recruited. Subjects undertook repeated TUGTs and 6MWTs at the start of CR (start-CR), postdischarge from CR (post-CR), and 6 months postdischarge from CR (6 months post-CR). The main outcome measurements were TUGT time (TUGTT) and 6MWT distance (6MWD). RESULTS: Mean (SD) TUGTT1 and TUGTT2 at the 3 assessments were 6.29 (1.30) and 5.94 (1.20); 5.81 (1.22) and 5.53 (1.09); and 5.39 (1.60) and 5.01 (1.28) seconds, respectively. A reduction in TUGTT occurred between each outcome point (P ≤ .002). Repeated TUGTTs were strongly correlated at each assessment, intraclass correlation (95% CI) = 0.85 (0.76–0.91), 0.84 (0.73–0.91), and 0.90 (0.83–0.94), despite a reduction between TUGTT1 and TUGTT2 of 5%, 5%, and 7%, respectively (P ≤ .006). Relative decreases in TUGTT1 (TUGTT2) occurred from start-CR to post-CR and from start-CR to 6 months post-CR of −7.5% (−6.9%) and −14.2% (−15.5%), respectively, while relative increases in 6MWD1 (6MWD2) occurred, 5.1% (7.2%) and 8.4% (10.2%), respectively (P < .001 in all cases). Pearson correlation coefficients for 6MWD1 against TUGTT1 and TUGTT2 across all times were −0.60 and −0.68 (P < .001), and the intraclass correlation (95% CI) for the speeds derived from averaged 6MWDs and TUGTTs was 0.65 (0.54–0.73) (P < .001). CONCLUSIONS: Similar relative changes occurred for the TUGT and the 6MWT in CR. A significant correlation between the TUGTT and 6MWD was demonstrated, and we suggest that the TUGT may provide a related or supplementary measurement of functional capacity in CR.
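The TUGTT-6MWD association reported above is a Pearson correlation between a time and a distance; a minimal SciPy sketch follows, with hypothetical values (a negative r is expected, since shorter TUGT times accompany longer walk distances).

```python
# Sketch: Pearson correlation between TUGT time (s) and 6MWT distance (m).
# The paired measurements below are hypothetical, not the study data.
from scipy.stats import pearsonr

tugt_s = [6.1, 5.8, 7.0, 5.2, 6.6, 5.9]
sixmwd_m = [480, 510, 420, 560, 450, 500]

r, p = pearsonr(tugt_s, sixmwd_m)
print(f"r = {r:.2f}, p = {p:.4f}")  # expect r < 0: faster TUGT, longer 6MWD
```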
Abstract:
As the world's population grows, so does the demand for agricultural products. However, natural nitrogen (N) fixation and phosphorus (P) availability cannot sustain the rising agricultural production; thus, the application of N and P fertilisers as additional nutrient sources is common. It is these anthropogenic activities that can contribute high amounts of organic and inorganic nutrients to both surface and groundwaters, resulting in degradation of water quality and a possible reduction of aquatic life. In addition, runoff and sewage from urban and residential areas can contain high amounts of inorganic and organic nutrients which may also affect water quality. For example, blooms of the cyanobacterium Lyngbya majuscula along the coastline of southeast Queensland are an indicator of at least short-term decreases in water quality. Although Australian catchments, including those with intensive forms of land use, show in general a low export of nutrients compared to North American and European catchments, certain land use practices may still have a detrimental effect on the coastal environment. Numerous studies report on nutrient cycling and associated processes at a catchment scale in the Northern Hemisphere. Comparable studies in Australia, in particular in subtropical regions, are, however, limited, and there is a paucity of data, in particular for inorganic and organic forms of nitrogen and phosphorus; these nutrients are important limiting factors that promote algal blooms in surface waters. Therefore, monitoring N and P and understanding the sources and pathways of these nutrients within a catchment is important in coastal zone management. Although Australia is the driest inhabited continent, in subtropical regions such as southeast Queensland rainfall patterns have a significant effect on runoff and thus on the nutrient cycle at a catchment scale. Increasingly, these rainfall patterns are becoming variable. The monitoring of these climatic conditions and of the hydrological response of agricultural catchments is therefore also important to reduce the anthropogenic effects on surface and groundwater quality. This study consists of an integrated hydrological–hydrochemical approach that assesses N and P in an environment with multiple land uses. The main aim is to determine the nutrient cycle within a representative coastal catchment in southeast Queensland, the Elimbah Creek catchment. In particular, the investigation confirms the influence associated with forestry and agriculture on N and P forms, sources, distribution and fate in the surface and groundwaters of this subtropical setting. In addition, the study determines whether N and P are subject to transport into the adjacent estuary and thus into the marine environment; also considered is the effect of local topography, soils and geology on N and P sources and distribution. The thesis is structured around four individually reported components. The first paper determines the controls of catchment settings and processes on stream water, riverbank sediment, and shallow groundwater N and P concentrations, in particular during the extended dry conditions that were encountered during the study. Temporal and spatial factors such as seasonal changes, soil character, land use and catchment morphology are considered, as well as their effect on controls over distributions of N and P in surface waters and associated groundwater.
A total of 30 surface and 13 shallow groundwater sampling sites were established throughout the catchment to represent dominant soil types and the land use upstream of each sampling location. Sampling comprised five rounds conducted over one year between October 2008 and November 2009. Surface water and groundwater samples were analysed for all major dissolved inorganic forms of N and for total N. Phosphorus was determined in the form of dissolved reactive P (predominantly orthophosphate) and total P. In addition, extracts of stream bank sediments and soil grab samples were analysed for these N and P species. Findings show that major storm events, in particular after long periods of drought conditions, are the driving force of N cycling. This is expressed by higher inorganic N concentrations in the agricultural subcatchment compared to the forested subcatchment. Nitrate N is the dominant inorganic form of N in both the surface and groundwaters, and values are significantly higher in the groundwaters. Concentrations in the surface water range from 0.03 to 0.34 mg N L−1; organic N concentrations are considerably higher (average range: 0.33 to 0.85 mg N L−1), in particular in the forested subcatchment. Average NO3-N in the groundwater ranges from 0.39 to 2.08 mg N L−1, and organic N averages between 0.07 and 0.3 mg N L−1. The stream bank sediments are dominated by organic N (range: 0.53 to 0.65 mg N L−1), and the dominant inorganic form of N is NH4-N, with values ranging between 0.38 and 0.41 mg N L−1. Topography and soils, however, were not found to have a significant effect on N and P concentrations in the waters. Detectable phosphorus in the surface and groundwaters of the catchment is limited to several locations, typically in the proximity of areas with intensive animal use; in soils and sediments, P is negligible. In the second paper, the stable isotopes of N (15N/14N) and of water (18O/16O and 2H/1H) in surface and groundwaters are used to identify sources of dissolved inorganic and organic N in these waters, and to determine their pathways within the catchment; specific emphasis is placed on the relative roles of forestry and agriculture. Forestry is predominantly concentrated in the northern subcatchment (Beerburrum Creek) while agriculture is mainly found in the southern subcatchment (Six Mile Creek). Results show that agriculture (horticulture, crops, grazing) is the main source of inorganic N in the surface waters of the agricultural subcatchment, and its isotopic signature shows a close link to evaporation processes that may occur during water storage in farm dams used for irrigation. Groundwaters are subject to denitrification processes that may result in reduced dissolved inorganic N concentrations. Soil organic matter delivers most of the inorganic N to the surface water in the forested subcatchment. Here, precipitation and subsequently runoff is the main source of the surface waters. Groundwater in this area is affected by agricultural processes. The findings also show that the catchment can attenuate the effects of anthropogenic land use on surface water quality. Riparian strips of natural remnant vegetation, commonly 50 to 100 m in width, act as buffer zones along the drainage lines in the catchment and remove inorganic N from the soil water before it enters the creek.
These riparian buffer zones are common in most agricultural catchments of southeast Queensland and appear to reduce the impact of agriculture on stream water quality and subsequently on the estuarine and marine environments. This reduction is expressed by a significant decrease in DIN concentrations from 1.6 mg N L−1 to 0.09 mg N L−1, and a decrease in the δ15N signatures, from upstream surface water locations downstream to the outlet of the agricultural subcatchment. Further testing is, however, necessary to confirm these processes. Most importantly, the amount of N that is transported to the adjacent estuary is shown to be negligible. The third and fourth components of the thesis use a hydrological catchment modelling approach to determine the water balance of the Elimbah Creek catchment. The model is then used to simulate the effects of land use on the water balance and nutrient loads of the study area. The tool used is the internationally applied Soil and Water Assessment Tool (SWAT). Knowledge about the water cycle of a catchment is imperative in nutrient studies, as processes such as rainfall, surface runoff, soil infiltration and routing of water through the drainage system are the driving forces of the catchment nutrient cycle. Long-term information about discharge volumes of the creeks and rivers does not, however, exist for a number of agricultural catchments in southeast Queensland, and such information is necessary to calibrate and validate numerical models. Therefore, a two-step modelling approach was used in which parameter values were calibrated and validated for a nearby gauged reference catchment and then used as starting values for the ungauged Elimbah Creek catchment. Transposing the monthly calibrated and validated parameter values from the reference catchment to the ungauged catchment significantly improved model performance, showing that the hydrological model of the catchment of interest is a strong predictor of the water balance. The model efficiency coefficient EF shows that 94% of the simulated discharge matches the observed flow, whereas only 54% of the observed streamflow was simulated by the SWAT model prior to using the validated values from the reference catchment. In addition, the hydrological model confirmed that total surface runoff contributes the majority of flow to the surface water in the catchment (65%); only a small proportion of the water in the creek is contributed by total base-flow (35%). This finding supports the results of the stable isotopes 18O/16O and 2H/1H, which show that the main source of water in the creeks is either local precipitation or irrigation water delivered by surface runoff; a contribution from the groundwater (baseflow) to the creeks could not be identified using 18O/16O and 2H/1H. In addition, the SWAT model calculated that around 68% of the rainfall occurring in the catchment is lost through evapotranspiration, reflecting the prevailing long-term drought conditions observed prior to and during the study. Stream discharge from the forested subcatchment was an order of magnitude lower than discharge from the agricultural Six Mile Creek subcatchment. A change in land use from forestry to agriculture did not significantly change the catchment water balance; however, nutrient loads increased considerably. Conversely, a simulated change from agriculture to forestry resulted in a significant decrease of nitrogen loads.
The findings of the thesis and the approach used are shown to be of value to catchment water quality monitoring on a wider scale, in particular regarding the implications of mixed land use for nutrient forms, distributions and concentrations. The study confirms that in the tropics and subtropics the water balance is affected by extended dry periods and seasonal rainfall with intensive storm events. In particular, the comprehensive data set of inorganic and organic N and P forms in the surface and groundwaters of this subtropical setting, acquired during the one-year sampling program, may be used in similar catchment hydrological studies where such detailed information is missing. The study also concludes that riparian buffer zones along the catchment drainage system attenuate the transport of nitrogen from agricultural sources in the surface water: concentrations of N decreased from upstream to downstream locations and were negligible at the outlet of the catchment.
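The model efficiency coefficient EF quoted above is, in SWAT studies, conventionally the Nash-Sutcliffe efficiency; a short sketch follows under that assumption, with hypothetical discharge values.

```python
# Sketch: Nash-Sutcliffe model efficiency, assuming the thesis's EF is the
# Nash-Sutcliffe coefficient commonly used to evaluate SWAT simulations.
# EF = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); EF = 1 is a
# perfect fit, and EF <= 0 means the model is no better than the mean.
import numpy as np

def nash_sutcliffe(observed, simulated) -> float:
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

obs = np.array([1.2, 3.4, 0.8, 2.6, 4.1])  # hypothetical monthly discharge
sim = np.array([1.0, 3.1, 1.1, 2.4, 4.3])
print(f"EF = {nash_sutcliffe(obs, sim):.2f}")
```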
Abstract:
In Australia and internationally, there is scant information about Indigenous repeat drink drivers. The aim was to identify the risk factors associated with repeat offending. De-identified data on drink driving convictions of offenders identifying as Indigenous in Queensland between 2006 and 2010 were examined. A range of univariate analyses was used to compare first-time and repeat offenders on gender, age, court location and region (based on the Accessibility/Remoteness Index of Australia), blood alcohol concentration and sentencing severity. Multivariate logistic regression was used to adjust for confounding variables. Convictions for repeat offenders were more likely to come from locations other than ‘major cities’, with the association strongest for courts in the ‘very remote’ region (OR=2.75, 95% CI 2.06–3.76, p<.001). Indigenous offenders 40 years or older were found to be at reduced risk in comparison to offenders aged 15–24 years (OR=0.68, 95% CI 0.54–0.86, p=0.01). After controlling for confounding factors, gender, sentencing severity and blood alcohol concentration levels were not significantly associated with recidivism. The association of recidivism and remoteness is consistent with higher rates of alcohol-related transport accidents involving Indigenous Australians in isolated areas. This study provides a platform for future research and allows for early attempts to address the need for interventions to reduce Indigenous drink driving recidivism.
Abstract:
Concealed texting (CT) while driving involves a conscious effort to hide one's texting, while obvious texting (OT) involves no such effort to conceal the behaviour. Young drivers are the most frequent users of mobile phones while driving, a behaviour associated with heightened crash risk. This study investigated the extent to which CT and OT may be discrete behaviours, to ascertain whether countermeasures would need to utilise distinct approaches. An extended Theory of Planned Behaviour (TPB) including moral norm, mobile phone involvement, and anticipated regret guided the research. Participants (n = 171) were aged 17 to 25 years, owned a mobile phone, had a current driver's licence, and resided in Queensland. A repeated measures MANOVA found significant differences between CT and OT on all standard and extended TPB constructs. Hierarchical multiple regression analyses showed the standard TPB constructs accounted for 68.7% and 54.6% of the variance in intentions to engage in CT and OT, respectively. The extended predictors contributed additional variance in intentions over and above the standard TPB constructs. Further, in the final regression model, differences emerged in the significant predictors of each type of texting. These findings provide initial evidence that CT and OT are distinct behaviours. This distinction is important to the extent that it may influence the nature of advertising countermeasures aimed at reducing or preventing young drivers' engagement in these risky behaviours.
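The hierarchical regression described above enters the standard TPB constructs first and the extended predictors second, attributing the increment in explained variance to the extension; a sketch with statsmodels follows, with hypothetical file and variable names.

```python
# Sketch: hierarchical multiple regression - standard TPB predictors in
# step 1, extended predictors added in step 2, with the R-squared
# increment attributed to the extension. All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("texting_tpb.csv")  # hypothetical file

step1 = smf.ols("intention ~ attitude + subjective_norm + pbc", data=df).fit()
step2 = smf.ols(
    "intention ~ attitude + subjective_norm + pbc"
    " + moral_norm + phone_involvement + anticipated_regret",
    data=df,
).fit()

print(f"R2 step 1 = {step1.rsquared:.3f}")
print(f"R2 step 2 = {step2.rsquared:.3f} "
      f"(delta = {step2.rsquared - step1.rsquared:.3f})")
```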
Abstract:
BACKGROUND: In single-group studies, chromosomal rearrangements of the anaplastic lymphoma kinase gene (ALK) have been associated with marked clinical responses to crizotinib, an oral tyrosine kinase inhibitor targeting ALK. Whether crizotinib is superior to standard chemotherapy with respect to efficacy is unknown. METHODS: We conducted a phase 3, open-label trial comparing crizotinib with chemotherapy in 347 patients with locally advanced or metastatic ALK-positive lung cancer who had received one prior platinum-based regimen. Patients were randomly assigned to receive oral treatment with crizotinib (250 mg) twice daily or intravenous chemotherapy with either pemetrexed (500 mg per square meter of body-surface area) or docetaxel (75 mg per square meter) every 3 weeks. Patients in the chemotherapy group who had disease progression were permitted to cross over to crizotinib as part of a separate study. The primary end point was progression-free survival. RESULTS: The median progression-free survival was 7.7 months in the crizotinib group and 3.0 months in the chemotherapy group (hazard ratio for progression or death with crizotinib, 0.49; 95% confidence interval [CI], 0.37 to 0.64; P<0.001). The response rates were 65% (95% CI, 58 to 72) with crizotinib, as compared with 20% (95% CI, 14 to 26) with chemotherapy (P<0.001). An interim analysis of overall survival showed no significant improvement with crizotinib as compared with chemotherapy (hazard ratio for death in the crizotinib group, 1.02; 95% CI, 0.68 to 1.54; P=0.54). Common adverse events associated with crizotinib were visual disorder, gastrointestinal side effects, and elevated liver aminotransferase levels, whereas common adverse events with chemotherapy were fatigue, alopecia, and dyspnea. Patients reported greater reductions in symptoms of lung cancer and greater improvement in global quality of life with crizotinib than with chemotherapy. CONCLUSIONS: Crizotinib is superior to standard chemotherapy in patients with previously treated, advanced non-small-cell lung cancer with ALK rearrangement. (Funded by Pfizer; ClinicalTrials.gov number, NCT00932893.) Copyright © 2013 Massachusetts Medical Society.
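The hazard ratio for progression or death quoted above comes from a survival analysis; a Cox proportional-hazards fit is one standard way to obtain such an estimate. The sketch below uses the lifelines library with hypothetical file and column names; it is not the trial's actual analysis code.

```python
# Sketch: estimating a hazard ratio for progression-free survival with a
# Cox proportional-hazards model. File and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

# One row per patient: pfs_months (time to progression/death or censoring),
# progressed (1 = progression or death), crizotinib (1 = crizotinib arm).
df = pd.read_csv("alk_trial.csv")  # hypothetical file

cph = CoxPHFitter()
cph.fit(df[["pfs_months", "progressed", "crizotinib"]],
        duration_col="pfs_months", event_col="progressed")
cph.print_summary()  # exp(coef) for `crizotinib` is the hazard ratio
```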
Abstract:
Objective: To document change in prevalence of obesity, diabetes and other cardiovascular disease (CVD) risk factors, and trends in dietary macronutrient intake, over an eight-year period in a rural Aboriginal community in central Australia. Design: Sequential cross-sectional community surveys in 1987, 1991 and 1995. Subjects: All adults (15 years and over) in the community were invited to participate. In 1987, 1991 and 1995, 335 (87% of eligible adults), 331 (76%) and 304 (68%), respectively, were surveyed. Main outcome measures: Body mass index and waist:hip ratio; blood glucose level and glucose tolerance; fasting total and high density lipoprotein (HDL) cholesterol and triglyceride levels; and apparent dietary intake (estimated by the store turnover method). Intervention: A community-based nutrition awareness and healthy lifestyle program, 1988-1990. Results: At the eight-year follow-up, the odds ratios (95% CIs) for CVD risk factors relative to baseline were: obesity, 1.84 (1.28-2.66); diabetes, 1.83 (1.11-3.03); hypercholesterolaemia, 0.29 (0.20-0.42); and dyslipidaemia (high triglyceride plus low HDL cholesterol level), 4.54 (2.84-7.29). In younger women (15-24 years), there was a trebling in obesity prevalence and a four- to fivefold increase in diabetes prevalence. Store turnover data suggested a relative reduction in the consumption of refined carbohydrates and saturated fats. Conclusion: Interventions targeting nutritional factors alone are unlikely to greatly alter trends towards increasing prevalences of obesity and diabetes. In communities where healthy food choices are limited, the role of regular physical activity in improving metabolic fitness may also need to be emphasised.
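Odds ratios relative to baseline with 95% confidence intervals, as reported above, can be computed from 2x2 counts with a Wald interval on the log-odds scale; a minimal sketch follows, with hypothetical counts rather than the survey data.

```python
# Sketch: odds ratio with a Wald 95% CI from 2x2 counts.
# The counts below are hypothetical, not the community survey data.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """a/b: outcome present/absent at follow-up; c/d: present/absent at baseline."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

print(odds_ratio_ci(120, 184, 80, 255))  # hypothetical obesity counts
```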
Abstract:
The nucleotide sequences of genome segments S7 and S10 of a Thai isolate of rice ragged stunt virus (RRSV) were determined. The 1938 bp S7 sequence contains a single large open reading frame (ORF) spanning nucleotides 20 to 1843 that is predicted to encode a protein of M(r) 68,025. The 1162 bp S10 sequence has a major ORF spanning nucleotides 142 to 1032 that is predicted to encode a protein of M(r) 32,364. This S10 ORF is preceded by a small ORF (nt 20-55) which is probably a minicistron. Coupled in vitro transcription-translation from the two major ORFs gave protein products of the expected sizes. However, no protein was visualised from S10 when the small ORF sequence was included. Proteins were expressed in Escherichia coli from the full-length ORF of S7 (P7) and from a segment of the S10 ORF (P10) fused to the ORF of glutathione S-transferase (GST). Neither fusion protein was recognised by polyclonal antibodies raised against RRSV particles. Furthermore, polyclonal antibodies raised against the GST-P7 fusion protein did not recognise any virion structural polypeptides. These data strongly suggest that the proteins P7 and P10 do not form part of the RRSV particle. This is further supported by observed sequence homology (though very weak) of predicted.
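As a small illustration of the ORF-mapping step described above, the sketch below scans one strand of a nucleotide sequence for its longest ATG-to-stop reading frame; the input fragment is hypothetical, and a real analysis would also scan the complementary strand.

```python
# Sketch: locating the longest open reading frame on one strand, in the
# spirit of the S7/S10 analysis. The input sequence is a hypothetical
# fragment, not the RRSV genome segments.
STOP = {"TAA", "TAG", "TGA"}

def longest_orf(seq: str):
    """Return (start, end) of the longest ATG..stop ORF, 0-based half-open."""
    seq = seq.upper()
    best = (0, 0)
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i
            elif codon in STOP and start is not None:
                if i + 3 - start > best[1] - best[0]:
                    best = (start, i + 3)
                start = None
    return best

print(longest_orf("CCATGGCTGATAAGCGATGAAATAGCC"))  # -> (2, 20)
```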
Abstract:
Objectives This paper reports on the preferred learning styles of Registered Nurses practising in acute care environments and on relationships between gender, age, postgraduate experience and the identified preferred learning styles. Methods A prospective cohort study design was used. Participants completed a demographic questionnaire and the Felder-Silverman Index of Learning Styles (ILS) questionnaire to determine preferred learning styles. Results Most of the Registered Nurse participants were balanced across the Active-Reflective (n = 77, 54%) and Sequential-Global (n = 96, 68%) scales. Across the other scales, sensing (n = 97, 68%) and visual (n = 76, 53%) were the most common preferred learning styles. Only a small proportion had a preferred learning style of reflective (n = 21, 15%), intuitive (n = 5, 4%), verbal (n = 11, 8%) or global (n = 15, 11%). Results indicated that gender, age and years since undergraduate education were not related to the identified preferred learning styles. Conclusions The identification of Registered Nurses' learning styles provides information that nurse educators and others can use to make informed choices about the modification, development and strengthening of professional hospital-based educational programs. The use of the Index of Learning Styles questionnaire and its ability to identify ‘balanced’ learning style preferences may potentially yield additional preferred learning style information for other health-related disciplines.
Abstract:
There are limited studies describing patient meal preferences in hospital; however, these data are critical to developing menus that address satisfaction and nutrition whilst balancing resources. This quality study aimed to determine preferences for meals and snacks to inform a comprehensive menu revision in a large (929 bed) tertiary public hospital. The method was based on Vivanti et al. (2008), with data collected by two final-year dietetic students. The first survey comprised 72 questions and achieved a response rate of 68% (n = 192); the second, more focused at 47 questions, achieved a higher response rate of 93% (n = 212). Findings showed over half the patients reported poor or less than normal appetite, 20% described taste issues, over a third had a LOS > 7 days, a third had an MST ≥ 2, and fewer than half ate only from the general menu. Soup then toast was most frequently reported as eaten at home when unwell, and whilst the most common single response was that no foods were missed in hospital (25%), steak was the most commonly missed food. Hot breakfasts were desired by the majority (63%), with over half preferring toast (even if cold). In relation to snacks, nearly half (48%) wanted something more substantial than tea/coffee/biscuits, with sandwiches (54%) and soup (33%) being suggested. Sandwiches at the evening meal were not popular (6%). Difficulties with using cutlery and with meal size selection were identified as issues. Findings from this study had high utility and supported a collaborative and evidence-based approach to a successful major menu change for the hospital.
Abstract:
Background Lumbar epidural steroid injections (ESIs) have previously been shown to provide some degree of pain relief in sciatica. The number needed to treat (NNT) to achieve 50% pain relief has been estimated at 7 from the results of randomised controlled trials. Pain relief is temporary. ESIs remain one of the most commonly provided procedures in the UK. It is unknown whether this pain relief represents good value for money. Methods 228 patients were randomised into a multi-centre double-blind randomised controlled trial. Subjects received up to 3 ESIs or intra-spinous saline injections, depending on response and fall-off with the first injection. All other treatments were permitted. All received a review of analgesia, education and physical therapy. Quality of life was assessed using the SF-36 at six time points and compared using independent-sample t-tests. Follow-up was up to 1 year. Missing data were imputed using last observation carried forward (LOCF). QALYs (quality-adjusted life years) were derived from preference-based health values (summary health utility scores). The SF-6D health state classification was derived from SF-36 raw score data. Standard gambles (SG) were calculated using Model 10. SG scores were calculated from trial results. LOCF was not used for this; instead, average SG values were derived for a subset of patients with observations for all visits up to week 12. Incremental QALYs were derived as the area between the SG curves for the active and placebo groups. Results SF-36 domains showed a significant improvement in pain at week 3, but this was not sustained (mean 54 active vs 61 placebo, P<0.05). Other domains did not show any significant gains compared with placebo. For the derivation of SG, the number in the sample in each period differed. By week 12, average SG scores for active and placebo converged; in other words, the health gain for the active group as measured by SG was achieved by the placebo group by week 12. The incremental QALY gained for a patient under the trial protocol compared with the standard care package was 0.0059350, equivalent to an additional 2.2 days of full health. The cost per QALY gained to the provider from a patient management strategy administering one epidural, as suggested by the results, was £25,745.68. This result was derived assuming that the QALY gain calculated for patients under the trial protocol would approximate that under a patient management strategy based on the trial results (one ESI). This is above the threshold suggested by some as cost-effective. Conclusions The transient benefit in pain relief afforded by ESIs does not appear to be cost-effective. Further work is needed to develop more cost-effective conservative treatments for sciatica.
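The incremental QALY reported above is the area between the active and placebo standard-gamble utility curves (0.0059350 QALY over 365 days is indeed about 2.2 days of full health), and the cost per QALY is the incremental cost divided by that area. A numerical sketch follows; the utilities and the cost figure are hypothetical, not the trial data.

```python
# Sketch: incremental QALYs as the trapezoidal area between standard-gamble
# utility curves, then cost per QALY. All values below are hypothetical.
import numpy as np

weeks = np.array([0.0, 3.0, 6.0, 12.0])
sg_active = np.array([0.60, 0.68, 0.66, 0.64])
sg_placebo = np.array([0.60, 0.63, 0.64, 0.64])

diff = sg_active - sg_placebo
# Trapezoidal area between the curves, in utility-weeks, converted to years.
area_weeks = float(np.sum((diff[:-1] + diff[1:]) / 2.0 * np.diff(weeks)))
inc_qaly = area_weeks / 52.0

cost_of_esi = 152.0  # hypothetical incremental cost (GBP)
print(f"incremental QALY = {inc_qaly:.6f}")
print(f"cost per QALY = £{cost_of_esi / inc_qaly:,.0f}")
```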
Abstract:
The absence of comparative validity studies has prevented researchers from reaching consensus regarding the application of intensity-related accelerometer cut points for children and adolescents. PURPOSE This study aimed to evaluate the classification accuracy of five sets of independently developed ActiGraph cut points using energy expenditure, measured by indirect calorimetry, as a criterion reference standard. METHODS A total of 206 participants between the ages of 5 and 15 yr completed 12 standardized activity trials. Trials consisted of sedentary activities (lying down, writing, computer game), lifestyle activities (sweeping, laundry, throw and catch, aerobics, basketball), and ambulatory activities (comfortable walk, brisk walk, brisk treadmill walk, running). During each trial, participants wore an ActiGraph GT1M, and VO2 was measured breath-by-breath using the Oxycon Mobile portable metabolic system. Physical activity intensity was estimated using five independently developed cut points: Freedson/Trost (FT), Puyau (PU), Treuth (TR), Mattocks (MT), and Evenson (EV). Classification accuracy was evaluated via weighted κ statistics and area under the receiver operating characteristic curve (ROC-AUC). RESULTS Across all four intensity levels, the EV (κ = 0.68) and FT (κ = 0.66) cut points exhibited significantly better agreement than TR (κ = 0.62), MT (κ = 0.54), and PU (κ = 0.36). The EV and FT cut points exhibited significantly better classification accuracy for moderate- to vigorous-intensity physical activity (ROC-AUC = 0.90) than the TR, PU, or MT cut points (ROC-AUC = 0.77-0.85). Only the EV cut points provided acceptable classification accuracy for all four levels of physical activity intensity and performed well among children of all ages. The widely applied sedentary cut point of 100 counts per minute exhibited excellent classification accuracy (ROC-AUC = 0.90). CONCLUSIONS On the basis of these findings, we recommend that researchers use the EV ActiGraph cut points to estimate time spent in sedentary, light-, moderate-, and vigorous-intensity activity in children and adolescents. Copyright © 2011 by the American College of Sports Medicine.
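The two accuracy metrics used above, weighted kappa and ROC-AUC, can be computed with scikit-learn as sketched below; the intensity labels and count values are hypothetical, not the study data.

```python
# Sketch: the two agreement metrics used for classification accuracy.
# Labels are hypothetical intensity categories (0 = sedentary ... 3 = vigorous).
from sklearn.metrics import cohen_kappa_score, roc_auc_score

criterion = [0, 1, 2, 3, 2, 1, 0, 3, 2, 1]  # from indirect calorimetry
predicted = [0, 1, 2, 2, 2, 1, 0, 3, 1, 1]  # from one cut-point set

kappa = cohen_kappa_score(criterion, predicted, weights="linear")
print(f"weighted kappa = {kappa:.2f}")

# ROC-AUC for detecting MVPA (intensity >= 2) from raw activity counts.
is_mvpa = [1 if y >= 2 else 0 for y in criterion]
counts = [120, 800, 2400, 5200, 2100, 900, 60, 6100, 1500, 1100]  # hypothetical cpm
print(f"ROC-AUC = {roc_auc_score(is_mvpa, counts):.2f}")
```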