456 results for Nutrition-associated Complications
Abstract:
Background: Intra-amniotic infection accounts for 30% of all preterm births (PTB), with the human Ureaplasma species being the microorganisms most frequently identified in the placentas of women who deliver preterm. The highest prevalence of PTB occurs in the late preterm period (32-36 weeks), but no studies have investigated the role of infectious aetiologies in late preterm birth. Method: Placentas from women with late PTB were dissected aseptically, and samples of chorioamnion tissue and membrane swabs were collected. These were tested for Ureaplasma spp. and aerobic/anaerobic bacteria by culture and real-time PCR. Western blot was used to assess variation of the multiple banded antigen (MBA) in ureaplasma clinical isolates. The presence of microorganisms was correlated with histological chorioamnionitis. Results: Ureaplasma spp. were detected in 33/466 (7%) of placentas by culture or PCR. The presence of ureaplasmas, but not of other microorganisms, was associated with histological chorioamnionitis (21/33 ureaplasma-positive vs. 8/42 other bacteria; p = 0.001). Ureaplasma clinical isolates demonstrating no MBA variation were associated with histological chorioamnionitis. By contrast, ureaplasmas displaying MBA variation were isolated from placentas with no significant histological chorioamnionitis (p = 0.001). Conclusion: The presence of Ureaplasma spp. within placentas delivered late preterm (7%) was associated with histological chorioamnionitis (p = 0.001). Decreased inflammation within the chorioamnion was observed when the clinical ureaplasma isolates demonstrated variation of their surface-exposed lipoproteins (MBA). This variation may be a mechanism by which ureaplasmas modulate and evade the host immune response. Thus, although ureaplasmas may be present intra-amniotically, they are often not suspected because the placentas and amniotic fluid appear macroscopically normal.
Abstract:
Background: Optimal adherence to antiretroviral therapy (ART) is essential for people living with HIV/AIDS (PLHIV), yet there have been relatively few systematic analyses of the factors that promote or inhibit adherence among PLHIV in Asia. This study assessed ART adherence and examined factors associated with suboptimal adherence in northern Viet Nam. Methods: Data from 615 PLHIV on ART in two urban and three rural outpatient clinics were collected by medical record extraction and from patient interviews using audio computer-assisted self-interview (ACASI). Results: The prevalence of suboptimal adherence was estimated at 24.9% by a visual analogue scale (VAS) of missed doses in the past month and 29.1% by a modified Adult AIDS Clinical Trials Group scale of on-time dose-taking in the past 4 days. Factors significantly associated with the more conservative VAS score were: depression (p < 0.001), side-effect experiences (p < 0.001), heavy alcohol use (p = 0.001), chance health locus of control (p = 0.003), low perceived quality of information from care providers (p = 0.04) and low social connectedness (p = 0.03). Illicit drug use alone was not significantly associated with suboptimal adherence, but it interacted with heavy alcohol use to reduce adherence (p < 0.001). Conclusions: This is the largest survey of ART adherence yet reported from Asia and the first in a developing country to use the ACASI method in this context. The evidence strongly indicates that ART services in Viet Nam should include screening and treatment for depression, linkage with alcohol and/or drug dependence treatment, and counselling to address the belief that chance or luck determines health outcomes.
Abstract:
It is hypothesized that increased plasma or serum concentrations of extracellular heat shock proteins (eHSP) serve as a danger signal to the innate immune system. Cellular binding of eHSP leads to activation of NK cells and monocytes, as measured by their increased cytokine production, mitotic division and killing capacity. We examined whether eHSP binds to NK lymphocytes in vivo in athletes performing endurance exercise in the heat. Eighteen trained male runners ran at 70% VO2max at 35 °C and 40% relative humidity. Venous blood collected before, after and 1.5 h after exercise was analysed for leukocyte distribution, phenotype and eHSP70. NK cell-enriched samples were examined for co-localization of CD94 and eHSP70 expression. Plasma eHSP70 concentration was measured by ELISA. Subjects ran for approximately 50 min, which elicited a reversible leukocytosis. NK cell count increased 83% (p < 0.01) immediately after exercise, then decreased to 66% of the resting level 1.5 h after exercise (p < 0.05). Plasma eHSP concentration increased 167% after exercise and remained elevated (by up to 71%) 1.5 h after exercise (p < 0.01). eHSP was expressed on both NK cells and monocytes at all times; the count of NK cells positive for eHSP doubled from 0.04 ± 0.02 × 10⁹/L (mean ± SD) to 0.08 ± 0.06 × 10⁹/L after exercise. In summary, exercise in the heat increased free plasma eHSP concentration, and the eHSP co-localized with CD94 on NK cells. These data confirm the link between exercise and activation of the innate immune system.
Abstract:
Ascorbic acid or vitamin C is involved in a number of biochemical pathways that are important to exercise metabolism and the health of exercising individuals. This review reports the results of studies investigating the requirement for vitamin C with exercise on the basis of dietary vitamin C intakes, the response to supplementation and alterations in plasma, serum, and leukocyte ascorbic acid concentration following both acute exercise and regular training. The possible physiological significance of changes in ascorbic acid with exercise is also addressed. Exercise generally causes a transient increase in circulating ascorbic acid in the hours following exercise, but a decline below pre-exercise levels occurs in the days after prolonged exercise. These changes could be associated with increased exercise-induced oxidative stress. On the basis of alterations in the concentration of ascorbic acid within the blood, it remains unclear if regular exercise increases the metabolism of vitamin C. However, the similar dietary intakes and responses to supplementation between athletes and nonathletes suggest that regular exercise does not increase the requirement for vitamin C in athletes. Two novel hypotheses are put forward to explain recent findings of attenuated levels of cortisol postexercise following supplementation with high doses of vitamin C.
Abstract:
Monitoring foodservice satisfaction is a risk management strategy for malnutrition in the acute care sector, as low satisfaction may be associated with poor intake. This study aimed to investigate the relationship between age and foodservice satisfaction in the private acute care setting. Patient satisfaction was assessed using a validated tool, the Acute Care Hospital Foodservice Patient Satisfaction Questionnaire, for data collected 2008–2010 (n = 779) at a private hospital in Brisbane. Age was grouped into three categories: ≤50 years, 51–70 years and >70 years. Fisher's exact test assessed independence of categorical responses and age group; ANOVA or the Kruskal–Wallis test was used for continuous variables. Dichotomised responses were analysed using logistic regression and odds ratios (95% confidence interval, p < 0.05). Overall foodservice satisfaction (5-point scale) was high (≥4 out of 5) and was independent of age group (p = 0.377). There was an increasing trend with age in mean satisfaction scores for individual dimensions of foodservice: food quality (p < 0.001), meal service quality (p < 0.001), staff service issues (p < 0.001) and physical environment (p < 0.001). A preference for being able to choose different sized meals (59.8% >70 years vs 40.6% ≤50 years; p < 0.001) and the response to 'the foods are just the right temperature' (55.3% >70 years vs 35.9% ≤50 years; p < 0.001) were dependent on age. For the food quality dimension, based on dichotomised responses (satisfied or not), the odds of satisfaction were higher for >70 years (OR = 5.0, 95% CI: 1.8–13.8; ≤50 years referent). These results suggest that dimensions of foodservice satisfaction are associated with age and can assist foodservices to meet varying generational expectations of clients.
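As an aside for readers unfamiliar with the dichotomised-response analysis used above: an odds ratio with its 95% confidence interval can be obtained from a logistic regression by exponentiating the fitted coefficient and its interval. A minimal sketch in Python with statsmodels, using toy data and hypothetical column names (not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical toy data: 1 = satisfied with food quality
df = pd.DataFrame({
    "satisfied":   [1, 0, 1, 1, 0, 1, 0, 1],
    "age_over_70": [1, 0, 1, 1, 0, 0, 1, 1],  # referent group: <=50 years
})

X = sm.add_constant(df[["age_over_70"]])
fit = sm.Logit(df["satisfied"], X).fit(disp=0)

print(np.exp(fit.params))      # odds ratio = exp(coefficient)
print(np.exp(fit.conf_int()))  # 95% confidence interval on the OR scale
```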
Abstract:
Introduction: Participants may respond to phases of a workplace walking program at different rates. This study evaluated the factors that contributed to step counts through the phases of an intervention, automated through a web-based program, designed to increase workday walking. Methods: The study examined the influence of independent variables throughout phases I–III. A convenience sample of university workers (n = 56; 43.6 ± 1.7 years; BMI 27.44 ± 2.15 kg/m2; 48 female) was recruited at worksites in Australia. These workers were given a pedometer (Yamax SW-200) and access to the website program. For analyses, step counts entered by workers into the website were downloaded and mean workday steps were compared using a seemingly unrelated regression, a model employed to capture the contemporaneous correlation within individuals across the observed time periods. Results: The model predicts that the 36 subjects with complete information took an average of 7460 steps in the baseline two-week period. After phase I, statistically significant increases in steps (from baseline) were explained by age, working status (full or part time), occupation (academic or professional), and self-reported public transport (PT) use (marginally significant). Full-time workers walked about 440 steps more than part-time workers, professionals walked about 300 steps more than academics, and PT users walked about 400 steps more than non-PT users. The ability to differentiate steps among participants after two weeks suggests a differential effect of the program after only two weeks. On average, participants increased steps from week two to four by about 525 steps, but regular car users took nearly 750 steps fewer than non-car users at week four. The effect of age was diminished in the fourth week of observation, accounting for 34 steps per year of age. In phase III, discriminating between participants became more difficult, with only age effects differentiating their increase over baseline. The marginal effect of age increased from 36 steps per year in phase I to 50 in phase III, a 14-step-per-year increase from the second to the sixth week. Discussion: The findings suggest that participants responded to the program at different rates, with uniformity of effect achieved by the sixth week. Participants increased steps, but a tapering off occurred over time. Age played the most consistent role in predicting steps over the program. PT use was associated with increased step counts, while car use was associated with decreased step counts.
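For readers unfamiliar with seemingly unrelated regression (SUR): it fits one equation per time period while allowing the error terms of the same individual to be contemporaneously correlated across equations. A hedged sketch using the third-party linearmodels package; the regressors and step values below are invented placeholders, not the study's specification:

```python
import numpy as np
import pandas as pd
from linearmodels.system import SUR  # third-party: pip install linearmodels

rng = np.random.default_rng(0)
n = 36  # subjects with complete information, as reported in the abstract

# Hypothetical regressors; the study's covariates included age,
# working status, occupation and transport use.
exog = pd.DataFrame({
    "const": np.ones(n),
    "age": rng.normal(44, 2, n),
    "full_time": rng.integers(0, 2, n).astype(float),
})

# One equation per observation period; SUR lets the errors of the same
# individual correlate across the equations.
equations = {
    "baseline_steps": {"dependent": pd.Series(rng.normal(7460, 900, n)), "exog": exog},
    "phase1_steps":   {"dependent": pd.Series(rng.normal(7900, 900, n)), "exog": exog},
}
res = SUR(equations).fit()
print(res)
```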
Abstract:
Objective: Several new types of contraception became available in Australia over the last twelve years (the implant in 2001, the progestogen intra-uterine device (IUD) in 2003, and the vaginal contraceptive ring in 2007). Most methods of contraception require access to health services, and permanent sterilisation and the insertion of an implant or IUD involve a surgical procedure. Access to health professionals providing these specialised services may be more difficult in rural areas. This paper examines the uptake of permanent or long-acting reversible contraceptive (LARC) methods among Australian women in rural areas compared to women in urban areas. Method: Participants in the Australian Longitudinal Study on Women's Health born in 1973-78 reported on their contraceptive use at three surveys: 2003, 2006 and 2009. Contraceptive methods included permanent sterilisation (tubal ligation, vasectomy), non-daily or LARC methods (implant, IUD, injection, vaginal ring), and other methods including daily, barrier or "natural" methods (oral contraceptive pills, condoms, withdrawal, safe period). Sociodemographic, reproductive history and health service use factors associated with using permanent, LARC or other methods were examined using multivariable logistic regression analysis. Results: Of 9,081 women aged 25-30 in 2003, 3% used permanent methods and 4% used LARCs. Six years later, in 2009, of 8,200 women (aged 31-36), 11% used permanent methods and 9% used LARCs. The fully adjusted parsimonious regression model showed that the likelihood of a woman using LARCs or permanent methods increased with the number of children. Women whose youngest child was school-aged were more likely to use LARCs (OR=1.83, 95%CI 1.43-2.33) or permanent methods (OR=4.39, 95%CI 3.54-5.46) compared to women with pre-school children. Compared to women living in major cities, women in inner regional areas were more likely to use LARCs (OR=1.26, 95%CI 1.03-1.55) or permanent methods (OR=1.43, 95%CI 1.17-1.76). Women living in outer regional and remote areas were more likely than women living in cities to use LARCs (OR=1.65, 95%CI 1.31-2.08) or permanent methods (OR=1.69, 95%CI 1.43-2.14). Women with poorer access to GPs were more likely to use permanent methods (OR=1.27, 95%CI 1.07-1.52). Conclusions: Location of residence and access to health services are important factors in women's choices about long-acting contraception, in addition to the number and age of their children. Uptake of non-daily, long-acting methods of contraception remains low among Australian women in their mid-thirties.
Abstract:
Previous research employing indirect measures of arch structure, such as those derived from footprints, has indicated that obesity results in a "flatter" foot type. In the absence of radiographic measures, however, definitive conclusions regarding the osseous alignment of the foot cannot be made. We determined the effect of body mass index (BMI) on radiographic and footprint-based measures of arch structure. The research was a cross-sectional study in which radiographic and footprint-based measures of foot structure were made in 30 subjects (10 males, 20 females), in addition to standard anthropometric measures of height, weight, and BMI. Multiple (univariate) regression analysis demonstrated that both BMI (β = 0.39, t(26) = 2.12, p = 0.04) and radiographic arch alignment (β = 0.51, t(26) = 3.32, p < 0.01) were significant predictors of footprint-based measures of arch height after controlling for all variables in the model (R² = 0.59, F(3,26) = 12.3, p < 0.01). In contrast, radiographic arch alignment was not significantly associated with BMI (β = −0.03, t(26) = −0.13, p = 0.89) when Arch Index and age were held constant (R² = 0.52, F(3,26) = 9.3, p < 0.01). Adult obesity does not influence osseous alignment of the medial longitudinal arch, but it selectively distorts footprint-based measures of arch structure, which should therefore be interpreted with caution when comparing groups of varying body composition.
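The standardized coefficients (β) reported above can be reproduced by fitting ordinary least squares to z-scored variables, since the slopes of a regression on standardized variables are the beta weights. A sketch under that assumption, with hypothetical file and column names (not the study's data):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical file/column names for illustration only.
df = pd.read_csv("arch_data.csv")
cols = ["arch_index", "bmi", "radiographic_angle", "age"]

# z-scoring every variable makes the OLS slopes standardized beta weights
z = (df[cols] - df[cols].mean()) / df[cols].std()
X = sm.add_constant(z[["bmi", "radiographic_angle", "age"]])
res = sm.OLS(z["arch_index"], X).fit()

print(res.params)   # standardized betas
print(res.tvalues)  # t statistics; with n = 30 and 3 predictors, df = 26
```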
Abstract:
Introduction: There is a recognised relationship between dry weather conditions and increased risk of anterior cruciate ligament (ACL) injury. Previous studies have identified 28-day evaporation as an important weather-based predictor of non-contact ACL injuries in professional Australian Football League matches. The mechanism of non-contact ACL injury is believed to involve increased traction and impact forces between footwear and the playing surface. Ground hardness and the amount and quality of grass are the factors most likely to influence this, and they are, in turn, related to soil moisture content and prevailing weather conditions. This paper explores the relationship between soil moisture content, preceding weather conditions and the Clegg Soil Impact Test (CSIT), an internationally recognised standard measure of ground hardness for sports fields. Methodology: A 2.25 kg Clegg Soil Impact Test hammer and a pair of 12 cm soil moisture probes were used to measure ground hardness and percentage moisture content. Five football fields were surveyed at 13 prescribed sites just before seven matches of a women's W-League football team from October 2008 to January 2009. Weather conditions recorded at the nearest weather station were obtained from the Bureau of Meteorology website, and total rainfall less evaporation was calculated for the 7 and 28 days prior to each match. All non-contact injuries occurring during match play, and their location on the field, were recorded. Results/conclusions: Ground hardness varied between CSIT 5 and 17 (×10 G); 8 is considered a good value for sports fields. Variations within fields were typically greatest in the centre and goal areas. Soil moisture ranged from 3 to 40%, with some fields requiring twice the moisture content of others to maintain similar CSIT values. There was a non-linear, negative relationship between ground hardness and moisture content and a linear relationship with weather (R² of 0.30 and 0.34, respectively). Three non-contact ACL injuries occurred during the season, two of which were associated with hard and variable ground conditions.
Abstract:
Introduction: The human patellar tendon is highly adaptive to changes in habitual loading, but little is known about its acute mechanical response to exercise. This research evaluated the immediate transverse strain response of the patellar tendon to a bout of resistive quadriceps exercise. Methods: Twelve healthy adult males (mean age 34.0 ± 12.1 years, height 1.75 ± 0.09 m and weight 76.7 ± 12.3 kg) free of knee pain participated in the research. A 10-5 MHz linear-array transducer was used to acquire standardised sagittal sonograms of the right patellar tendon immediately prior to and following 90 repetitions of a double-leg parallel-squat exercise performed against a resistance of 175% bodyweight. Tendon thickness was determined 20 mm distal to the pole of the patella, and transverse Hencky strain was calculated as the natural log of the ratio of post- to pre-exercise tendon thickness, expressed as a percentage. Measures of tendon echotexture (echogenicity and entropy) were also calculated from subsequent gray-scale profiles. Results: Quadriceps exercise resulted in an immediate decrease in patellar tendon thickness (P < .05), equating to a transverse strain of −22.5 ± 3.4%, and was accompanied by increased tendon echogenicity (P < .05) and decreased entropy (P < .05). The transverse strain response of the patellar tendon was significantly correlated with both tendon echogenicity (r = −0.58, P < .05) and entropy following exercise (r = 0.73, P < .05), while older age was associated with greater entropy of the patellar tendon prior to exercise (r = 0.79, P < .05) and a reduced transverse strain response following exercise (r = 0.61, P < .05). Conclusions: This study is the first to show that quadriceps exercise induces structural realignment and fluid movement within the tendon matrix, manifest as changes in echotexture and transverse strain in the patellar tendon.
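The transverse Hencky strain defined above is simply the natural log of the post- to pre-exercise thickness ratio, expressed as a percentage. A worked example; the 4.0 mm and 3.2 mm thicknesses are illustrative values, not measurements from the study:

```python
import math

def transverse_hencky_strain(pre_mm: float, post_mm: float) -> float:
    """Transverse Hencky strain (%) = ln(post / pre) * 100."""
    return math.log(post_mm / pre_mm) * 100.0

# Illustrative thicknesses: a tendon thinning from 4.0 mm to 3.2 mm
print(round(transverse_hencky_strain(4.0, 3.2), 1))  # -22.3, near the reported -22.5% group mean
```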
Abstract:
This research evaluated the effect of obesity on the acute cumulative transverse strain of the Achilles tendon in response to exercise. Twenty healthy adult males were categorized into 'low normal-weight' (BMI <23 kg/m²) and 'overweight' (BMI >27.5 kg/m²) groups based on intermediate cut-off points recommended by the World Health Organization. Longitudinal sonograms of the right Achilles tendon were acquired immediately prior to and following weight-bearing ankle exercises. Achilles tendon thickness was measured 20 mm proximal to the calcaneal insertion, and transverse tendon strain was calculated as the natural log of the ratio of post- to pre-exercise tendon thickness. The Achilles tendon was thicker in the overweight group both prior to (t(18) = −2.91, P = 0.009) and following (t(18) = −4.87, P < 0.001) exercise. The acute transverse strain response of the Achilles tendon in the overweight group (−10.7 ± 2.5%), however, was almost half that of the 'low normal-weight' group (−19.5 ± 7.4%) (t(18) = −3.56, P = 0.004). These findings suggest that obesity is associated with structural changes in tendon that impair intra-tendinous fluid movement in response to load, and they provide new insights into the link between tendon pathology and overweight and obesity.
Abstract:
Background: Malnutrition before and during chemotherapy is associated with poor treatment outcomes. The risk of cancer-related malnutrition is exacerbated by common nutrition impact symptoms during chemotherapy, such as nausea, diarrhoea and mucositis. Aim of presentation: To describe the prevalence of malnutrition and malnutrition risk in two samples of patients treated in a quaternary-level chemotherapy unit. Research design: Cross-sectional survey. Sample 1: Patients ≥65 years prior to chemotherapy treatment (n = 175). Instrument: the nurse-administered Malnutrition Screening Tool to screen for malnutrition risk, plus body mass index (BMI). Sample 2: Patients ≥18 years receiving chemotherapy (n = 121). Instrument: the dietitian-administered Patient-Generated Subjective Global Assessment to assess malnutrition, malnutrition risk and BMI. Findings, Sample 1: 93/175 (53%) of older patients were at risk of malnutrition prior to chemotherapy; 27 (15%) were underweight (BMI <21.9) and 84 (48%) were overweight (BMI >27). Findings, Sample 2: 31/121 patients (26%) were malnourished; 12 (10%) had intake-limiting nausea or vomiting; 22 (20%) reported significant weight loss; and 20 (18%) required improved nutritional symptom management during treatment. Thirteen participants with malnutrition or nutrition impact symptoms (35%) had no dietitian contact; the majority of these participants were overweight. Implications for nursing: Patients with, or at risk of, malnutrition before and during chemotherapy can be overlooked, particularly if they are overweight, and older patients seem particularly at risk. Nurses can easily and quickly identify risk with regular use of the Malnutrition Screening Tool and refer patients to expert dietetic support to help ensure optimal treatment outcomes.
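Sample 1 classifies older patients by BMI using the cut-offs reported above (<21.9 underweight, >27 overweight). A small sketch of that categorisation; the function names are ours, and only the cut-offs come from the abstract:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def bmi_category_older_patient(bmi_value: float) -> str:
    # Cut-offs as reported for Sample 1 in the abstract above
    if bmi_value < 21.9:
        return "underweight"
    if bmi_value > 27:
        return "overweight"
    return "acceptable range"

print(bmi_category_older_patient(bmi(52.0, 1.60)))  # BMI 20.3 -> underweight
```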
Abstract:
Objective: Comprehensive, accurate information about road crashes and related trauma is a prerequisite for the identification and control of risk factors, as well as for identifying faults within the broader road safety system. Quality data and appropriate crash investigation are critical to reducing the road toll that is rapidly growing in much of the developing world, including Pakistan. This qualitative research explored the involvement of social and cultural factors (in particular, fatalism) in risky road use in Pakistan. The findings highlight a significant issue previously unreported in the road safety literature, namely the link between fatalistic beliefs and inaccurate reporting of road crashes. Method: Thirty one-to-one interviews were conducted by the first author with police officers, drivers, policy makers and religious orators in three Pakistani cities. Findings: Evidence emerged of a strong link between fatalism and the under-reporting of road crashes. In many cases, crashes and related road trauma appear to go unreported because a crash is considered to be one's fate and, therefore, beyond personal control. Fate was also implicated in the practice of reconciliation between parties after a crash without police involvement, and in the seeking and granting of pardon for a road death. Conclusions: These issues represent additional factors that can contribute to the under-reporting of crashes and associated trauma. Together, they highlight the complications involved in establishing the true cost of road trauma in a country such as Pakistan and the difficulties faced when attempting to promote scientifically based road safety information to counteract faith-based beliefs.
Abstract:
In developed countries the relationship between socioeconomic position (SEP) and health is unequivocal. Those who are socioeconomically disadvantaged are known to experience higher morbidity and mortality from a range of chronic diet-related conditions compared to those of higher SEP. Socioeconomic inequalities in diet are well established. Compared to their more advantaged counterparts, those of low SEP are consistently found to consume diets less in line with dietary guidelines (i.e. higher in fat, salt and sugar and lower in fibre, fruit and vegetables). Although the reasons for dietary inequalities remain unclear, understanding how such differences arise is important for the development of strategies to reduce health inequalities. Both environmental (e.g. proximity of supermarkets, price, and availability of foods) and psychosocial (e.g. taste preference, nutrition knowledge) influences are proposed to account for inequalities in food choices. Although environmental factors are associated with socioeconomic differences in food choices in the United States (US), the United Kingdom (UK) and parts of Australia, these factors do not completely account for the observed inequalities. Internationally, this context has prompted calls for further exploration of the role of psychological and social factors in relation to inequalities in food choices. It is this task that forms the primary goal of this PhD research. In the small body of research examining the contribution of psychosocial factors to inequalities in food choices, studies have focussed on food cost concerns, nutrition knowledge or health concerns. These factors are generally found to be influential. However, since a range of psychosocial factors are known determinants of food choices in the general population, it is likely that a range of factors also contribute to inequalities in food choices. Identification of additional psychosocial factors of relevance to inequalities in food choices would provide new opportunities for health promotion, including the adaptation of existing strategies. The methodological features of previous research have also hindered the advancement of knowledge in this area, and a lack of qualitative studies has resulted in a dearth of descriptive information on this topic. This PhD investigation extends previous research by assessing a range of psychosocial factors in relation to inequalities in food choices using both quantitative and qualitative techniques. Secondary data analyses were undertaken using data obtained from two Brisbane-based studies: the Brisbane Food Study (N=1003, conducted in 2000) and the Sixty Families Study (N=60, conducted in 1998). Both studies involved main household food purchasers completing an interviewer-administered survey within their own home, collecting data on food purchasing and on psychosocial, socioeconomic and demographic characteristics. The mutual goals of the qualitative and quantitative phases of this investigation were to assess socioeconomic differences in food purchasing and to identify psychosocial factors relevant to any observed differences. The quantitative methods additionally considered whether the associations examined differed according to the socioeconomic indicator used (i.e. income or education). The qualitative analyses made a unique contribution to this project by generating detailed descriptions of socioeconomic differences in psychosocial factors.
Those with lower levels of income and education were found to make food purchasing choices less consistent with dietary guidelines compared to those of high SEP. The psychosocial factors identified as relevant to food-purchasing inequalities were: taste preferences, health concerns, health beliefs, nutrition knowledge, nutrition concerns, weight concerns, nutrition label use, and several other values and beliefs unique to particular socioeconomic groups. Factors more tenuously or inconsistently related to socioeconomic differences in food purchasing were cost concerns and perceived adequacy of the family diet. Both the quantitative and qualitative analyses suggested that psychosocial factors contribute to inequalities in food purchasing in a collective manner. The quantitative analyses revealed that considerable overlap in the socioeconomic variation in food purchasing was accounted for by key psychosocial factors, including taste preference, nutrition concerns, nutrition knowledge, and health concerns. Consistent with these findings, the qualitative transcripts demonstrated the interplay between such influential psychosocial factors in determining food-purchasing choices. The qualitative analyses also found socioeconomic differences in the prioritisation of psychosocial factors in relation to food choices, suggestive of complex cultural factors that distinguish advantaged and disadvantaged groups and result in socioeconomically distinct schemas related to health and food choices. Compared to those of high SEP, those of lower SEP were less likely to indicate that health concerns, nutrition concerns or food labels influenced food choices, and they exhibited lower levels of nutrition knowledge. In the absence of health or nutrition-related concerns, taste preferences tended to dominate the food purchasing choices of those of low SEP. Overall, while cost concerns did not appear to be a main determinant of socioeconomic differences in food purchasing, this factor had a dominant influence on the food choices of some of the most disadvantaged respondents included in this research. The findings of this study have several implications for health promotion. The integrated operation of psychosocial factors on food purchasing inequalities indicates that it may be appropriate to target multiple psychosocial factors in health promotion. It also seems possible that the inter-relatedness of psychosocial factors would allow health promotion targeting a single psychosocial factor to have a flow-on effect, altering other influential psychosocial factors. This research also suggests that current mass marketing approaches to health promotion may not be effective across all socioeconomic groups, owing to differences across groups in the priorities and main factors of influence in food purchasing decisions. In addition to these practical recommendations for health promotion, this investigation, through its critique of previous research and its substantive findings, has highlighted important methodological considerations for future research, particularly the selection of socioeconomic indicators, the measurement of relevant constructs, the consideration of confounders, and the development of an analytical approach. Addressing inequalities in health has been noted as a main objective by many health authorities and governments internationally.
It is envisaged that the substantive and methodological findings of this thesis will make a useful contribution towards this important goal.
Abstract:
Objectives: To identify and appraise the literature concerning nurse-administered procedural sedation and analgesia in the cardiac catheter laboratory (CCL). Design and data sources: An integrative review method was chosen for this study. The MEDLINE and CINAHL databases, as well as The Cochrane Database of Systematic Reviews and the Joanna Briggs Institute, were searched, identifying nineteen research articles and three clinical guidelines. Results: The authors of each study reported that nurse-administered sedation in the CCL is safe because of the low incidence of complications. However, a higher percentage of deeply sedated patients than moderately sedated patients were reported to experience complications. Confounding this issue, one clinical guideline permits deep sedation without an anaesthetist present, while others recommend against it. All clinical guidelines recommend that nurses be educated about sedation concepts. Other findings focus on pain and discomfort and on the cost savings of nurse-administered sedation, which are associated with forgoing anaesthetic services. Conclusions: Practice varies because of limitations in the evidence and inconsistent clinical practice guidelines; recommendations for research and practice have therefore been made. Research topics include determining how and in which circumstances capnography can be used in the CCL, discerning the economic impact of sedation-related complications, and developing a set of objectives for nursing education about sedation. For practice, if deep sedation is administered without an anaesthetist present, it is essential that nurses are adequately trained and have access to vital equipment such as capnography to monitor ventilation, because deeply sedated patients are more likely to experience sedation-related complications. These initiatives will go some way towards ensuring that patients receiving nurse-administered procedural sedation and analgesia for a procedure in the cardiac catheter laboratory are cared for using consistent, safe and evidence-based practices.