922 results for Central pulse pressure
Abstract:
Background Procedural sedation and analgesia (PSA) is used to attenuate the pain and distress that may otherwise be experienced during diagnostic and interventional medical or dental procedures. As the risk of adverse events increases with the depth of sedation induced, frequent monitoring of level of consciousness is recommended. Level of consciousness is usually monitored during PSA with clinical observation. Processed electroencephalogram-based depth of anaesthesia (DoA) monitoring devices provide an alternative method to monitor level of consciousness that can be used in addition to clinical observation. However, there is uncertainty as to whether their routine use in PSA would be justified. Rigorous evaluation of the clinical benefits of DoA monitors during PSA, including comprehensive syntheses of the available evidence, is therefore required. One potential clinical benefit of using DoA monitoring during PSA is that the technology could improve patient safety by reducing sedation-related adverse events, such as death or permanent neurological disability. We hypothesise that earlier identification of lapses into deeper than intended levels of sedation using DoA monitoring leads to more effective titration of sedative and analgesic medications, and results in a reduction in the risk of adverse events caused by the consequences of over-sedation, such as hypoxaemia. The primary objective of this review is to determine whether using DoA monitoring during PSA in the hospital setting improves patient safety by reducing the risk of hypoxaemia (defined as an arterial partial pressure of oxygen below 60 mmHg or percentage of haemoglobin that is saturated with oxygen [SpO2] less than 90 %). Other potential clinical benefits of using DoA monitoring devices during sedation will be assessed as secondary outcomes. 
Methods/design Electronic databases will be systematically searched for randomized controlled trials comparing the use of depth of anaesthesia monitoring devices with clinical observation of level of consciousness during PSA. Language restrictions will not be imposed. Screening, study selection and data extraction will be performed by two independent reviewers. Disagreements will be resolved by discussion. Meta-analyses will be performed if suitable. Discussion This review will synthesise the evidence on an important potential clinical benefit of DoA monitoring during PSA within hospital settings.
Abstract:
Purpose To compare small nerve fiber damage in the central cornea and whorl area in participants with diabetic peripheral neuropathy (DPN) and to examine the accuracy of evaluating these 2 anatomical sites for the diagnosis of DPN. Methods A cohort of 187 participants (107 with type 1 diabetes and 80 controls) was enrolled. The neuropathy disability score (NDS) was used for the identification of DPN. The corneal nerve fiber length at the central cornea (CNFLcenter) and whorl (CNFLwhorl) was quantified using corneal confocal microscopy and a fully automated morphometric technique and compared according to DPN status. Receiver operating characteristic analyses were used to compare the accuracy of the 2 corneal locations for the diagnosis of DPN. Results CNFLcenter and CNFLwhorl were able to differentiate all 3 groups (diabetic participants with and without DPN and controls) (P < 0.001). There was a weak but significant linear relationship for CNFLcenter and CNFLwhorl versus NDS (P < 0.001); however, the corneal location × NDS interaction was not statistically significant (P = 0.17). The area under the receiver operating characteristic curve was similar for CNFLcenter and CNFLwhorl (0.76 and 0.77, respectively, P = 0.98). The sensitivity and specificity of the cutoff points were 0.9 and 0.5 for CNFLcenter and 0.8 and 0.6 for CNFLwhorl. Conclusions Small nerve fiber pathology is comparable at the central and whorl anatomical sites of the cornea. Quantification of CNFL from the corneal center is as accurate as CNFL quantification of the whorl area for the diagnosis of DPN.
Abstract:
BACKGROUND Quantification of the disease burden caused by different risks informs prevention by providing an account of health loss different to that provided by a disease-by-disease analysis. No complete revision of global disease burden caused by risk factors has been done since a comparative risk assessment in 2000, and no previous analysis has assessed changes in burden attributable to risk factors over time. METHODS We estimated deaths and disability-adjusted life years (DALYs; sum of years lived with disability [YLD] and years of life lost [YLL]) attributable to the independent effects of 67 risk factors and clusters of risk factors for 21 regions in 1990 and 2010. We estimated exposure distributions for each year, region, sex, and age group, and relative risks per unit of exposure by systematically reviewing and synthesising published and unpublished data. We used these estimates, together with estimates of cause-specific deaths and DALYs from the Global Burden of Disease Study 2010, to calculate the burden attributable to each risk factor exposure compared with the theoretical-minimum-risk exposure. We incorporated uncertainty in disease burden, relative risks, and exposures into our estimates of attributable burden. FINDINGS In 2010, the three leading risk factors for global disease burden were high blood pressure (7·0% [95% uncertainty interval 6·2-7·7] of global DALYs), tobacco smoking including second-hand smoke (6·3% [5·5-7·0]), and alcohol use (5·5% [5·0-5·9]). In 1990, the leading risks were childhood underweight (7·9% [6·8-9·4]), household air pollution from solid fuels (HAP; 7·0% [5·6-8·3]), and tobacco smoking including second-hand smoke (6·1% [5·4-6·8]). Dietary risk factors and physical inactivity collectively accounted for 10·0% (95% UI 9·2-10·8) of global DALYs in 2010, with the most prominent dietary risks being diets low in fruits and those high in sodium. 
Several risks that primarily affect childhood communicable diseases, including unimproved water and sanitation and childhood micronutrient deficiencies, fell in rank between 1990 and 2010, with unimproved water and sanitation accounting for 0·9% (0·4-1·6) of global DALYs in 2010. However, in most of sub-Saharan Africa childhood underweight, HAP, and non-exclusive and discontinued breastfeeding were the leading risks in 2010, while HAP was the leading risk in south Asia. The leading risk factor in Eastern Europe, most of Latin America, and southern sub-Saharan Africa in 2010 was alcohol use; in most of Asia, North Africa and Middle East, and central Europe it was high blood pressure. Despite declines, tobacco smoking including second-hand smoke remained the leading risk in high-income north America and western Europe. High body-mass index has increased globally and it is the leading risk in Australasia and southern Latin America, and also ranks high in other high-income regions, North Africa and Middle East, and Oceania. INTERPRETATION Worldwide, the contribution of different risk factors to disease burden has changed substantially, with a shift away from risks for communicable diseases in children towards those for non-communicable diseases in adults. These changes are related to the ageing population, decreased mortality among children younger than 5 years, changes in cause-of-death composition, and changes in risk factor exposures. New evidence has led to changes in the magnitude of key risks including unimproved water and sanitation, vitamin A and zinc deficiencies, and ambient particulate matter pollution. The extent to which the epidemiological shift has occurred and what the leading risks currently are varies greatly across regions. In much of sub-Saharan Africa, the leading risks are still those associated with poverty and those that affect children.
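The attributable-burden step described in the methods above follows the standard comparative-risk-assessment logic; the following is a hedged sketch of that calculation, where the symbols P, P*, and RR are generic notation (not the study's own), and the actual GBD machinery also propagates uncertainty and handles clustered risks:

```latex
% Population attributable fraction (PAF) for one risk factor:
%   P(x)   = observed exposure distribution
%   P^*(x) = theoretical-minimum-risk (counterfactual) exposure distribution
%   RR(x)  = relative risk at exposure level x
\mathrm{PAF} \;=\;
\frac{\displaystyle\int RR(x)\,P(x)\,dx \;-\; \int RR(x)\,P^{*}(x)\,dx}
     {\displaystyle\int RR(x)\,P(x)\,dx}

% Attributable burden: the PAF applied to the cause-specific totals,
% where DALY = YLL + YLD as defined in the abstract:
\mathrm{AB} \;=\; \mathrm{PAF} \times \mathrm{DALY},
\qquad \mathrm{DALY} = \mathrm{YLL} + \mathrm{YLD}
```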
Abstract:
AIM To investigate the number of hypertensive patients the optometrist is able to identify by routinely taking blood pressure (BP) measurements for patients in "at-risk" groups, and to sample patients' opinions regarding in-office BP measurement. Many of the optometrists in Saudi Arabia practice in optical stores. These stores are widespread, easily accessible and seldom need appointments. The expanding role of the optometrist as a primary health care provider (PHCP) and the increasing global prevalence of hypertension highlight the need for an integrated approach towards detecting and monitoring hypertension. METHODS Automated BP measurements were made twice (during the same session) at five selected optometry practices using a validated BP monitor (Omron M6) to assess the number of patients with high BP (HBP) in at-risk groups visiting the eye clinic routinely. Prior to data collection, practitioners underwent a two-day training workshop, led by a cardiologist, on hypertension and how to obtain accurate BP readings. A protocol for BP measurement was distributed and retained in all participating clinics. The general attitude towards cardiovascular health of 480 patients aged 37.2 (±12.4) years and their opinion towards in-office BP measurement were assessed using a self-administered questionnaire. RESULTS A response rate of 83.6% was obtained for the survey. Ninety-three of the 443 patients (21.0%) tested for BP in this study had HBP. Of these, 67.7% (62 subjects) were unaware of their HBP status. Thirty of the 105 subjects (28.6%) who had previously been diagnosed with HBP still had HBP at the time of this study, and 22 (73.3%) of these patients were on medication. Also, only 25% of the diagnosed hypertensive patients owned a BP monitor. CONCLUSION Taking BP measurements in optometry practices, we were able to identify one previously undiagnosed patient with HBP for every 8 adults tested.
We also identified 30 of 105 previously diagnosed patients whose BP was poorly controlled, twenty-two of whom were on medication. The patients who participated in this study were positively disposed toward the routine measurement of BP by optometrists.
Abstract:
Aim Paediatric haematopoietic stem cell donors undergo non-therapeutic procedures and endure known and unknown physical and psychosocial risks for the benefit of a family member. One ethical concern is the risk that they may be pressured by parents or health professionals to act as a donor. This paper adds to what is known about this topic by presenting the views of health professionals. Methods This qualitative study involved semi-structured interviews with 14 health professionals in Australasia experienced in dealing with paediatric donors. Transcripts were analysed using established qualitative methodologies. Results Health professionals considered that some paediatric donors experience pressure to donate. Situations were identified that were likely to increase the risk of pressure being placed on donors, and views were expressed about the ethical ‘appropriateness’ of these practices within the family setting. Conclusions Children may be subject to pressure from family and health professionals to be tested and act as donors. Therefore, our ethical obligation to these children extends to implementing donor-focused processes – including independent health professionals and the appointment of a donor advocate – to assist in detecting and addressing instances of inappropriate pressure being placed on a child.
Abstract:
Species of fleshy-fruited Myrtaceae are generally associated with humid environments, and their vegetative anatomy is mainly mesophytic. Myrceugenia rufa is an endemic and rare species from arid zones of the coast of central Chile, and there are no anatomical studies regarding its leaf anatomy and environmental adaptations. Here we describe the leaf micromorphology and anatomy of the species using standard protocols for light and scanning electron microscopy. The leaf anatomy of M. rufa matches that of other Myrtaceae, including the presence of druses, schizogenous secretory ducts and internal phloem. Leaves of M. rufa exhibit a double epidermis, thick cuticle, abundant unicellular hairs, large substomatal chambers covered by trichomes and a dense palisade parenchyma. Leaf characters of M. rufa confirm an anatomical adaptation to xerophytic environments.
Abstract:
Background An important potential clinical benefit of using capnography monitoring during procedural sedation and analgesia (PSA) is that this technology could improve patient safety by reducing serious sedation-related adverse events, such as death or permanent neurological disability, which are caused by inadequate oxygenation. The hypothesis is that earlier identification of respiratory depression using capnography leads to a change in clinical management that prevents hypoxaemia. As inadequate oxygenation/ventilation is the most common reason for injury associated with PSA, reducing episodes of hypoxaemia would indicate that using capnography would be safer than relying on standard monitoring alone. Methods/design The primary objective of this review is to determine whether using capnography during PSA in the hospital setting improves patient safety by reducing the risk of hypoxaemia (defined as an arterial partial pressure of oxygen below 60 mmHg or percentage of haemoglobin that is saturated with oxygen [SpO2] less than 90 %). A secondary objective of this review is to determine whether changes in the clinical management of sedated patients are the mediating factor for any observed impact of capnography monitoring on the rate of hypoxaemia. The potential adverse effect of capnography monitoring that will be examined in this review is the rate of inadequate sedation. Electronic databases will be searched for parallel, crossover and cluster randomised controlled trials comparing the use of capnography with standard monitoring alone during PSA that is administered in the hospital setting. Studies that included patients who received general or regional anaesthesia will be excluded from the review. Non-randomised studies will be excluded. Screening, study selection and data extraction will be performed by two reviewers. The Cochrane risk of bias tool will be used to assign a judgment about the degree of risk. Meta-analyses will be performed if suitable. 
Discussion This review will synthesise the evidence on an important potential clinical benefit of capnography monitoring during PSA within hospital settings. Systematic review registration: PROSPERO CRD42015023740
Abstract:
Introduction Vascular access devices (VADs), such as peripheral or central venous catheters, are vital across all medical and surgical specialties. To allow therapy or haemodynamic monitoring, VADs frequently require administration sets (AS) composed of infusion tubing, fluid containers, pressure-monitoring transducers and/or burettes. While VADs are replaced only when necessary, AS are routinely replaced every 3–4 days in the belief that this reduces infectious complications. Strong evidence supports AS use up to 4 days, but there is less evidence for AS use beyond 4 days. AS replacement twice weekly increases hospital costs and workload. Methods and analysis This is a pragmatic, multicentre, randomised controlled trial (RCT) of equivalence design comparing AS replacement at 4 (control) versus 7 (experimental) days. Randomisation is stratified by site and device, centrally allocated and concealed until enrolment. 6554 adult/paediatric patients with a central venous catheter, peripherally inserted central catheter or peripheral arterial catheter will be enrolled over 4 years. The primary outcome is VAD-related bloodstream infection (BSI) and secondary outcomes are VAD colonisation, AS colonisation, all-cause BSI, all-cause mortality, number of AS per patient, VAD time in situ and costs. Relative incidence rates of VAD-BSI per 100 devices and hazard rates per 1000 device days (95% CIs) will summarise the impact of 7-day relative to 4-day AS use and test equivalence. Kaplan-Meier survival curves (with log-rank Mantel-Cox test) will compare VAD-BSI over time. Appropriate parametric or non-parametric techniques will be used to compare secondary end points. P values of <0.05 will be considered significant.
Abstract:
This work investigates the academic stress and mental health of Indian high school students and the associations between various psychosocial factors and academic stress. A total of 190 students from grades 11 and 12 (mean age: 16.72 years) from three government-aided and three private schools in Kolkata, India were surveyed in the study. Data collection involved using a specially designed structured questionnaire as well as the General Health Questionnaire. Nearly two-thirds (63.5%) of the students reported stress due to academic pressure – with no significant differences across gender, age, grade, and several other personal factors. About two-thirds (66%) of the students reported feeling pressure from their parents for better academic performance. The degree of parental pressure experienced differed significantly across the educational levels of the parents, mother's occupation, number of private tutors, and academic performance. In particular, children of fathers possessing a lower education level (non-graduates) were found to be more likely to perceive pressure for better academic performance. About one-third (32.6%) of the students were symptomatic of psychiatric caseness, and 81.6% reported examination-related anxiety. Academic stress was positively correlated with parental pressure and psychiatric problems, while examination-related anxiety also was positively related to psychiatric problems. Academic stress is a serious issue which affects nearly two-thirds of senior high school students in Kolkata. Potential methods for combating the challenges of academic pressure are suggested.
Abstract:
In this work we discuss the development of a mathematical model to predict the shift in gas composition observed over time from a producing CSG (coal seam gas) well, and investigate the effect that physical properties of the coal seam have on gas production. A detailed (local) one-dimensional, two-scale mathematical model of a coal seam has been developed. The model describes the competitive adsorption and desorption of three gas species (CH4, CO2 and N2) within a microscopic, porous coal matrix structure. The (diffusive) flux of these gases between the coal matrices (microscale) and a cleat network (macroscale) is accounted for in the model. The cleat network is modelled as a one-dimensional, volume averaged, porous domain that extends radially from a central well. Diffusive and advective transport of the gases occurs within the cleat network, which also contains liquid water that can be advectively transported. The water and gas phases are assumed to be immiscible. The driving force for the advection in the gas and liquid phases is taken to be a pressure gradient with capillarity also accounted for. In addition, the relative permeabilities of the water and gas phases are considered as functions of the degree of water saturation.
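The transport mechanisms listed above are the standard ingredients of two-phase porous-media models of coal seams; one illustrative form (an assumed sketch based on conventional CSG modelling, not the authors' exact equations or notation) combines Darcy advection in the cleat network with extended-Langmuir competitive sorption in the matrix:

```latex
% Darcy flux of phase \alpha (gas g or water w) in the cleat network,
% with absolute permeability k, relative permeability k_{r\alpha}
% depending on water saturation S_w, and viscosity \mu_\alpha:
q_\alpha \;=\; -\,\frac{k\,k_{r\alpha}(S_w)}{\mu_\alpha}\,\nabla p_\alpha,
\qquad
p_c(S_w) \;=\; p_g - p_w \quad \text{(capillary pressure)}

% Competitive (extended Langmuir) adsorption of species i
% (i = CH4, CO2, N2) in the coal matrix at partial pressures p_i,
% with Langmuir volume V_{L,i} and affinity constant b_i:
V_i \;=\; \frac{V_{L,i}\, b_i\, p_i}{1 + \sum_j b_j\, p_j}
```

The selectivity implied by the differing affinity constants b_i is what drives the compositional shift over time: as total pressure falls, the more strongly adsorbed species are retained longer in the matrix.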
Abstract:
The Bruneau–Jarbidge eruptive center of the central Snake River Plain in southern Idaho, USA, produced multiple rhyolite lava flows with volumes of <10 km³ to 200 km³ each from ~11.2 to 8.1 Ma, most of which follow its climactic phase of large-volume explosive volcanism, represented by the Cougar Point Tuff, from 12.7 to 10.5 Ma. These lavas represent the waning stages of silicic volcanism at a major eruptive center of the Yellowstone hotspot track. Here we provide pyroxene compositions and thermometry results from several lavas that demonstrate that the demise of the silicic volcanic system was characterized by sustained, high pre-eruptive magma temperatures (mostly ≥950 °C) prior to the onset of exclusively basaltic volcanism at the eruptive center. Pyroxenes display a variety of textures in single samples, including solitary euhedral crystals as well as glomerocrysts, crystal clots and annealed microgranular inclusions of pyroxene ± magnetite ± plagioclase. Pigeonite and augite crystals are unzoned, and there are no detectable differences in major and minor element compositions according to textural variety — mineral compositions in the microgranular inclusions and crystal clots are identical to those of phenocrysts in the host lavas. In contrast to members of the preceding Cougar Point Tuff that host polymodal glass and mineral populations, pyroxene compositions in each of the lavas are characterized by single rather than multiple discrete compositional modes. Collectively, the lavas reproduce and extend the range of Fe–Mg pyroxene compositional modes observed in the Cougar Point Tuff to more Mg-rich varieties.
The compositionally homogeneous populations of pyroxene in each of the lavas, as well as the lack of core-to-rim zonation in individual crystals suggest that individual eruptions each were fed by compositionally homogeneous magma reservoirs, and similarities with the Cougar Point Tuff suggest consanguinity of such reservoirs to those that supplied the polymodal Cougar Point Tuff. Pyroxene thermometry results obtained using QUILF equilibria yield pre-eruptive magma temperatures of 905 to 980 °C, and individual modes consistently record higher Ca content and higher temperatures than pyroxenes with equivalent Fe–Mg ratios in the preceding Cougar Point Tuff. As is the case with the Cougar Point Tuff, evidence for up-temperature zonation within single crystals that would be consistent with recycling of sub- or near-solidus material from antecedent magma reservoirs by rapid reheating is extremely rare. Also, the absence of intra-crystal zonation, particularly at crystal rims, is not easily reconciled with cannibalization of caldera fill that subsided into pre-eruptive reservoirs. The textural, compositional and thermometric results rather are consistent with minor re-equilibration to higher temperatures of the unerupted crystalline residue from the explosive phase of volcanism, or perhaps with newly generated magmas from source materials very similar to those for the Cougar Point Tuff. Collectively, the data suggest that most of the pyroxene compositional diversity that is represented by the tuffs and lavas was produced early in the history of the eruptive center and that compositions across this range were preserved or duplicated through much of its lifetime. Mineral compositions and thermometry of the multiple lavas suggest that unerupted magmas residual to the explosive phase of volcanism may have been stored at sustained, high temperatures subsequent to the explosive phase of volcanism. 
If so, such persistent high temperatures and large eruptive magma volumes likewise require an abundant and persistent supply of basalt magmas to the lower and/or mid-crust, consistent with the tectonic setting of a continental hotspot.
Abstract:
An improved understanding of the characteristics of the pre-discharge current pulses in GIS will lead to improved analyses of the results from the UHF partial discharge detection method. This paper presents the characteristics of the first pre-discharge current pulses from a point-to-plane geometry at 1 bar absolute under both polarities of a 1.1/80 µs lightning impulse. The analysis has shown that the pre-discharge current wave shape, peak current magnitude and charge are affected by the instantaneous voltage at which the pre-discharge took place as well as by the polarity of the active electrode. The measured results show that protrusions on the electrodes have slower wave shape parameters than those reported for free conducting particles.
Abstract:
As one of the largest sources of greenhouse gas (GHG) emissions, the building and construction sector is facing increasing pressure to reduce its life cycle GHG emissions. One central issue in striving towards reduced carbon emissions in the building and construction sector is to develop a practical and meaningful yardstick to assess and communicate GHG results through carbon labelling. The idea of carbon labelling schemes for building materials is to trigger a transition to a low carbon future by switching consumer purchasing habits to low-carbon alternatives. As such, failing to change purchasing patterns and behaviour can be disastrous to carbon labelling schemes. One useful tool to assist customers in changing their purchasing behaviour is benchmarking, which has been very commonly used in ecolabelling schemes. This paper analyses the definition and scope of benchmarking in carbon labelling schemes for building materials. The benchmarking process has been examined within the context of carbon labelling. Four practical issues for the successful implementation of benchmarking, including the availability of benchmarks and databases, the usefulness of different types of benchmarks and the selection of labelling practices, have also been clarified.
Abstract:
In this study of 638 Australian nurses, compliance with hand hygiene (HH), as defined by the "five moments" recommended by the World Health Organisation (2009), was examined. Hypotheses focused on the extent to which time pressure reduces compliance and safety climate (operationalised in relation to HH using colleagues, manager, and hospital as referents) increases compliance. It also was proposed that HH climate would interact with time pressure, such that the negative effects of time pressure would be less marked when HH climate is high. The extent to which the three HH climate variables would interact among each other, either in the form of boosting or compensatory effects, was tested in an exploratory manner. A prospective research design was used in which time pressure and the HH climate variables were assessed at Time 1 and compliance was assessed by self-report two weeks later. Compliance was high but varied significantly across the five HH moments, suggesting that nurses make distinctions between inherent and elective HH and also seemed to engage in some implicit rationing of HH. Time pressure undermined the capacity of HH climate to exert its positive impact on compliance. The most conducive workplace for compliance was one low in time pressure and high in HH climate. Colleagues were very influential in determining compliance, more so than the manager and hospital. Manager and hospital support for HH enhanced the positive effects of colleagues on compliance. Providing training and enhancing knowledge was important, not just for compliance, but for safety climate.
Abstract:
Introduction: Patients with rheumatoid arthritis (RA) have increased risk of cardiovascular (CV) events. We sought to test the hypothesis that, due to increased inflammation, CV disease and risk factors are associated with increased risk of future RA development. Methods: The population-based Nord-Trøndelag health surveys (HUNT) were conducted among the entire adult population of Nord-Trøndelag, Norway. All inhabitants 20 years or older were invited, and information was collected through comprehensive questionnaires, a clinical examination, and blood samples. In a cohort design, data from HUNT2 (1995-1997, baseline) and HUNT3 (2006-2008, follow-up) were obtained to study participants with RA (n = 786) or osteoarthritis (n = 3,586) at HUNT3 alone, in comparison with individuals without RA or osteoarthritis at both times (n = 33,567). Results: Female gender, age, smoking, body mass index, and history of previous CV disease were associated with self-reported incident RA (previous CV disease: odds ratio 1.52 (95% confidence interval 1.11-2.07)). The findings regarding previous CV disease were confirmed in sensitivity analyses excluding participants with psoriasis (odds ratio (OR) 1.70 (1.23-2.36)) or restricting the analysis to cases with a hospital diagnosis of RA (OR 1.90 (1.10-3.27)) or carriers of the shared epitope (OR 1.76 (1.13-2.74)). History of previous CV disease was not associated with increased risk of osteoarthritis (OR 1.04 (0.86-1.27)). Conclusion: A history of previous CV disease was associated with increased risk of incident RA but not osteoarthritis.