241 results for Medical instruments
Abstract:
We propose a computationally efficient image border pixel-based watermark embedding scheme for medical images. We consider the border pixels of a medical image as the RONI (region of non-interest), since those pixels are of little or no interest to doctors and medical professionals irrespective of the image modality. Although the RONI is used for embedding, our proposed scheme still keeps distortion in the embedding region to a minimum by using the optimum number of least significant bit-planes for the border pixels. All of this not only ensures that a watermarked image is safe for diagnosis, but also helps minimize the legal and ethical concerns of altering all pixels of medical images in any manner (e.g., reversible or irreversible). The proposed scheme avoids the need for RONI segmentation, which incurs capacity and computational overheads. The performance of the proposed scheme has been compared with a relevant scheme in terms of embedding capacity, image perceptual quality (measured by SSIM and PSNR), and computational efficiency. Our experimental results show that the proposed scheme is computationally efficient, offers an image-content-independent embedding capacity, and maintains good image quality.
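The border-pixel embedding idea can be illustrated with a minimal Python/NumPy sketch, assuming a 2-D 8-bit grayscale image; the border width, number of bit-planes, and function name below are illustrative choices, not the authors' implementation.

import numpy as np

def embed_border_lsb(image, bits, border=2, n_planes=1):
    """Embed watermark bits into the least significant bit-plane(s) of
    border pixels only, leaving the diagnostically relevant interior
    untouched. image: 2-D uint8 array; bits: iterable of 0/1."""
    out = image.copy()
    h, w = out.shape
    # Mark the border band (the RONI) of the requested width.
    roni = np.zeros((h, w), dtype=bool)
    roni[:border, :] = True
    roni[-border:, :] = True
    roni[:, :border] = True
    roni[:, -border:] = True
    coords = np.argwhere(roni)
    capacity = len(coords) * n_planes          # independent of image content
    bits = list(bits)
    if len(bits) > capacity:
        raise ValueError(f"watermark needs {len(bits)} bits, capacity is {capacity}")
    for i, bit in enumerate(bits):
        y, x = coords[i // n_planes]
        plane = i % n_planes
        # Clear the target bit, then set it to the watermark bit.
        out[y, x] = (out[y, x] & (0xFF ^ (1 << plane))) | (int(bit) << plane)
    return out

Because the capacity (number of border pixels times bit-planes) depends only on the image dimensions and not on pixel values, the sketch matches the image-content-independent embedding capacity claimed above.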
Abstract:
In this paper we introduce a novel design for a translational medical research ecosystem. Translational medical research is an emerging field that aims to bridge the gap between basic medical science research and clinical research/patient care. We analyze the key challenges of digital ecosystems for translational research, based on real-world scenarios posed by the Lab for Translational Research at Harvard Medical School and the Genomics Research Centre of Griffith University, and show how traditional IT approaches fail to meet these challenges. We then introduce our design for a translational research ecosystem. Several key contributions are made: a novel approach to managing ad-hoc research ecosystems is introduced; a new security approach for translational research is proposed, which allows each participating site to retain control over its data and define its own policies to ensure legal and ethical compliance; and a design is presented for a novel interactive access control framework that allows users to easily share data while adhering to their organization's policies.
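The site-retained control idea can be sketched minimally in Python; all class, field, and site names here are hypothetical, since the abstract does not specify the framework's API.

from dataclasses import dataclass, field

@dataclass
class SitePolicy:
    """Hypothetical per-site policy: each participating site keeps control
    over its own data and decides locally who may access it, and why."""
    site: str
    approved_partners: set = field(default_factory=set)
    allowed_purposes: set = field(default_factory=set)

    def permits(self, requester: str, purpose: str) -> bool:
        # No central authority: the owning site evaluates every request.
        return requester in self.approved_partners and purpose in self.allowed_purposes

def share_data(policies: dict, owner: str, requester: str, purpose: str) -> bool:
    """Gate every cross-site share through the owning site's own policy."""
    return policies[owner].permits(requester, purpose)

# Example: a genomics centre shares only with an approved partner site.
policies = {"GRC": SitePolicy("GRC", {"HMS-LTR"}, {"translational-research"})}
assert share_data(policies, "GRC", "HMS-LTR", "translational-research")
assert not share_data(policies, "GRC", "HMS-LTR", "marketing")

The design point this illustrates is that policy evaluation stays with the data owner, so each site can encode its own legal and ethical constraints without ceding control to a central broker.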
Abstract:
Liuwei Dihuang Wan (LDW), a classic Chinese medicinal formula, has been used to improve or restore declined functions related to aging and geriatric diseases, such as impaired mobility, vision, hearing, cognition and memory. It has attracted increasing attention as one of the most popular and valuable herbal medicines. However, systematic analysis of the chemical constituents of LDW is difficult and thus has not been well established. In this paper, a rapid, sensitive and reliable ultra-performance liquid chromatography with electrospray ionization quadrupole time-of-flight high-definition mass spectrometry (UPLC-ESI-Q-TOF-MS) method with automated MetaboLynx analysis in positive and negative ion modes was established to characterize the chemical constituents of LDW. The analysis was performed on a Waters UPLC™ HSS T3 column using a gradient elution system. MS/MS fragmentation behavior was proposed to aid the structural identification of the components. Under the optimized conditions, a total of 50 peaks were tentatively characterized by comparing retention times and MS data. It is concluded that a rapid and robust platform based on UPLC-ESI-Q-TOF-MS has been successfully developed for globally identifying the multiple constituents of traditional Chinese medicine prescriptions. This is the first report on systematic analysis of the chemical constituents of LDW.
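The tentative characterization step, matching each observed peak against known constituents by retention time and accurate mass, can be sketched as follows; the tolerances, record layout, and reference values are illustrative assumptions, not the authors' MetaboLynx workflow.

def match_peak(rt_min, mz, library, rt_tol=0.2, ppm_tol=10.0):
    """Return names of reference constituents whose retention time (minutes)
    and m/z match the observed peak within the given tolerances."""
    hits = []
    for ref in library:  # each ref: {"name": ..., "rt_min": ..., "mz": ...}
        if abs(ref["rt_min"] - rt_min) <= rt_tol:
            ppm_error = abs(ref["mz"] - mz) / ref["mz"] * 1e6
            if ppm_error <= ppm_tol:
                hits.append(ref["name"])
    return hits

# Illustrative reference entry; values are not from the paper.
library = [{"name": "loganin", "rt_min": 4.8, "mz": 391.1604}]
print(match_peak(4.85, 391.1610, library))  # -> ['loganin']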
Abstract:
Balancing the competing interests of autonomy and protection of individuals is an escalating challenge confronting an ageing Australian society. Legal and medical professionals are increasingly being asked to determine whether individuals are legally competent/capable of making their own testamentary and substitute decisions, that is, financial and/or personal/health care decisions. No consistent and transparent competency/capacity assessment paradigm currently exists in Australia. Consequently, assessments are being undertaken on an ad hoc basis, which is concerning as Australia’s population ages and issues of competency/capacity increase. The absence of nationally accepted competency/capacity assessment guidelines and supporting principles results in legal and medical professionals involved with competency/capacity assessment implementing individual processes tailored to their own abilities. Legal and medical approaches differ both between and within the professions. The terminology used also varies. The legal practitioner is concerned with whether the individual has the legal ability to make the decision. A medical practitioner assesses fluctuations in physical and mental abilities. The problem is that the terms competency and capacity are used interchangeably, resulting in confusion about what is actually being assessed. The terminological and methodological differences subsequently create miscommunication and misunderstanding between the professions. Consequently, it is not necessarily a simple solution for a legal professional to seek the opinion of a medical practitioner when assessing testamentary and/or substitute decision-making competency/capacity. This research investigates the effects of the current inadequate testamentary and substitute decision-making assessment paradigm and whether there is a more satisfactory approach. This exploration is undertaken within a framework of therapeutic jurisprudence, which promotes principles fundamentally important in this context. Empirical research has been undertaken, first, to explore the effects of the current process with practising legal and medical professionals; and second, to determine whether miscommunication and misunderstanding actually exist between the professions such that they give rise to a tense relationship that is not conducive to satisfactory competency/capacity assessments. The necessity of reviewing the adequacy of the existing competency/capacity assessment methodology in the testamentary and substitute decision-making domain will be demonstrated, and recommendations made for the development of a suitable process.
Abstract:
Airborne particles have been shown to be associated with a wide range of adverse health effects, which has led to a recent increase in medical research seeking better insight into these effects. However, accurate evaluation of the exposure-dose-response relationship is highly dependent on the ability to track the actual exposure levels of people to airborne particles. This is quite a complex task, particularly in relation to submicrometer and ultrafine particles, which can vary quite significantly in terms of particle surface area and number concentrations. Therefore, suitable monitors that can be worn for measuring personal exposure to these particles are needed. This paper presents an evaluation of the metrological performance of six diffusion charger sensors, NanoTracer (Philips Aerasense) monitors, when measuring particle number and surface area concentrations, as well as particle number distribution mean, compared to reference instruments. Tests in the laboratory (by generating monodisperse and polydisperse aerosols) and in the field (using natural ambient particles) were designed to evaluate the response of these devices under both steady-state and dynamic conditions. Results showed that the NanoTracers performed well when measuring steady-state aerosols; however, they strongly underestimated actual concentrations during dynamic response testing. The field experiments also showed that, when the majority of the particles were smaller than 20 nm, as occurs during particle formation events in the atmosphere, the NanoTracer underestimated number concentration quite significantly. Even though the NanoTracer can be used for personal monitoring of exposure to ultrafine particles, it has limitations which need to be considered in order to provide meaningful results.
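The reported underestimation can be quantified with a simple relative-bias comparison against a reference instrument; the sketch below (metric choice and numbers assumed, not the paper's stated analysis) illustrates the idea.

def mean_relative_bias(monitor, reference):
    """Mean relative bias of a personal monitor against a reference
    instrument, as a fraction: negative values indicate underestimation."""
    pairs = list(zip(monitor, reference))
    return sum((m - r) / r for m, r in pairs) / len(pairs)

# Synthetic example: a monitor reading ~30% low during a dynamic event.
print(mean_relative_bias([7e3, 1.4e4, 2.1e4], [1e4, 2e4, 3e4]))  # -> -0.3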
Abstract:
The Australasian Nutrition Care Day Survey (ANCDS) reported that two in five patients in Australian and New Zealand hospitals consume ≤50% of the offered food. The ANCDS found a significant association between poor food intake and increased in-hospital mortality after controlling for confounders (nutritional status, age, disease type and severity). Evidence for the effectiveness of medical nutrition therapy (MNT) in hospital patients eating poorly is lacking. An exploratory study was conducted in the respiratory, neurology and orthopaedic wards of an Australian hospital. At baseline, 24-hour food intake (0%, 25%, 50%, 75% or 100% of offered meals) was evaluated for patients hospitalised for ≥2 days and not under dietetic review. Patients consuming ≤50% of offered meals due to nutrition-impact symptoms were referred to ward dietitians for MNT, with food intake re-evaluated on day 7. A total of 184 patients were observed over four weeks. Sixty-two patients (34%) consumed ≤50% of the offered meals. Simple interventions (feeding/menu assistance, diet texture modifications) improved intake to ≥75% in 30 patients, who did not require further MNT. Of the 32 patients referred for MNT, baseline and day-7 data were available for 20 patients (68±17 years, 65% females, BMI 22±5 kg/m²; median energy and protein intake: 2250 kJ and 25 g, respectively). On day 7, 17 participants (85%) demonstrated significantly higher consumption (4300 kJ, 53 g; p<0.01). Three participants demonstrated no improvement due to ongoing nutrition-impact symptoms. “Percentage food intake” was a quick tool to identify patients in whom simple interventions could enhance intake. MNT was associated with improved dietary intake in hospital patients. Further research is needed to establish a causal relationship.
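The screening rule described above reduces to a threshold check on observed intake; a minimal sketch, with illustrative function and variable names:

INTAKE_LEVELS = {0, 25, 50, 75, 100}  # % of offered meal consumed

def refer_for_mnt(intake_pct, nutrition_impact_symptoms):
    """Flag patients eating <=50% of offered meals due to nutrition-impact
    symptoms for referral to the ward dietitian for MNT."""
    if intake_pct not in INTAKE_LEVELS:
        raise ValueError("intake is recorded only in 25% steps")
    return intake_pct <= 50 and nutrition_impact_symptoms

print(refer_for_mnt(50, True))   # True  -> refer for MNT
print(refer_for_mnt(75, True))   # False -> simple interventions sufficed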
Abstract:
Background and aims The Australasian Nutrition Care Day Survey (ANCDS) reported that two in five patients consume ≤50% of the offered food in Australian and New Zealand hospitals. After controlling for confounders (nutritional status, age, disease type and severity), the ANCDS also established an independent association between poor food intake and increased in-hospital mortality. This study aimed to evaluate whether medical nutrition therapy (MNT) could improve dietary intake in hospital patients eating poorly. Methods An exploratory pilot study was conducted in the respiratory, neurology and orthopaedic wards of an Australian hospital. At baseline, percentage food intake (0%, 25%, 50%, 75% or 100%) was evaluated for each main meal and snack over a 24-hour period in patients hospitalised for ≥2 days and not under dietetic review. Patients consuming ≤50% of offered meals due to nutrition-impact symptoms were referred to ward dietitians for MNT. Food intake was re-evaluated on the seventh day following recruitment (post-MNT). Results 184 patients were observed over four weeks; 32 patients were referred for MNT. Although baseline and post-MNT data for 20 participants (68±17 years, 65% females) indicated a significant increase in median energy and protein intake post-MNT (3600 kJ/day, 40 g/day) versus baseline (2250 kJ/day, 25 g/day) (p<0.05), the increased intake met only 50% of dietary requirements. Persistent nutrition-impact symptoms affected intake. Conclusion In this pilot study, whilst dietary intake improved, it remained inadequate to meet participants’ estimated requirements due to ongoing nutrition-impact symptoms. Appropriate medical management and early enteral feeding could be a possible solution for such patients.
Abstract:
BACKGROUND: The prevalence of protein-energy malnutrition in older adults is reported to be as high as 60% and is associated with poor health outcomes. Inadequate feeding assistance and mealtime interruptions may contribute to malnutrition and poor nutritional intake during hospitalisation. Despite such strategies being widely implemented in practice in the United Kingdom, and increasingly in Australia, there have been few studies examining the impact of strategies such as Protected Mealtimes and dedicated feeding assistant roles on the nutritional outcomes of elderly inpatients. AIMS: The aim of this research was to implement and compare three system-level interventions designed to specifically address mealtime barriers and improve the energy intakes of medical inpatients aged ≥65 years. This research also aimed to evaluate the sustainability of any changes to mealtime routines six months post-intervention and to gain an understanding of staff perceptions of the post-intervention mealtime experience. METHODS: Three mealtime assistance interventions were implemented in three medical wards at the Royal Brisbane and Women's Hospital: (1) AIN-only: an additional assistant-in-nursing (AIN) with a dedicated nutrition role; (2) PM-only: a multidisciplinary approach to meals, including Protected Mealtimes; and (3) PM+AIN: a combined intervention (AIN plus the multidisciplinary approach to meals). An action research approach was used to carefully design and implement the three interventions in partnership with ward staff and managers. Significant time was spent in consultation with staff throughout the implementation period to facilitate ownership of the interventions and increase the likelihood of successful implementation. A pre-post design was used to compare the implementation and nutritional outcomes of each intervention with a pre-intervention group. Using the same wards, eligible participants (medical inpatients aged ≥65 years) were recruited to the pre-intervention group between November 2007 and March 2008, and to the intervention groups between January and June 2009. The primary nutritional outcome was daily energy and protein intake, determined by visually estimating plate waste at each meal and mid-meal on day 4 of admission. Energy and protein intakes were compared between the pre- and post-intervention groups. Data were collected on a range of covariates (demographics, nutritional status and known risk factors for poor food intake), which allowed for multivariate analysis of the impact of the interventions on nutritional intake. The provision of mealtime assistance to participants and the activities of ward staff (including mealtime interruptions) were observed in the pre-intervention and intervention groups, with staff observations repeated six months post-intervention. Focus groups were conducted with nursing and allied health staff in June 2009 to explore their attitudes and behaviours in response to the three mealtime interventions. These focus group discussions were analysed using thematic analysis. RESULTS: A total of 254 participants were recruited to the study (pre-intervention: n=115, AIN-only: n=58, PM-only: n=39, PM+AIN: n=42). Participants had a mean age of 80 years (SD 8); 40% (n=101) were malnourished on hospital admission, 50% (n=108) had anorexia and 38% (n=97) required some assistance at mealtimes. Occasions of mealtime assistance significantly increased in all interventions (p<0.01). However, no change was seen in mealtime interruptions.
No significant difference was seen in mean total energy and protein intake between the pre-intervention and intervention groups. However, when total kilojoule intake was compared with estimated requirements at the individual level, participants in the intervention groups were more likely to achieve adequate energy intake (OR=3.4, p=0.01), with no difference noted between interventions (p=0.29). Despite small improvements in nutritional adequacy, the majority of participants in the intervention groups (76%, n=103) had energy intakes inadequate to meet their estimated energy requirements. Patients with cognitive impairment or feeding dependency appeared to gain substantial benefit from the mealtime assistance interventions. The increase in occasions of mealtime assistance by nursing staff during the intervention period was maintained six months post-intervention. Staff focus groups highlighted the importance of clearly designating and defining mealtime responsibilities in order to provide adequate mealtime care. While the purpose of the dedicated feeding assistant was to increase levels of mealtime assistance, staff indicated that responsibility for mealtime duties may have merely shifted from nursing staff to the assistant. Implementing the multidisciplinary interventions empowered nursing staff to "protect" the mealtime from external interruptions, but further work is required to empower nurses to prioritise mealtime activities within their own work schedules. Staff reported an increase in the profile of nutritional care on all wards, with additional non-nutritional benefits noted, including improved mobility and functional independence and better identification of swallowing difficulties. IMPLICATIONS: This PhD research provides clinicians with practical strategies to immediately introduce change to deliver better mealtime care in the hospital setting and, as such, has initiated local and state-wide roll-out of mealtime assistance programs. Improved nutritional intake of elderly inpatients was observed; however, given the modest effect size and decreasing lengths of hospital stay, better nutritional outcomes may be achieved by targeting the hospital-to-home transition period. Findings from this study suggest that mealtime assistance interventions for elderly inpatients with cognitive impairment and/or functional dependency show promise.
Abstract:
Background: Dementia and delirium appear to be common among older patients admitted to acute hospitals, although there are few Australian data regarding these important conditions. Aim: The aim of this study was to determine the prevalence and incidence of dementia and delirium among older patients admitted to acute hospitals in Queensland and to profile these patients. Method: Prospective observational cohort study (n = 493) of patients aged 70 years and older admitted to general medical, general surgical and orthopaedic wards of four acute hospitals in Queensland between 2008 and 2010. Trained research nurses completed comprehensive geriatric assessments and obtained detailed information about each patient’s physical, cognitive and psychosocial functioning using the interRAI Acute Care and other standardised instruments. Nurses also visited patients daily to identify incident delirium. Two physicians independently reviewed patients’ medical records and assessments to establish the diagnosis of dementia and/or delirium. Results: Overall, 29.4% of patients (n = 145) were considered to have cognitive impairment, including 102 (20.7% of the total) who were considered to have dementia. This rate increased to 47.4% in the oldest patients (aged ≥90 years). The overall prevalence of delirium at admission was 9.7% (23.5% in patients with dementia), and the rate of incident delirium was 7.6% (14.7% in patients with dementia). Conclusion: The prevalence of dementia and delirium among older patients admitted to acute hospitals is high and is likely to increase with population ageing. It is suggested that hospital design, staffing and processes should be better attuned to meet these patients’ needs.
Abstract:
Objective To compare the diagnostic accuracy of the interRAI Acute Care (AC) Cognitive Performance Scale (CPS2) and the Mini-Mental State Examination (MMSE) against independent clinical diagnosis for detecting dementia in older hospitalized patients. Design, Setting, and Participants The study was part of a prospective observational cohort study of patients aged ≥70 years admitted to four acute hospitals in Queensland, Australia, between 2008 and 2010. Recruitment was consecutive, and patients expected to remain in hospital for ≥48 hours were eligible to participate. Data for 462 patients were available for this study. Measurements Trained research nurses completed comprehensive geriatric assessments and administered the interRAI AC and MMSE to patients. Two physicians independently reviewed patients’ medical records and assessments to establish the diagnosis of dementia. Indicators of diagnostic accuracy included sensitivity, specificity, predictive values, likelihood ratios and areas under the receiver operating characteristic curve (AUC). Results 85 patients (18.4%) were considered to have dementia according to independent clinical diagnosis. The sensitivity of the CPS2 [0.68 (95%CI: 0.58–0.77)] was not statistically different from that of the MMSE [0.75 (0.64–0.83)] in predicting physician-diagnosed dementia. The AUCs for the two instruments were also not statistically different: CPS2 AUC = 0.83 (95%CI: 0.78–0.89) and MMSE AUC = 0.87 (95%CI: 0.83–0.91), while the CPS2 demonstrated higher specificity [0.92 (95%CI: 0.89–0.95)] than the MMSE [0.82 (0.77–0.85)]. Agreement between the CPS2 and clinical diagnosis was substantial (87.4%; κ=0.61). Conclusion The CPS2 appears to be a reliable screening tool for assessing cognitive impairment in acutely unwell older hospitalized patients. These findings add to the growing body of evidence supporting the utility of the interRAI AC, within which the CPS2 is embedded. The interRAI AC offers the advantage of being able to accurately screen for both dementia and delirium without the need for additional assessments, thus increasing assessment efficiency.
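For readers unfamiliar with these indicators, the sketch below derives them from a 2x2 table; the counts are an illustrative reconstruction chosen to approximate the reported CPS2 sensitivity (0.68) and specificity (0.92) among 85 dementia and 377 non-dementia patients, not the study's actual table.

def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard accuracy indicators from a 2x2 table of screening result
    against independent clinical diagnosis."""
    sens = tp / (tp + fn)        # sensitivity (true positive rate)
    spec = tn / (tn + fp)        # specificity (true negative rate)
    ppv = tp / (tp + fp)         # positive predictive value
    npv = tn / (tn + fn)         # negative predictive value
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    return sens, spec, ppv, npv, lr_pos, lr_neg

# Illustrative counts (85 dementia, 377 non-dementia of 462 patients).
print(diagnostic_accuracy(tp=58, fp=30, fn=27, tn=347))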
Abstract:
Aim Few Australian studies have examined the impact of dementia on hospital outcomes. The aim of this study was to determine the relative contribution of dementia to adverse outcomes in older hospital patients. Method Prospective observational cohort study (n = 493) of patients aged ≥70 years admitted to four acute hospitals in Queensland. Trained research nurses completed comprehensive geriatric assessments using standardised instruments and collected data regarding adverse outcomes. The diagnosis of dementia was established by independent physician review of patients' medical records and assessments. Results Patients with dementia (n = 102, 20.7%) were significantly older (P = 0.01), had poorer functional ability (P < 0.01), and were more likely to have delirium at admission (P < 0.01) than patients without dementia. Dementia (odds ratio = 4.8, P < 0.001) increased the risk of developing delirium during the hospital stay. Conclusion Older patients with dementia are more impaired and vulnerable than patients without dementia and are at greater risk of adverse outcomes when hospitalised.
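As background on the headline figure, an odds ratio such as the 4.8 reported here is computed from a 2x2 exposure-outcome table; the counts below are synthetic, for illustration only.

def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a = dementia with incident delirium,    b = dementia without delirium,
    c = no dementia with incident delirium, d = no dementia without delirium."""
    return (a / b) / (c / d)

# Synthetic counts for illustration only, not the study data.
print(odds_ratio(a=15, b=87, c=14, d=377))  # ~4.6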
Abstract:
Background Despite the increasing recognition that medical training tends to coincide with markedly high levels of stress and distress, there is a dearth of validated measures capable of gauging the prevalence of depressive symptoms among medical residents in the Arab/Islamic part of the world. Objective The aim of the present study is two-fold: first, to examine the diagnostic validity of the Patient Health Questionnaire (PHQ-9) in an Omani medical resident population in order to establish a cut-off point; and second, to compare gender, age, and residency level between Omani medical residents who report current depressive symptomatology and those classified as non-depressed according to the PHQ-9 cut-off threshold. Results A total of 132 residents (42 males and 90 females) consented to participate in this study. A cut-off score of 12 on the PHQ-9 yielded a sensitivity of 80.6% and a specificity of 94.0%. The rate of depression, as elicited by the PHQ-9, was 11.4%. Gender, age, and residency level played no significant role in endorsing depression. Conclusion This study indicated that the PHQ-9 is a reliable measure in this cross-cultural population. More studies employing robust methodology are needed to confirm this finding.
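The abstract does not state how the cut-off of 12 was derived; one common approach is to scan candidate cut-offs and maximise the Youden index (sensitivity + specificity - 1), sketched below on synthetic data (function name and labels are illustrative).

def best_phq9_cutoff(scores, depressed):
    """Scan PHQ-9 cut-offs 0..27 and return the one maximising the Youden
    index. scores: total PHQ-9 scores; depressed: bool labels from the
    reference diagnostic interview."""
    best_c, best_j = None, -1.0
    for c in range(28):
        tp = sum(1 for s, d in zip(scores, depressed) if s >= c and d)
        fn = sum(1 for s, d in zip(scores, depressed) if s < c and d)
        fp = sum(1 for s, d in zip(scores, depressed) if s >= c and not d)
        tn = sum(1 for s, d in zip(scores, depressed) if s < c and not d)
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j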
Abstract:
Purpose: To assess the intrasessional and intersessional repeatability of two commercial partial coherence interferometry instruments for measuring peripheral eye lengths, and to investigate the agreement between the two instruments. Methods: Central and peripheral eye lengths were determined with the IOLMaster (Carl Zeiss Meditec AG, Jena, Germany) and the Lenstar (Haag-Streit, Bern, Switzerland) in seven adults. Measurements were performed out to 35° and 30° from fixation for the horizontal and vertical visual fields, respectively, in 5° intervals. An external fixation target at optical infinity was used. At least four measurements were taken at each location with each instrument, and measurements were taken at two sessions. Results: The mean intrasessional SDs for the IOLMaster along both the horizontal and vertical visual fields were 0.04 ± 0.04 mm; the corresponding results for the Lenstar were 0.02 ± 0.02 mm along both fields. The intersessional SDs for the IOLMaster for the horizontal and vertical visual fields were ±0.11 and ±0.08 mm, respectively; the corresponding limits for the Lenstar were ±0.05 and ±0.04 mm. The intrasessional and intersessional variability increased away from fixation. The mean differences between the two instruments were 0.01 ± 0.07 mm and 0.02 ± 0.07 mm in the horizontal and vertical visual fields, respectively, but the lengths measured with the Lenstar became greater than those with the IOLMaster as axial length increased (at a rate of approximately 0.016 mm/mm). Conclusions: Both the IOLMaster and the Lenstar demonstrated good intrasessional and intersessional repeatability for peripheral eye length measurements, with the Lenstar showing better repeatability. The Lenstar would be expected to give a slightly greater range of eye lengths than the IOLMaster across the visual field.
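The agreement figures quoted (mean differences, limits, and the 0.016 mm/mm proportional trend) are of the kind produced by a Bland-Altman-style analysis; the following is an assumed reconstruction, not necessarily the authors' exact method.

import numpy as np

def agreement_stats(iolmaster_mm, lenstar_mm):
    """Paired agreement between two instruments' eye-length readings (mm):
    mean difference (bias), 95% limits of agreement, and the slope of the
    difference against mean length (proportional bias, mm/mm)."""
    a = np.asarray(iolmaster_mm, dtype=float)
    b = np.asarray(lenstar_mm, dtype=float)
    diff = b - a                               # Lenstar minus IOLMaster
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)              # 95% limits of agreement
    slope = np.polyfit((a + b) / 2, diff, 1)[0]
    return bias, (bias - loa, bias + loa), slope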