27 results for low risk population
in University of Queensland eSpace - Australia
Abstract:
This study examined the oral sensitivity and feeding skills of low-risk pre-term infants at 11-17 months corrected age. Twenty pre-term infants (PT) born between 32 and 37 weeks' gestation without medical comorbidities were assessed. All infants in the PT group received supplemental nasogastric (NG) tube feeds during their initial hospital stay. A matched control group of 10 healthy full-term infants (FT) was also assessed. Oral sensitivity and feeding skills were assessed during a typical mealtime using the Royal Children's Hospital Oral Sensitivity Checklist (OSC) and the Pre-Speech Assessment Scale (PSAS). Results demonstrated that, at 11-17 months corrected age, the PT group displayed significantly more behaviours suggestive of altered oral sensitivity and facial defensiveness, and a trend towards more delayed feeding development, than the FT group. Further, results demonstrated that pre-term infants who received more than 3 weeks of NG feeding (PT>3NG) displayed significantly more facial defensive behaviour, and significant delays across more aspects of their feeding development, than pre-term infants who received less than 2 weeks of NG feeding (PT<2NG).
Abstract:
Background: Early detection of melanoma has been encouraged in Queensland for many years, yet little is known about the patterns of detection and the way in which they relate to tumor thickness. Objective: Our purpose was to describe current patterns of melanoma detection in Queensland. Methods: This was a population-based study, comprising 3772 Queensland residents diagnosed with a histologically confirmed melanoma between 2000 and 2003. Results: Almost half (44.0%) of the melanomas were detected by the patients themselves, with physicians detecting one fourth (25.3%) and partners one fifth (18.6%). Melanomas detected by doctors were more likely to be thin (<0.75 mm) than those detected by the patient or other layperson. Melanomas detected during a deliberate skin examination were thinner than those detected incidentally. Limitations: Although a participation rate of 78% was achieved, as in any survey, nonresponse bias cannot be completely excluded, and the ability of the results to be generalized to other geographical areas is unknown. Conclusion: There are clear differences in the depth distribution of melanoma according to the method of detection and who detects the lesion; these differences are consistent with, but do not automatically lead to, the conclusion that promoting active methods of detection may be beneficial. (J Am Acad Dermatol 2006;54:783-92.)
Abstract:
A longitudinal capture-mark-recapture study was conducted to determine the temporal dynamics of rabbit haemorrhagic disease (RHD) in a European rabbit (Oryctolagus cuniculus) population of low to moderate density on sand-hill country in the lower North Island of New Zealand. A combination of sampling (trapping and radio-tracking) and diagnostic (cELISA, PCR and isotype ELISA) methods was employed to obtain data weekly from May 1998 until June 2001. Although rabbit haemorrhagic disease virus (RHDV) infection was detected in the study population in all 3 years, disease epidemics were evident only in the late summer or autumn months in 1999 and 2001. Overall, 20% of 385 samples obtained from adult animals older than 11 weeks were seropositive. An RHD outbreak in 1999 contributed to an estimated population decline of 26%. A second RHD epidemic in February 2001 was associated with a population decline of 52% over the subsequent month. Following the outbreaks, the seroprevalence in adult survivors was between 40% and 50%. During 2000, no deaths from RHDV were confirmed and mortalities were predominantly attributed to predation. Influx of seronegative immigrants was greatest in the 1999 and 2001 breeding seasons, and preceded the RHD epidemics in those years. Our data suggest that RHD epidemics require the population immunity level to fall below a threshold where propagation of infection can be maintained through the population.
Abstract:
In wildlife management, the program of monitoring will depend on the management objective. If the objective is damage mitigation, then ideally it is damage that should be monitored. Alternatively, population size (N) can be used as a surrogate for damage, but the relationship between N and damage obviously needs to be known. If the management objective is a sustainable harvest, then the system of monitoring will depend on the harvesting strategy. In general, the harvest strategy in all states has been to offer a quota that is a constant proportion of population size. This strategy has a number of advantages over alternative strategies, including a low risk of over- or underharvest in a stochastic environment, simplicity, robustness to bias in population estimates and allowing harvest policy to be proactive rather than reactive. However, the strategy requires an estimate of absolute population size that needs to be made regularly for a fluctuating population. Trends in population size and in various harvest statistics, while of interest, are secondary. This explains the large research effort in further developing accurate estimation methods for kangaroo populations. Direct monitoring on a large scale is costly. Aerial surveys are conducted annually at best, and precision of population estimates declines with the area over which estimates are made. Management at a fine scale (temporal or spatial) therefore requires other monitoring tools. Indirect monitoring through harvest statistics and habitat models, that include rainfall or a greenness index from satellite imagery, may prove useful.
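The constant-proportion harvest strategy described above reduces, once a current population estimate is in hand, to a simple calculation. The following Python sketch is purely illustrative: the harvest rate and the survey figures are assumptions, not values from the study, and serve only to show how such a quota tracks a fluctuating population.

```python
# Minimal sketch of a constant-proportion harvest quota: the annual quota is a
# fixed fraction of the most recent absolute population estimate.
# The harvest rate and survey figures below are illustrative assumptions only.

def constant_proportion_quota(population_estimate: float, harvest_rate: float = 0.15) -> int:
    """Return the harvest quota as a fixed proportion of the estimated population."""
    if population_estimate < 0 or not 0 <= harvest_rate <= 1:
        raise ValueError("estimate must be non-negative and rate must lie in [0, 1]")
    return int(population_estimate * harvest_rate)

# Hypothetical annual aerial-survey estimates for a fluctuating population.
survey_estimates = {2001: 2_400_000, 2002: 1_900_000, 2003: 2_750_000}
quotas = {year: constant_proportion_quota(n) for year, n in survey_estimates.items()}
print(quotas)  # the quota rises and falls with the population estimate
```

Because the quota scales with the estimate, the strategy is proactive and robust to moderate bias, but it presupposes regular, reasonably precise estimates of absolute population size, which is the monitoring burden the abstract highlights.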
Abstract:
This study used a novel cue exposure paradigm to investigate the differences between high- and low-risk drinkers in their desire to drink during a drinking session. Fifty-three self-selected participants were assigned to high- or low-risk drinking groups based on their self-reported consumption of alcohol, then compared on their desire to drink over a 90 min paced drinking session. High-risk drinkers showed increasing desire over the session, while low-risk drinkers' desire began to decrease after only a short drinking period. The perceived and actual effects of the alcohol did not appear to be able to account for the difference. Results are discussed with reference to issues of impaired control. Suggestions for future research directions are also offered.
Abstract:
Objective: Whole-body skin self-examination (SSE) with presentation of suspicious lesions to a physician may improve early detection of melanoma. The aim of this study was to establish the prevalence and determinants of SSE in a high-risk population in preparation for a community-based randomised controlled trial of screening for melanoma. Methods: A telephone survey reached 3110 residents older than 30 years (overall response rate of 66.9%) randomly selected from 18 regional communities in Queensland, Australia. Results: Overall, 804 (25.9%) participants reported whole-body SSE within the past 12 months and 1055 (33.9%) within the past three years. Whole-body SSE was associated in multivariate logistic regression analysis with younger age (< 50 years); higher education; having received either a whole-body skin examination, recommendation or instruction on SSE by a primary care physician; giving skin checks a high priority; concern about skin cancer and a personal history of skin cancer. Conclusion: Overall, the prevalence of SSE in the present study is among the highest yet observed in Australia, with about one-third of the adult population reporting whole-body SSE in the past three years. People over 50 years, who are at relatively higher risk for skin cancer, currently perform SSE less frequently than younger people.
Abstract:
Systematic protocols that use decision rules or scores are seen to improve consistency and transparency in classifying the conservation status of species. When applying these protocols, assessors are typically required to decide on estimates for attributes that are inherently uncertain, yet input data and resulting classifications are usually treated as though they are exact and hence without operator error. We investigated the impact of data interpretation on the consistency of extinction risk classifications and diagnosed causes of discrepancies when they occurred. We tested three widely used systematic classification protocols employed by the World Conservation Union, NatureServe, and the Florida Fish and Wildlife Conservation Commission. We provided 18 assessors with identical information for 13 different species to infer estimates for each of the parameters required by the three protocols. The threat classification of several of the species varied from low risk to high risk, depending on who did the assessment, and this occurred across all three protocols investigated. Assessors tended to agree on their placement of species in the highest (50-70%) and lowest (20-40%) risk categories, but there was poor agreement on which species should be placed in the intermediate categories. Furthermore, the correspondence between the three classification methods was unpredictable, with large variation among assessors. These results highlight the importance of peer review and consensus among multiple assessors in species classifications and the need to be cautious with assessments carried out by a single assessor. Greater consistency among assessors requires wide use of training manuals and formal methods for estimating parameters that allow uncertainties to be represented, carried through chains of calculations, and reported transparently.
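As a toy illustration of why point estimates of uncertain attributes can drive divergent classifications, the Python sketch below applies a single hypothetical decision rule (the thresholds are invented and do not belong to the IUCN, NatureServe, or Florida protocols); two assessors who interpret the same evidence slightly differently land in different risk categories.

```python
# Minimal sketch: a score-based threat classification in which two assessors,
# given identical information, supply different point estimates for an uncertain
# attribute and therefore obtain different risk categories.
# Thresholds and estimates are hypothetical, not any protocol's actual rules.

def classify_by_decline(percent_decline: float) -> str:
    """Toy decision rule: assign a risk category from estimated population decline."""
    if percent_decline >= 50:
        return "high risk"
    if percent_decline >= 30:
        return "intermediate risk"
    return "low risk"

# The same evidence, interpreted differently by two assessors.
assessor_estimates = {"assessor_A": 28.0, "assessor_B": 35.0}
for assessor, decline in assessor_estimates.items():
    print(assessor, "->", classify_by_decline(decline))
# assessor_A -> low risk; assessor_B -> intermediate risk
```

Representing the estimate as an interval and carrying that interval through the rule, rather than forcing a single number, is one way to make such disagreements visible instead of hiding them in the final category.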
Abstract:
Nucleic acid amplification tests (NAATs) for the detection of Neisseria gonorrhoeae became available in the early 1990s. Although offering several advantages over traditional detection methods, N. gonorrhoeae NAATs do have some limitations. These include cost, risk of carryover contamination, inhibition, and inability to provide antibiotic resistance data. In addition, there are sequence-related limitations that are unique to N. gonorrhoeae NAATs. In particular, false-positive results are a major consideration. These primarily stem from the frequent horizontal genetic exchange occurring within the Neisseria genus, leading to commensal Neisseria species acquiring N. gonorrhoeae genes. Furthermore, some N. gonorrhoeae subtypes may lack specific sequences targeted by a particular NAAT. Therefore, NAAT false-negative results because of sequence variation may occur in some gonococcal populations. Overall, the N. gonorrhoeae species continues to present a considerable challenge for molecular diagnostics. The need to evaluate N. gonorrhoeae NAATs before their use in any new patient population and to educate physicians on the limitations of these tests is emphasized in this review.
Abstract:
Background: Cardiac disease is the principal cause of death in patients with chronic kidney disease (CKD). Ischemia at dobutamine stress echocardiography (DSE) is associated with adverse events in these patients. We sought to determine the efficacy of combining clinical risk evaluation with DSE. Methods: We allocated 244 patients with CKD (mean age 54 years, 140 men, 169 dialysis-dependent at baseline) into low- and high-risk groups based on two disease-specific scores and the Framingham risk model. All underwent DSE and were further stratified according to DSE results. Patients were followed over 20 +/- 14 months for events (death, myocardial infarction, acute coronary syndrome). Results: There were 49 deaths and 32 cardiac events. Using the different clinical scores, allocation to high risk varied from 34% to 79% of patients, and 39% to 50% of high-risk patients had an abnormal DSE. In the high-risk groups, depending on the clinical score chosen, 25% to 44% of patients with an abnormal DSE had a cardiac event, compared with 8% to 22% of those with a normal DSE. Cardiac events occurred in 2.0%, 3.1%, and 9.7% of the low-risk patients, using the two disease-specific and Framingham scores, respectively, and DSE results did not add to risk evaluation in this subgroup. Independent DSE predictors of cardiac events were a lower resting diastolic blood pressure, angina during the test, and the combination of ischemia with resting left ventricular dysfunction. Conclusion: In CKD patients, high-risk findings by DSE can predict outcome. A stepwise strategy of combining clinical risk scores with DSE for CAD screening in CKD reduces the number of tests required and identifies a high-risk subgroup among whom DSE results more effectively stratify high and low risk.
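The stepwise strategy described in the conclusion amounts to a two-stage decision: apply a clinical risk score first, and reserve DSE for patients scoring high risk. The Python sketch below is an illustration only; the score field, cutoff, and category labels are assumptions, not the disease-specific or Framingham scores used in the study.

```python
# Minimal sketch of a stepwise screening strategy: a clinical risk score is applied
# first, and dobutamine stress echocardiography (DSE) refines classification only
# in the high-risk subgroup. All names and thresholds are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    clinical_score: float                # assumed composite clinical risk score
    dse_abnormal: Optional[bool] = None  # DSE result, if the test was performed

def stepwise_risk(patient: Patient, score_cutoff: float = 10.0) -> str:
    """Classify risk: low risk by clinical score alone; otherwise stratify by DSE."""
    if patient.clinical_score < score_cutoff:
        return "low risk (no DSE required)"
    # High risk by clinical score: the DSE result refines the classification.
    if patient.dse_abnormal is None:
        return "high risk (DSE indicated)"
    return "high risk, abnormal DSE" if patient.dse_abnormal else "high risk, normal DSE"

print(stepwise_risk(Patient(clinical_score=6.0)))
print(stepwise_risk(Patient(clinical_score=14.0, dse_abnormal=True)))
```

The point of the ordering is the one made in the abstract: patients classified low risk by the clinical score gain little from DSE, so testing only the high-risk subgroup reduces the number of DSEs while preserving the prognostic value of an abnormal result.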
Abstract:
Aims: To evaluate the efficacy of a pathway-based quality improvement intervention on appropriate prescribing of the low molecular weight heparin, enoxaparin, in patients with varying risk categories of acute coronary syndrome (ACS). Methods: Rates of enoxaparin use, retrospectively evaluated before and after pathway implementation at an intervention hospital, were compared with those of concurrent control patients at a control hospital; both were community hospitals in south-east Queensland. The study population was a group of randomly selected patients (n = 439) admitted to study hospitals with a discharge diagnosis of chest pain, angina, or myocardial infarction, and stratified into high, intermediate, or low-risk ACS or non-cardiac chest pain: 146 intervention patients (September-November 2003), 147 historical controls (August-December 2001) at the intervention hospital, and 146 concurrent controls (September-November 2003) at the control hospital. Interventions were active implementation of a user-modified clinical pathway coupled with an iterative education programme for medical staff, versus passive distribution of a similar pathway without user modification or targeted education. Outcome measures were rates of appropriate enoxaparin use in high-risk ACS patients and rates of inappropriate use in intermediate- and low-risk patients. Results: Appropriate use of enoxaparin in high-risk ACS patients was above 90% in all patient groups. Inappropriate use of enoxaparin was significantly reduced as a result of pathway use in intermediate-risk (9% intervention patients vs 75% historical controls vs 45% concurrent controls) and low-risk patients (9% vs 62% vs 41%; P < 0.001 for all comparisons). Pathway use was associated with a 3.5-fold (95% CI: 1.3-9.1; P = 0.012) increase in appropriate use of enoxaparin across all patient groups. Conclusion: Active implementation of an acute chest pain pathway combined with continuous education reduced inappropriate use of enoxaparin in patients presenting with intermediate- or low-risk ACS.
Abstract:
Objective To assess the effect of glucose control on the rate of growth of fetuses in women with pregestational diabetes mellitus (Types 1 and 2). Methods All pregestational diabetic women booked at Mater Mothers’ Hospital, Brisbane, Australia, between 1 January 1994 and 31 December 2002, were included. Pregnancies with congenital fetal anomalies, multiple pregnancies, and pregnancies terminated prior to 20 weeks’ gestation were excluded. Dating scans were performed before 14 weeks’ gestation and serial scans were performed at 18, 24, 28, 32 and 36 weeks. Fetal parameters, including biparietal diameter, femur length and abdominal circumference, were recorded. The daily growth rates for biparietal diameter, femur length, and fetal abdominal area were calculated and compared with those in a low-risk (non-diabetic) population. The growth rates in fetuses of women with satisfactory diabetic control (HbA1c
Abstract:
This study evaluated the effectiveness of the Problem Solving For Life program as a universal approach to the prevention of adolescent depression. Short-term results indicated that participants with initially elevated depression scores (high risk) who received the intervention showed a significantly greater decrease in depressive symptoms and increase in life problem-solving scores from pre- to postintervention compared with a high-risk control group. Low-risk participants who received the intervention reported a small but significant decrease in depression scores over the intervention period, whereas the low-risk controls reported an increase in depression scores. The low-risk intervention group also reported a significantly greater increase in problem-solving scores over the intervention period compared with low-risk controls. These results were not maintained, however, at 12-month follow-up.
Abstract:
This study described the discharge destination, basic and instrumental activities of daily living (ADL), community reintegration, and generic health status of people after stroke, and explored whether sociodemographic and clinical characteristics were associated with these outcomes. Participants were 51 people with an initial stroke who were admitted to an acute hospital and discharged to the community. Admission and discharge data were obtained by chart review. Follow-up status was determined by telephone interview using the Modified Barthel Index, the Assessment of Living Skills and Resources, the Reintegration to Normal Living Index, and the Short-Form Health Survey (SF-36). At follow-up, 57% of participants were independent in basic ADL, 84% had a low risk of experiencing instrumental ADL difficulties, most had few concerns with community reintegration, and SF-36 physical functioning and vitality scores were lower than normative values. At follow-up, poorer basic ADL status at discharge was associated with poorer instrumental ADL and community reintegration status, and older participants had poorer instrumental ADL, community reintegration, and physical functioning. Occupational therapists need to consider these outcomes when planning inpatient and post-discharge intervention for people after stroke.
Abstract:
Objectives: To document and describe the effects of flammable liquid burns in children, and to identify the at-risk population in order to tailor a burns prevention programme. Design, patients and setting: Retrospective study with information obtained from the departmental database of children treated at the burns centre at The Royal Children's Hospital, Brisbane, between August 1997 and October 2002. Main outcome measures: Number and ages of children burned, risk factors contributing to the accident, injuries sustained, treatment required and long-term sequelae. Results: Fifty-nine children sustained flammable liquid burns (median age 10.5 years), with a clear preponderance of males (95%). The median total body surface area burned was 8% (range 0.5-70%). Twenty-seven (46%) of the patients required debridement and grafting. Hypertrophic scars occurred in 56% of the children and contractures in 14%, all of the latter requiring surgical release. Petrol was the causative liquid in the majority (83%) of cases. Conclusions: The study identified that the population most at risk of sustaining flammable liquid burns was young adolescent males. In the majority of cases these injuries were deemed preventable. (C) 2003 Elsevier Science Ltd and ISBI. All rights reserved.